Creating a helm chart in ICP

Recently I have been doing some performance testing with container clusters, using ICP (IBM Cloud Private) as my environment. I have recorded the steps below:

  1. Prepare a docker image that has perf installed.
    1. On a test VM, I created a docker container from the centos7 base image and installed the tools on top with “sudo yum install perf gawk -y”.
    2. Then I committed it as a new docker image.
    3. Finally, I exported the new docker image as a perf.tar file.
  2.  Transfer the perf.tar to the ICP master node and import it into local docker repo.
    1. sudo bx pr login -a https://<Your-ICP-Cluster>:8443 --skip-ssl-validation
    2. sudo docker login mycluster.icp:8500
    3. sudo docker import perf.tar ppc64le/perf:v1
    4. sudo docker tag ppc64le/perf:v1 mycluster.icp:8500/default/perfbench:v1
    5. sudo docker push mycluster.icp:8500/default/perfbench:v1
    6. sudo kubectl -n default edit image perfbench
          Modify scope to global so that it can be deployed to other namespaces.
  3. Now we will create a new helm chart
    1. sudo helm create perfbench
      1. modify values.yaml and templates/deployment.yaml to match your docker image properties.
    2. Package it into a helm chart
      1. Use sudo helm lint perfbench to check for any syntax errors
      2. sudo helm package perfbench
      3. And the result will be perfbench-0.1.0.tgz
    3. Load it into your k8s cluster
      1. sudo bx pr load-helm-chart --archive perfbench-0.1.0.tgz
  4. Now we can see perfbench in the ICP catalog UI
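As a reference for step 3.1, the values.yaml edits might look like the following (a sketch: the field names assume a chart scaffolded by helm create, and the repository and tag must match the image pushed earlier):

```yaml
# values.yaml (sketch) -- point the chart at the perfbench image pushed above
image:
  repository: mycluster.icp:8500/default/perfbench
  tag: v1
  pullPolicy: IfNotPresent
```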

Deploy RDO Pike on CentOS 7.4/ppc64le

  • System preparation

    $ sudo systemctl disable firewalld
    $ sudo systemctl stop firewalld
    $ sudo systemctl disable NetworkManager
    $ sudo systemctl stop NetworkManager
    $ sudo systemctl enable network
    $ sudo systemctl start network

  • Prepare the repo

    $ sudo yum install -y epel-release

And you need to add the following to your repo configuration:

[root@host-9-114-111-215 centos]# cat /etc/yum.repos.d/rdo.repo
name=RDO Pike
name=RDO Pike Dependency for PowerLinux
After that, follow the standard RDO installation practices.

$ sudo yum repolist

$ sudo yum update -y

$ sudo yum install -y openstack-packstack

  • RDO installation via Packstack

    $ packstack --allinone --default-password=<your passcode> --os-ceilometer-install=n --os-aodh-install=n --os-gnocchi-install=n




Run CentOS 7.2/ppc64le as a standard KVM host

By default, CentOS 7.x ships with KVM virtualization support in the kernel. However, a few user-space packages are missing.

Assuming you already have CentOS 7.2/ppc64le installed on a PowerLinux server, you first need to configure a new repo to add the missing user-space packages:

$ sudo vi /etc/yum.repos.d/virt.repo

name=CentOS/RHEL-$releasever - Virt

name=OpenStack Mitaka Dependency Repository
$ sudo yum repolist

$ sudo yum install gperftools-libs qemu-img-ev-2.3.0 qemu-kvm-common-ev qemu-kvm-ev -y

Then we need a small trick: symlink /usr/libexec/qemu-kvm to /usr/bin/qemu-system-ppc64, otherwise virsh will fail on certain commands.

$ sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-ppc64

Next, in case you have a VM image and a VM configuration XML dump that were created on PowerKVM 3.x, you need to manually modify the XML configuration:

<type arch='ppc64le' machine='pseries-rhel7.2.0'>hvm</type>
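For context, the element above lives in the <os> section of the domain XML, which after the edit might look like this (a sketch; the exact machine type string depends on the QEMU version installed above):

```xml
<os>
  <type arch='ppc64le' machine='pseries-rhel7.2.0'>hvm</type>
  <boot dev='hd'/>
</os>
```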

After completing the above steps, you should be able to use familiar CLIs such as virsh to manage your KVM virtualization environment the same way as on x86.

Run OpenStack Control Plane on PowerLinux


Assuming you have CentOS 7.2 or RHEL 7.2 installed on an IBM POWER8 or OpenPower server.

Repository Configuration

  • RHEL
    • Configure RHEL with the Red Hat OS repo and the Optional repo. You need a valid Red Hat subscription for this to work.
  • CentOS
    • The base repo is sufficient; no EPEL repo is needed.

In addition, the following repositories should be added:

name=OpenStack Mitaka Repository

The above is the standard RDO Mitaka repository for CentOS/RHEL. We will reuse the noarch python packages from this repository.

name=CentOS/RHEL-$releasever - Virt 

name=OpenStack Mitaka Dependency Repository

The above two repositories contain the dependency packages for ppc64le that enable KVM virtualization management as well as the other OpenStack services.


Follow the standard Packstack-based installation procedure from step 2.



Manage CentOS 7.2/ppc64le with OpenStack


Assuming you have a Power server with CentOS 7.2 installed, and you have already set up an RDO cloud controller with Packstack on CentOS/RHEL 7.x on x86.


1) Prepare the Power compute node

  • Modify the RDO repository file (/etc/yum.repos.d/rdo-release.repo)

Change baseurl to

We will reuse the RDO noarch packages on ppc64le.

  • Add the OpenStack dependencies repo for ppc64le

       # vi /etc/yum.repos.d/rdo-deps.repo

name=CentOS-$releasever - Virt

name=OpenStack Mitaka Dependency Repository

  • Update the system

# yum update -y

2) Make changes on RDO Controller on x86

  • Edit the Packstack answer file to add in the target Power node:

# List the servers on which to install the Compute service.
CONFIG_COMPUTE_HOSTS=…, <ip of the new Power host>

# Comma-separated list of servers to be excluded from the
# installation. This is helpful if you are running Packstack a second
# time with the same answer file and do not want Packstack to
# overwrite these servers' configurations. Leave empty if you do not
# need to exclude any servers.
EXCLUDE_SERVERS=<all the existing nodes in the cloud topology>

For example, if you have <controller> and <node1>, <node2> in your current cloud, and you now want to add a new Power node <node3> into this RDO topology, the packstack answer file should be changed as below:

CONFIG_COMPUTE_HOSTS=<controller>,<node1>,<node2>,<node3>
EXCLUDE_SERVERS=<controller>,<node1>,<node2>

So that when running Packstack, it will only install the compute service on <node3>.

  • Run packstack to start the deployment.

packstack --answer-file=<path-to-your-answerfile>

It will take about 10 minutes for the deployment to complete.

3) Verify that the Power host has been added into the existing RDO cloud

From Horizon dashboard, you can view the status of this newly added compute host.

Other scenarios

If your existing cloud is not an RDO cloud or was not deployed with Packstack, you will not be able to deploy the compute service to the Power host with Packstack. However, you can still follow the community OpenStack installation guide to install the compute service and Neutron networking agents on your Power host after finishing step 1).


How to fix the autoboot issue with PowerLinux

The Problem

I have configured petitboot to boot PowerKVM 3.1 installed on /dev/sda2, but even after rebooting, it stays on the petitboot menu. I have to manually select the boot entry to boot into PowerKVM.

The environment

System type: 8284-22A     FW830.00 (SV830_023)

Petitboot Logs

cat /var/log/petitboot/pb-discover.log
--- pb-discover ---
Detected platform type: powerpc
Running command:
exe:  nvram
argv: 'nvram' '--print-config' '--partition' 'common'
autoboot: enabled, 3 sec
network configuration:
  interface 6c:ae:8b:6a:74:14
  dns server
  boot device 070a7d69-b69d-4870-851f-3956ac94e41a
boot priority order:
    network: 0
        any: 1
  IPMI boot device 0x01 (persistent)
language: en_US.utf8

Root Cause

So the root cause is a boot configuration conflict: someone had set the machine to boot from the network via an IPMI command, and unfortunately, when I configured it to boot from /dev/sda2, petitboot did not give me a warning. (This has been fixed in FW840.)

Running the following IPMI command proves the above analysis:

Get System Boot Options - NetFn = Chassis (0x00), CMD = 0x09

Response Data
0x00: No override
0x04: Force PXE
0x08: Force boot from default Hard-drive
0x14: Force boot from default CD/DVD
0x18: Force boot into BIOS setup

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> raw 0x00 0x09 0x05 0x00 0x00

 01 05 c0 04 00 00 00

Here 0x04 indicates a boot override (Force PXE boot). That is why it won't auto-boot from /dev/sda2.
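The decoding above can be captured in a small helper (a sketch; the mapping simply follows the Get System Boot Options response table quoted earlier):

```shell
#!/bin/sh
# Decode the boot-device selector byte from an IPMI
# "Get System Boot Options" raw response, e.g. the "04" in
# "01 05 c0 04 00 00 00".
decode_bootdev() {
  case "$1" in
    00) echo "No override" ;;
    04) echo "Force PXE" ;;
    08) echo "Force boot from default Hard-drive" ;;
    14) echo "Force boot from default CD/DVD" ;;
    18) echo "Force boot into BIOS setup" ;;
    *)  echo "Unknown selector: $1" ;;
  esac
}

decode_bootdev 04   # prints "Force PXE"
```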

Or we can use the following IPMI command for simplicity:

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> chassis bootparam get 0x05
Boot parameter version: 1
Boot parameter 5 is valid/unlocked
Boot parameter data: c004000000
 Boot Flags :
 - Boot Flag Valid
 - Options apply to all future boots
 - BIOS PC Compatible (legacy) boot
 - Boot Device Selector : Force PXE
 - Console Redirection control : System Default
 - BIOS verbosity : Console redirection occurs per BIOS configuration setting (default)
 - BIOS Mux Control Override : BIOS uses recommended setting of the mux at the end of POST

The Fix

Set IPMI Boot Device to none:

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> chassis bootdev none

After that, rebooting PowerLinux server will boot into /dev/sda2 automatically.

Update firmware to FW840

If the firmware is updated to FW840, petitboot will give you a warning when you configure autoboot.

We can follow this link to update the PowerLinux firmware.



Deploy kubernetes on CentOS 7.x/ppc64le with Flannel

If we want to deploy a multi-node kubernetes cluster with flannel on CentOS 7.x on x86, we can follow the steps in this blog.

Below are the steps to add a CentOS 7.x/ppc64le as a kubernetes node.

Installation Steps

1) Configure yum repo for the required packages

sudo vi /etc/yum.repos.d/docker.repo

sudo vi /etc/yum.repos.d/docker-misc.repo

sudo vi /etc/yum.repos.d/at.repo
name=IBM Advanced Toolchain

2) Update the system and install ntp as well

sudo yum update -y
sudo yum install ntp -y
sudo systemctl start ntpd

3) Install docker

sudo yum install docker-io -y

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS -H fd://

sudo systemctl daemon-reload
sudo systemctl enable docker

4) Install flannel

sudo yum install advance-toolchain-at9.0-devel -y
sudo yum install flannel -y

sudo vi /etc/sysconfig/flanneld

sudo systemctl enable flanneld
sudo systemctl start flanneld
sudo systemctl start docker

Here we must install the advance-toolchain package explicitly; otherwise flanneld won't start.
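For reference, the /etc/sysconfig/flanneld edit typically sets the etcd endpoint and key (a sketch; the variable names below match the flannel 0.5.x packaging of that era, and <master-ip> is a placeholder for the node running etcd):

```
# etcd URL - point this at the etcd used by your kubernetes master
FLANNEL_ETCD="http://<master-ip>:2379"
# etcd key under which the flannel network configuration is stored
FLANNEL_ETCD_KEY="/atomic.io/network"
```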

5) Install kubernetes node

sudo yum install kubernetes-node -y
sudo vi /etc/kubernetes/config

sudo vi /etc/kubernetes/kubelet

# You may leave this blank to use the actual hostname

# location of the api-server
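Putting the surviving comments together, the /etc/kubernetes/kubelet edit might look like the following (a sketch; <master-ip> is a placeholder, and the --api-servers style flags assume the pre-1.x kubernetes packaging this guide appears to use):

```
# /etc/kubernetes/kubelet (sketch)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME=""
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://<master-ip>:8080"
KUBELET_ARGS=""
```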

6) Start and enable all the services.

for SERVICES in kube-proxy kubelet; do
    sudo systemctl restart $SERVICES
    sudo systemctl enable $SERVICES
done
Verify that kubernetes node is correctly installed on CentOS 7.x/ppc64le

From your kubernetes master node, run

kubectl get nodes

And check that your CentOS 7.x/ppc64le host is listed in Ready status.

Sample output:

[centos@mengxd-master ~]$ kubectl get nodes
NAME           STATUS     AGE
mengxd-node1   Ready      13d
mengxd-node2   Ready      43s
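The same check can be scripted; the snippet below feeds the sample output above through an awk filter that fails if any node is not Ready (a sketch; on a live cluster you would pipe kubectl get nodes into the same filter):

```shell
# Fails (non-zero exit) if any node's STATUS column is not "Ready".
sample='NAME           STATUS     AGE
mengxd-node1   Ready      13d
mengxd-node2   Ready      43s'

echo "$sample" | awk 'NR > 1 && $2 != "Ready" { bad++ } END { exit (bad ? 1 : 0) }' \
  && echo "all nodes Ready"
```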




Creating centos/ppc64le cloud image


CentOS 7.2/ppc64le has been released for months, but there are no official CentOS cloud images for Power users.

Although there are many posts on the Internet about how to create a cloud image, there are still a few caveats to pay attention to.

Below are the steps I took to create a centos 7.2 cloud image for ppc64le.


1) Prepare a Power server with KVM virtualization enabled and Internet access.

This can be an IBM PowerLinux server with PowerKVM, or another KVM host such as Ubuntu KVM (OpenPower servers with Ubuntu/Fedora KVM also work).

2) Download CentOS/ppc64le ISO image to /data/isos folder on your Power server.

A net-install ISO image is not used, because a net install makes it difficult to customize the root disk partition layout.

3) Prepare a 5GB qcow2 disk file

# qemu-img create -f qcow2 /tmp/centos7.qcow2 5G

CentOS VM Installation

1) Start the installation

# virt-install --virt-type kvm --name centos7 --ram 1024 \
  --disk /tmp/centos7.qcow2,format=qcow2 \
  --network network=default \
  --graphics vnc,listen= --noautoconsole \
  --os-type=linux --os-variant=rhel7 \
  --cdrom /data/isos/<your-centos7-ppc64le-iso>
After the installation process has started, use a VNC client to finish the following installation steps. If installing from a PowerKVM host, we can use the Kimchi web interface to get a VNC console to the VM.


2) Configure the disk partition layout manually


Choose “Standard Partition” scheme and create two partitions.

First, create an 8MB PReP boot partition


Then create the 2nd partition as “/”. (Leaving the capacity field blank will use all remaining space on the disk.)


And the resulting partition layout is as below


Press “Done” to confirm the partition layout and ignore the warnings. Accept the changes to write the partition layout to disk.

3) Turn on Ethernet.



4) Step through the installation with default settings for other configuration items.

During the installation, create a “centos” user as admin.


Ignore the “failed to write boot loader configuration” warning and continue with the installation:


5) Reboot the VM once installation completes, and then use the “virsh dumpxml <vm-name>” command to determine the right CDROM device name.

# virsh dumpxml centos7

    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

Then run the following commands to eject the installation media by attaching an empty CDROM:

# virsh attach-disk --type cdrom --mode readonly centos7 "" sdb
# virsh reboot centos7

Run the first command twice if you see an error like this:

internal error: unable to execute QEMU command 'eject': Device 'drive-scsi0-0-0-1' is locked

Post-installation customization

1) Log in to the CentOS VM and update the system with “# yum update -y”

2) Configure the EPEL repo

# yum install -y

3) Install the required packages for a cloud image

# yum install cloud-init cloud-utils cloud-utils-growpart -y

4) Modify /etc/cloud/cloud.cfg for a “standard” centos cloud image.

system_info:
  default_user:
    name: centos
    lock_passwd: true
    gecos: Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel

5) Disable zeroconf

# echo "NOZEROCONF=yes" >> /etc/sysconfig/network

6) Configure console

Edit the /etc/default/grub file and configure the GRUB_CMDLINE_LINUX option:

GRUB_CMDLINE_LINUX="crashkernel=auto console=tty0 console=ttyS0,115200n8"

Save the changes and regenerate the grub configuration:

# grub2-mkconfig -o /boot/grub2/grub.cfg

7) Disable firewall service by default

# systemctl stop firewalld
# systemctl disable firewalld

8) Shutdown VM instance

# shutdown -h now

9) Remove the MAC address (run this from your KVM host OS)

# virt-sysprep -d centos7

10) Compress the cloud image

# virt-sparsify --compress /tmp/centos7.qcow2 <output-image.qcow2>

Finally, the cloud image is ready. You can now upload it to your OpenStack cloud.