Install LBaaSv2 with RDO Pike

Install the packages

yum install openstack-neutron-lbaas openstack-neutron-lbaas-ui haproxy -y

Add the LBaaS v2 service plug-in to the service_plugins configuration directive in /etc/neutron/neutron.conf. The plug-in list is comma-separated:

service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
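
For example, if router is the only plug-in already listed (an assumption; keep whatever your deployment currently has):

service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2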

Add the LBaaS v2 service provider to the service_provider configuration directive within the [service_providers] section in /etc/neutron/neutron_lbaas.conf:

LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
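
In context, the [service_providers] section would look like this (a sketch; any other providers you already use keep their own service_provider lines):

[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default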

Select the driver that manages virtual interfaces in /etc/neutron/lbaas_agent.ini

[DEFAULT]
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver = INTERFACE_DRIVER
[haproxy]
user_group = haproxy

Replace INTERFACE_DRIVER with the interface driver that the layer-2 agent in your environment uses. For example, openvswitch for Open vSwitch or linuxbridge for Linux bridge (neutron.agent.linux.interface.BridgeInterfaceDriver).
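
For example, with the Open vSwitch agent the line would be (the class path below is the standard OVS interface driver; double-check it against your Neutron release):

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver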

Run the neutron-lbaas database migration:

neutron-db-manage --subproject neutron-lbaas upgrade head

In /etc/neutron/services_lbaas.conf, add the following to the [haproxy] section:

user_group = haproxy

Comment out any other device driver entries.

systemctl enable neutron-lbaasv2-agent.service
systemctl start neutron-lbaasv2-agent.service
systemctl status neutron-lbaasv2-agent.service
systemctl restart neutron-server.service
systemctl status neutron-server.service
neutron agent-list

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type           | host      | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+
| 2f89da1c-9633-47b5-936c-3c195b4c278a | Linux bridge agent   | pkvmci858 |                   | :-)   | True           | neutron-linuxbridge-agent |
| bf37205b-c304-4e87-b375-da953b06853b | Metadata agent       | pkvmci858 |                   | :-)   | True           | neutron-metadata-agent    |
| c4ecb0ca-03e1-404b-8307-b5479a802736 | L3 agent             | pkvmci858 | nova              | :-)   | True           | neutron-l3-agent          |
| dda9b8f4-bbb7-49b8-a536-fd8f356057fd | Loadbalancerv2 agent | pkvmci858 |                   | :-)   | True           | neutron-lbaasv2-agent     |
| f6f9b5d9-b9b0-462d-acac-484743619ebf | DHCP agent           | pkvmci858 | nova              | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+

Modify OpenStack Dashboard to enable LBaaSv2

vi /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_fip_topology_check': False,
    'enable_lb': True,
}

#Restart HTTPD and Memcached to activate the LBaaSv2 support
systemctl restart httpd memcached

Example usage

neutron lbaas-loadbalancer-create --name test-lb selfservice
#"selfservice" is the subnet name.
neutron lbaas-loadbalancer-show test-lb
#In my example, it shows that 172.16.1.10 has been used as the VIP.
neutron security-group-create lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol tcp \
  --port-range-min 80 \
  --port-range-max 80 \
  --remote-ip-prefix 0.0.0.0/0 \
  lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol tcp \
  --port-range-min 443 \
  --port-range-max 443 \
  --remote-ip-prefix 0.0.0.0/0 \
  lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol icmp \
  lbaas
neutron port-update \
  --security-group lbaas \
  9f8f8a75-a731-4a34-b622-864907e1d556
#Here 9f8f8a75-a731-4a34-b622-864907e1d556 is vip_port_id from lbaas-loadbalancer-show
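
If you prefer not to copy the ID by hand, the vip_port_id can be read straight from the load balancer details, e.g.:

neutron lbaas-loadbalancer-show test-lb | grep vip_port_id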

Adding an HTTP listener

neutron lbaas-listener-create \
  --name test-lb-http \
  --loadbalancer test-lb \
  --protocol HTTP \
  --protocol-port 80
neutron lbaas-pool-create \
  --name test-lb-pool-http \
  --lb-algorithm ROUND_ROBIN \
  --listener test-lb-http \
  --protocol HTTP
#Create two VMs, each running an HTTP service on port 80.
#Assume the two VMs have 172.16.1.4 and 172.16.1.10 as their IPs.
neutron lbaas-member-create \
  --name test-lb-http-member-1 \
  --subnet selfservice \
  --address 172.16.1.4 \
  --protocol-port 80 \
  test-lb-pool-http
neutron lbaas-member-create \
  --name test-lb-http-member-2 \
  --subnet selfservice \
  --address 172.16.1.10 \
  --protocol-port 80 \
  test-lb-pool-http

Now you should be able to access the HTTP service via the VIP. (You can experiment by stopping the member VMs and observing the behavior.)
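
For example, from any host that can reach the self-service subnet (substitute the VIP address reported by lbaas-loadbalancer-show; with ROUND_ROBIN, repeated requests should alternate between the two members):

curl http://<VIP>/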

If you run "sudo ps aux | grep haproxy", you will see a process like the one below:

nobody   58292  0.0  0.0  13120  5248 ?        Ss   05:45   0:00 haproxy -f /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf -p /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.pid -sf 53978

And /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf contains the haproxy configuration for the above HTTP load balancer:

cat /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf

# Configuration for test-lb
global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    stats socket /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy_stats.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend c0e5371e-c304-4cf6-af56-532b7a7ae212
    option tcplog
    option forwardfor
    bind 172.16.1.8:80
    mode http
    default_backend 1e0b0659-dcb5-415a-9115-501aa7b545fa

backend 1e0b0659-dcb5-415a-9115-501aa7b545fa
    mode http
    balance roundrobin
    server bec52ec4-b09c-4610-8476-b773bc22a007 172.16.1.10:80 weight 1
    server 5d0b9602-012d-4407-ad37-d155c4f9ecbf 172.16.1.4:80 weight 1

You can add a health monitor so that unresponsive servers are removed from the pool:

neutron lbaas-healthmonitor-create \
  --name test-lb-http-monitor \
  --delay 5 \
  --max-retries 2 \
  --timeout 10 \
  --type HTTP \
  --pool test-lb-pool-http

Notice how the haproxy configuration file changes after applying the health monitor:

cat /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf

# Configuration for test-lb
global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    stats socket /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy_stats.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend c0e5371e-c304-4cf6-af56-532b7a7ae212
    option tcplog
    option forwardfor
    bind 172.16.1.8:80
    mode http
    default_backend 1e0b0659-dcb5-415a-9115-501aa7b545fa

backend 1e0b0659-dcb5-415a-9115-501aa7b545fa
    mode http
    balance roundrobin
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    server bec52ec4-b09c-4610-8476-b773bc22a007 172.16.1.10:80 weight 1 check inter 5s fall 2
    server 5d0b9602-012d-4407-ad37-d155c4f9ecbf 172.16.1.4:80 weight 1 check inter 5s fall 2

How to remove a neutron network

[root@ci858 ~]# neutron net-delete selfservice

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

Unable to complete operation on network d3a504ea-700a-4d95-b57d-b626d7c1678f. There are one or more ports still in use on the network.

Neutron server returns request_ids: ['req-eac5dbeb-5775-4b51-9603-67f4b650c879']


[root@ci858 ~]# neutron subnet-delete selfservice

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

Unable to complete operation on subnet d94e928b-e959-495b-8a9d-aa5babd91015: One or more ports have an IP allocation from this subnet.

Neutron server returns request_ids: ['req-8b394cb9-1efa-43ad-a06c-f37da1ae4d65']
[root@ci858 ~]# neutron router-delete router

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

Router 71a2319c-d82d-4ceb-aac3-6d42c3ff77cd still has ports

Neutron server returns request_ids: ['req-f6c37d9a-bb8a-4f68-a658-2b3ba528ed03']

We cannot simply remove a network from Horizon or the CLI while it still has dependent resources. Below is the correct sequence we have to follow:

#Make sure no VMs are connected to this network first!
#You can run neutron port-list / neutron port-delete to remove
#the attached ports first.

neutron router-gateway-clear <Router>        #check the router name with neutron router-list
neutron router-interface-delete <Router> <SubNet>
neutron router-delete <Router>
neutron subnet-delete <SubNet>
neutron net-delete <Net>

Docker configuration on CentOS

Configure log-driver

  • sudo vi /etc/sysconfig/docker
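
For example, the log driver can be set through the OPTIONS variable in that file (journald here is only an illustrative choice; pick whatever driver you need):

OPTIONS='--log-driver=journald'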

Configure the current user to run docker without sudo

  • sudo groupadd docker
    sudo usermod -aG docker $USER
    sudo systemctl enable docker.service
    sudo systemctl restart docker.service
    Then exit the current shell and log back in as the same user (a quick check follows below).
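
After logging back in, docker commands should work without sudo, e.g.:

docker info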

Related configuration files

  • /etc/docker/daemon.json
    /etc/sysconfig/docker
    /etc/systemd/system/multi-user.target.wants/docker.service
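
For reference, a minimal /etc/docker/daemon.json that sets the log driver and log rotation (the values are only illustrative; adjust to your environment):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}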
    

Creating a helm chart in ICP

Recently I have been doing some performance tests with container clusters, using ICP 2.1.0.3 as my environment. I have recorded the steps below:

  1. Prepare a docker image that has perf installed.
    1. On a test VM, I created a docker container from the centos7 base image and installed the tools on top with "sudo yum install perf gawk -y".
    2. Then I committed it as a new docker image.
    3. Finally, I exported this new docker image as a perf.tar file.
  2. Transfer perf.tar to the ICP master node and import it into the local docker repo.
    1. sudo bx pr login -a https://<Your-ICP-Cluster>:8443 --skip-ssl-validation
    2. sudo docker login mycluster.icp:8500
    3. sudo docker import perf.tar ppc64le/perf:v1
    4. sudo docker tag ppc64le/perf:v1 mycluster.icp:8500/default/perfbench:v1
    5. sudo docker push mycluster.icp:8500/default/perfbench:v1
    6. sudo kubectl -n default edit image perfbench
       Modify the scope to global so that it can be deployed to other namespaces.
  3. Now we will create a new helm chart.
    1. sudo helm create perfbench
      1. Modify values.yaml and templates/deployment.yaml to match your docker image properties (see the values.yaml sketch after this list).
    2. Package it into a helm chart.
      1. Use sudo helm lint perfbench to check for syntax errors.
      2. sudo helm package perfbench
      3. The result will be perfbench-0.1.0.tgz.
    3. Load it into your k8s cluster.
      1. sudo bx pr load-helm-chart --archive perfbench-0.1.0.tgz
  4. Now we will see perfbench in the ICP catalog UI.
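
For step 3, the main edits in values.yaml are the image coordinates. A sketch based on the default scaffold that helm create generates (field names may differ if you have changed the templates):

# values.yaml (illustrative snippet)
image:
  repository: mycluster.icp:8500/default/perfbench
  tag: v1
  pullPolicy: IfNotPresent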

Deploy RDO Pike on CentOS 7.4/ppc64le

  • System preparation

  • $ sudo systemctl disable firewalld
    $ sudo systemctl stop firewalld
    $ sudo systemctl disable NetworkManager
    $ sudo systemctl stop NetworkManager
    $ sudo systemctl enable network
    $ sudo systemctl start network

    Prepare the repo

    $ sudo yum install -y epel-release

And you need to add the following repo definitions:

[root@host-9-114-111-215 centos]# cat /etc/yum.repos.d/rdo.repo

[rdo-pike]
name=RDO Pike
baseurl=http://mirror.centos.org/centos/7/cloud/x86_64/openstack-pike
enabled=1
gpgcheck=0

[rdo-pike-dependency]
name=RDO Pike Dependency for PowerLinux
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_4/openstack/pike/
enabled=1
gpgcheck=0

After that, follow the standard RDO installation practices.

$ sudo yum repolist
$ sudo yum update -y
$ sudo yum install -y openstack-packstack

  • RDO installation via Packstack

    $ packstack --allinone --default-password=<your passcode> --os-ceilometer-install=n --os-aodh-install=n --os-gnocchi-install=n
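
Once Packstack finishes, a quick sanity check (a sketch; Packstack normally writes the admin credentials to a keystonerc_admin file in the home directory of the user that ran it):

$ source ~/keystonerc_admin
$ openstack service list
$ openstack compute service list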

Run CentOS 7.2/ppc64le as a standard KVM host

By default, CentOS 7.x has KVM virtualization support in the kernel. However, a few user-space packages are missing.

Assuming you already have CentOS 7.2/ppc64le installed on a PowerLinux server, you first need to configure a new repo to add the missing user-space packages:

$ sudo vi /etc/yum.repos.d/virt.repo

[virt]
name=CentOS/RHEL-$releasever - Virt
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/virt/
gpgcheck=0
enabled=1

[openstack-mitaka-dependency]
name=OpenStack Mitaka Dependency Repository
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/openstack/mitaka/
gpgcheck=0
enabled=1

$ sudo yum repolist

$ sudo yum install gperf-tools-lib qemu-img-ev-2.3.0 qemu-kvm-common-ev qemu-kvm-ev -y

Then we need to play a trick by creating a symlink so that /usr/bin/qemu-system-ppc64 points to /usr/libexec/qemu-kvm; otherwise virsh will fail on certain commands.

$ sudo ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-ppc64

Next, if you have a VM image and a VM configuration XML dump that were created on PowerKVM 3.x, you need to manually modify the XML configuration:

<type arch='ppc64le' machine='pseries-rhel7.2.0'>hvm</type>

After completing the above steps, you should be able to use familiar CLIs such as virsh to manage your KVM virtualization environment the same way as on x86.
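
A quick way to confirm the user-space pieces are in place (a sketch; it assumes libvirt is installed and libvirtd is running):

$ virsh version
$ virsh list --all
$ /usr/bin/qemu-system-ppc64 --version    # resolves to /usr/libexec/qemu-kvm via the symlink created above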

Run OpenStack Control Plane on PowerLinux

Prerequisites

Assuming you have CentOS 7.2 or RHEL 7.2 installed on an IBM POWER8 or OpenPOWER server.

Repository Configuration

  • RHEL
    • Configure RHEL with the Red Hat OS repo and the Optional repo. You need a valid Red Hat subscription for this to work.
  • CentOS
    • The base repo is sufficient; no EPEL repo is needed.

In addition, the following repositories should be added:

[openstack-mitaka]
name=OpenStack Mitaka Repository
baseurl=http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/
gpgcheck=0
enabled=1

The above is the standard RDO Mitaka repository for CentOS/RHEL. We will reuse the noarch Python packages from this repository.

[virt]
name=CentOS/RHEL-$releasever - Virt 
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/virt/
gpgcheck=0
enabled=1

[openstack-mitaka-dependency]
name=OpenStack Mitaka Dependency Repository
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/openstack/mitaka/
gpgcheck=0
enabled=1

The above two repositories contain the ppc64le dependency packages that enable KVM virtualization management as well as the other OpenStack services.

Installation

Follow the standard Packstack-based installation procedure from step 2.

Manage CentOS 7.2/ppc64le with OpenStack

Prerequisites

Assuming you have a Power server with CentOS 7.2 installed, and you have already set up an RDO cloud controller with Packstack on CentOS/RHEL 7.x on x86.

Steps

1) Prepare the Power compute node

  • Modify the RDO repository file ( /etc/yum.repos.d/rdo-release.repo )

Change baseurl to

http://mirror.centos.org/centos/7/cloud/x86_64/openstack-mitaka/

We will reuse the RDO no-arch packages on ppc64le.

  • Add the OpenStack dependencies repo for ppc64le

       #vi /etc/yum.repos.d/rdo-deps.repo

[virt]
name=CentOS-$releasever - Virt
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/virt/
gpgcheck=0
enabled=1

[openstack-mitaka-dependency]
name=OpenStack Mitaka Dependency Repository
baseurl=ftp://ftp.unicamp.br/pub/ppc64el/centos/7_2/openstack/mitaka/
gpgcheck=0
enabled=1

  • Update the system

#yum update -y

2) Make changes on RDO Controller on x86

  • Edit the Packstack answer file to add in the target Power node:

# List the servers on which to install the Compute service.
CONFIG_COMPUTE_HOSTS=…, <ip of the new Power host>

# Comma-separated list of servers to be excluded from the
# installation. This is helpful if you are running Packstack a second
# time with the same answer file and do not want Packstack to
# overwrite these server’s configurations. Leave empty if you do not
# need to exclude any servers.
EXCLUDE_SERVERS=<all the existing nodes in the cloud topology>

For example, suppose your current cloud has <controller>, <node1>, and <node2>, and you want to add a new Power node (<node3>) to this RDO topology. The Packstack answer file should then be changed as below:

CONFIG_COMPUTE_HOSTS=<node1>,<node2>,<node3>

EXCLUDE_SERVERS=<controller>,<node1>,<node2>

This way, when Packstack runs, it will only install the compute service on <node3>.

  • Run packstack to start the deployment.

packstack --answer-file=<path-to-your-answerfile>

It will take about 10 minutes for the deployment to complete.

3) Verify that the Power host has been added into the existing RDO cloud

From Horizon dashboard, you can view the status of this newly added compute host.

Other scenarios

If your existing cloud is not an RDO cloud or was not deployed with Packstack, you will not be able to deploy the compute service to the Power host with Packstack. However, you can still follow the community OpenStack installation guide to install the compute service and Neutron networking agents on your Power host after finishing step 1).

How to fix the autoboot issue with PowerLinux

The Problem

I configured petitboot to boot PowerKVM 3.1 installed on /dev/sda2, but even after rebooting, the system stays on the petitboot menu. I have to manually select the boot entry to boot into PowerKVM.

The environment

System type: 8284-22A     FW830.00 (SV830_023)

Petitboot Logs

cat /var/log/petitboot/pb-discover.log
--- pb-discover ---
Detected platform type: powerpc
Running command:
exe:  nvram
argv: 'nvram' '--print-config' '--partition' 'common'
configuration:
autoboot: enabled, 3 sec
network configuration:
  interface 6c:ae:8b:6a:74:14
   static:
    ip:  9.114.219.134/22
    gw:  9.114.219.254
  dns server 9.114.219.1
  boot device 070a7d69-b69d-4870-851f-3956ac94e41a
boot priority order:
    network: 0
        any: 1
  IPMI boot device 0x01 (persistent)
language: en_US.utf8
...

Root Cause

So the root cause is a boot configuration conflict: someone had set the system to boot from the network via an IPMI command, and unfortunately, when I configured it to boot from /dev/sda2, petitboot did not give me a warning. (This has been fixed in FW840.)

Running the following IPMI command proves the above analysis:

Get System Boot Options - NetFn = Chassis (0x00h), CMD = 0x09h

Response Data
0x00: No override
0x04: Force PXE
0x08: Force boot from default Hard-drive
0x14: Force boot from default CD/DVD
0x18: Force boot into BIOS setup

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> raw 0x00 0x09 0x05 0x00 0x00

 01 05 c0 04 00 00 00

Here 0x04 indicates a boot override (Force PXE boot). That is why it won't auto-boot to /dev/sda2.

Or we can use the following IPMI command for simplicity:

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> chassis bootparam get 0x05
Boot parameter version: 1
Boot parameter 5 is valid/unlocked
Boot parameter data: c004000000
 Boot Flags :
 - Boot Flag Valid
 - Options apply to all future boots
 - BIOS PC Compatible (legacy) boot
 - Boot Device Selector : Force PXE
 - Console Redirection control : System Default
 - BIOS verbosity : Console redirection occurs per BIOS configuration setting (default)
 - BIOS Mux Control Override : BIOS uses recommended setting of the mux at the end of POST

The Fix

Set IPMI Boot Device to none:

# ipmitool -I lanplus -H <fsp_ip> -P <passwd> chassis bootdev none

After that, rebooting PowerLinux server will boot into /dev/sda2 automatically.

Update firmware to FW840

If the firmware is updated to FW840, petitboot will give you a warning when you configure autoboot.

We can follow this link to update PowerLinux FW.

Reference

https://computercheese.blogspot.com/2013/04/ipmi-chassis-device-commands.html?view=sidebar

Deploy kubernetes on CentOS 7.x/ppc64le with Flannel

If we want to deploy multi-node kubernetes on CentOS 7.x with flannel on x86, we can follow the steps in this blog [1].

Below are the steps to add a CentOS 7.x/ppc64le host as a kubernetes node.

Installation Steps

1) Configure yum repo for the required packages

sudo vi /etc/yum.repos.d/docker.repo
[docker]
name=Docker
baseurl=http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/docker-ppc64el/
enabled=1
gpgcheck=0

sudo vi /etc/yum.repos.d/docker-misc.repo
[docker-misc]
name=Docker
baseurl=http://ftp.unicamp.br/pub/ppc64el/rhel/7_1/misc_ppc64el/
enabled=1
gpgcheck=0

sudo vi /etc/yum.repos.d/at.repo
[at]
name=IBM Advanced Toolchain
baseurl=ftp://ftp.unicamp.br/pub/linuxpatch/toolchain/at/redhat/RHEL7/
enabled=1
gpgcheck=0

2) Update the system and install ntpd as well

sudo yum update -y
sudo yum install ntp -y
sudo systemctl start ntpd

3) Install docker

sudo yum install docker-io -y

sudo mkdir -p /etc/systemd/system/docker.service.d
sudo vi /etc/systemd/system/docker.service.d/override.conf
[Service] 
ExecStart= 
ExecStart=/usr/bin/docker daemon --storage-driver=overlay $DOCKER_NETWORK_OPTIONS -H fd://

sudo systemctl daemon-reload
sudo systemctl enable docker

4) Install flannel

sudo yum install advance-toolchain-at9.0-devel -y
sudo yum install flannel -y

sudo vi /etc/sysconfig/flanneld
 FLANNEL_ETCD="http://<k8s-master>:2379"

sudo systemctl enable flanneld
sudo systemctl start flanneld
sudo systemctl start docker

Here we must install advance-toolchain explicitly, otherwise flanneld won’t start.
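
Note that flanneld also expects its network configuration to already exist in etcd on the master (normally done when setting up the x86 nodes per the referenced guide [1]). A sketch of that step, run on the master, assuming the default etcd prefix /atomic.io/network and an example overlay subnet:

etcdctl set /atomic.io/network/config '{ "Network": "10.254.0.0/16" }'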

5) Install kubernetes node

sudo yum install kubernetes-node -y
sudo vi /etc/kubernetes/config
 KUBE_MASTER="--master=http://<k8s-master>:8080"

sudo vi /etc/kubernetes/kubelet

# You may leave this blank to use the actual hostname
 KUBELET_HOSTNAME=""

# location of the api-server
 KUBELET_API_SERVERS="--api_servers=http://<k8s-master>:8080"

6) Start and enable all the services.

for SERVICES in kube-proxy kubelet; do
    sudo systemctl restart $SERVICES
    sudo systemctl enable $SERVICES
done

Verify that the kubernetes node is correctly installed on CentOS 7.x/ppc64le

From your kubernetes master node, run

kubectl get nodes

And check that your CentOS 7.x/ppc64le host is listed with Ready status.

Sample output:

[centos@mengxd-master ~]$ kubectl get nodes
NAME           STATUS     AGE
mengxd-node1   Ready      13d
mengxd-node2   Ready      43s

References

[1] http://sudhaker.com/41/multi-node-kubernetes-on-centos-7-x-with-flannel