Install LBaaSv2 with RDO Pike

Install the packages

yum install openstack-neutron-lbaas openstack-neutron-lbaas-ui haproxy -y

Add the LBaaS v2 service plug-in to the service_plugins configuration directive in /etc/neutron/neutron.conf. The plug-in list is comma-separated:

service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
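For example, if the existing list already contained only the router plug-in (shown here purely for illustration; keep whatever plug-ins are already configured in your file), the directive would look like:

[DEFAULT]
service_plugins = router,neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2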

Add the LBaaS v2 service provider to the service_provider configuration directive within the [service_providers] section in /etc/neutron/neutron_lbaas.conf:

LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
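In other words, the complete section reads:

[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default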

Select the driver that manages virtual interfaces in /etc/neutron/lbaas_agent.ini

[DEFAULT]
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver = INTERFACE_DRIVER
[haproxy]
user_group = haproxy

Replace INTERFACE_DRIVER with the interface driver that the layer-2 agent in your environment uses. For example, openvswitch for Open vSwitch or linuxbridge for Linux bridge (neutron.agent.linux.interface.BridgeInterfaceDriver).
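For example, since the agent list further below shows a Linux bridge agent, the completed file would look like the following (adjust the interface driver if your environment uses Open vSwitch instead):

[DEFAULT]
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver = linuxbridge
[haproxy]
user_group = haproxy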

Run the neutron-lbaas database migration:

neutron-db-manage --subproject neutron-lbaas upgrade head

In /etc/neutron/services_lbaas.conf, add the following to the [haproxy] section:

user_group = haproxy

Comment out any other device driver entries.

Enable and start the LBaaS v2 agent, restart neutron-server, and then verify that the agents are up:

systemctl enable neutron-lbaasv2-agent.service
systemctl start neutron-lbaasv2-agent.service
systemctl status neutron-lbaasv2-agent.service
systemctl restart neutron-server.service
systemctl status neutron-server.service
neutron agent-list

neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.

+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type           | host      | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+
| 2f89da1c-9633-47b5-936c-3c195b4c278a | Linux bridge agent   | pkvmci858 |                   | :-)   | True           | neutron-linuxbridge-agent |
| bf37205b-c304-4e87-b375-da953b06853b | Metadata agent       | pkvmci858 |                   | :-)   | True           | neutron-metadata-agent    |
| c4ecb0ca-03e1-404b-8307-b5479a802736 | L3 agent             | pkvmci858 | nova              | :-)   | True           | neutron-l3-agent          |
| dda9b8f4-bbb7-49b8-a536-fd8f356057fd | Loadbalancerv2 agent | pkvmci858 |                   | :-)   | True           | neutron-lbaasv2-agent     |
| f6f9b5d9-b9b0-462d-acac-484743619ebf | DHCP agent           | pkvmci858 | nova              | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+----------------------+-----------+-------------------+-------+----------------+---------------------------+

Modify OpenStack Dashboard to enable LBaaSv2

vi /etc/openstack-dashboard/local_settings

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_fip_topology_check': False,
    'enable_lb': True,
}

#Restart HTTPD and Memcached to activate the LBaaSv2 support
systemctl restart httpd memcached

Example usage

neutron lbaas-loadbalancer-create --name test-lb selfservice
#"selfservice" is the subnet name.
neutron lbaas-loadbalancer-show test-lb
#In my example, it shows 172.16.1.8 has been assigned as the VIP.
neutron security-group-create lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol tcp \
  --port-range-min 80 \
  --port-range-max 80 \
  --remote-ip-prefix 0.0.0.0/0 \
  lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol tcp \
  --port-range-min 443 \
  --port-range-max 443 \
  --remote-ip-prefix 0.0.0.0/0 \
  lbaas
neutron security-group-rule-create \
  --direction ingress \
  --protocol icmp \
  lbaas
neutron port-update \
  --security-group lbaas \
  9f8f8a75-a731-4a34-b622-864907e1d556
#Here 9f8f8a75-a731-4a34-b622-864907e1d556 is vip_port_id from lbaas-loadbalancer-show

Adding an HTTP listener

neutron lbaas-listener-create \
  --name test-lb-http \
  --loadbalancer test-lb \
  --protocol HTTP \
  --protocol-port 80
neutron lbaas-pool-create \
  --name test-lb-pool-http \
  --lb-algorithm ROUND_ROBIN \
  --listener test-lb-http \
  --protocol HTTP
#Create two VMs each running a HTTP service on port 80.
#Assuming the two VMs having 172.16.1.4 and 172.16.1.10 as VM IP.
neutron lbaas-member-create \
  --name test-lb-http-member-1 \
  --subnet selfservice \
  --address 172.16.1.4 \
  --protocol-port 80 \
  test-lb-pool-http
neutron lbaas-member-create \
  --name test-lb-http-member-2 \
  --subnet selfservice \
  --address 172.16.1.10 \
  --protocol-port 80 \
  test-lb-pool-http

Now you should be able to access the HTTP service through the VIP. (You can play around by stopping the member VMs to examine the behavior.)
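For a quick check from a host that can reach the VIP (172.16.1.8 in this example, taken from the haproxy configuration shown below), repeating a request should alternate between the two members:

curl http://172.16.1.8/
curl http://172.16.1.8/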

If you run “sudo ps aux | grep haproxy”, you will notice a process like the one below:

nobody   58292  0.0  0.0  13120  5248 ?        Ss   05:45   0:00 haproxy -f /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf -p /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.pid -sf 53978

And /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf contains the haproxy configuration for the above HTTP loadbalancer:

cat /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf

# Configuration for test-lb
global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    stats socket /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy_stats.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend c0e5371e-c304-4cf6-af56-532b7a7ae212
    option tcplog
    option forwardfor
    bind 172.16.1.8:80
    mode http
    default_backend 1e0b0659-dcb5-415a-9115-501aa7b545fa

backend 1e0b0659-dcb5-415a-9115-501aa7b545fa
    mode http
    balance roundrobin
    server bec52ec4-b09c-4610-8476-b773bc22a007 172.16.1.10:80 weight 1
    server 5d0b9602-012d-4407-ad37-d155c4f9ecbf 172.16.1.4:80 weight 1

You can add a health monitor so that unresponsive servers are removed from the pool:

neutron lbaas-healthmonitor-create \
  --name test-lb-http-monitor \
  --delay 5 \
  --max-retries 2 \
  --timeout 10 \
  --type HTTP \
  --pool test-lb-pool-http

Notice how the haproxy configuration file changes after the health monitor is applied:

cat /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy.conf

# Configuration for test-lb
global
    daemon
    user nobody
    group haproxy
    log /dev/log local0
    log /dev/log local1 notice
    maxconn 2000
    stats socket /var/lib/neutron/lbaas/v2/62c54dc1-f32e-4775-9a33-e606cc51163f/haproxy_stats.sock mode 0666 level user

defaults
    log global
    retries 3
    option redispatch
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend c0e5371e-c304-4cf6-af56-532b7a7ae212
    option tcplog
    option forwardfor
    bind 172.16.1.8:80
    mode http
    default_backend 1e0b0659-dcb5-415a-9115-501aa7b545fa

backend 1e0b0659-dcb5-415a-9115-501aa7b545fa
    mode http
    balance roundrobin
    timeout check 10s
    option httpchk GET /
    http-check expect rstatus 200
    server bec52ec4-b09c-4610-8476-b773bc22a007 172.16.1.10:80 weight 1 check inter 5s fall 2
    server 5d0b9602-012d-4407-ad37-d155c4f9ecbf 172.16.1.4:80 weight 1 check inter 5s fall 2
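To see the health monitor in action, stop the web server on one member and poll the VIP in a loop; a simple sketch (assuming the VIP 172.16.1.8 from the haproxy configuration above) is:

while true; do curl -s http://172.16.1.8/; sleep 1; done

After a couple of failed checks, all responses should come from the remaining healthy member.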

How to remove a neutron network

[root@ci858 ~]# neutron net-delete selfservice
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unable to complete operation on network d3a504ea-700a-4d95-b57d-b626d7c1678f. There are one or more ports still in use on the network.
Neutron server returns request_ids: ['req-eac5dbeb-5775-4b51-9603-67f4b650c879']

[root@ci858 ~]# neutron subnet-delete selfservice
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Unable to complete operation on subnet d94e928b-e959-495b-8a9d-aa5babd91015: One or more ports have an IP allocation from this subnet.
Neutron server returns request_ids: ['req-8b394cb9-1efa-43ad-a06c-f37da1ae4d65']

[root@ci858 ~]# neutron router-delete router
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Router 71a2319c-d82d-4ceb-aac3-6d42c3ff77cd still has ports
Neutron server returns request_ids: ['req-f6c37d9a-bb8a-4f68-a658-2b3ba528ed03']

A network with attached ports cannot simply be removed from Horizon or the CLI. Below is the correct sequence to follow:

#Make sure no VMs are connected to this network first!!!
#You can run neutron port-list/port-delete to delete
#the attached ports first.

neutron router-gateway-clear <Router>     #check the router name with neutron router-list
neutron router-interface-delete <Router> <SubNet>
neutron router-delete <Router>
neutron subnet-delete <SubNet>
neutron net-delete <Net>
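Applied to the names used in this post (router "router", subnet and network both named "selfservice"), the sequence would be:

neutron router-gateway-clear router
neutron router-interface-delete router selfservice
neutron router-delete router
neutron subnet-delete selfservice
neutron net-delete selfservice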

Docker configuration on CentOS

Configure log-driver

  • sudo vi /etc/sysconfig/docker
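For example, to switch the log driver to json-file with rotation, the OPTIONS line could look like the following. This is a minimal sketch assuming the CentOS-packaged docker, whose service unit reads OPTIONS from this file; keep any flags already present in your file:

OPTIONS='--log-driver=json-file --log-opt max-size=10m --log-opt max-file=3'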

Configure current user to run docker without sudo

  • sudo groupadd docker
    sudo usermod -aG docker $USER
    sudo systemctl enable docker.service
    sudo systemctl restart docker.service
    Then exit the current shell and log in again as that user
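After logging back in, docker commands should work without sudo, e.g.:

docker ps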

Related configuration files

  • /etc/docker/daemon.json
    /etc/sysconfig/docker
    /etc/systemd/system/multi-user.target.wants/docker.service
    

Creating a helm chart in ICP

Recently I have been doing some performance testing with container clusters, using ICP 2.1.0.3 as my environment. I have recorded the steps below:

  1. Prepare a docker image that has perf installed.
    1. On a test VM I created a container from the centos7 base image and installed perf with “sudo yum install perf gawk -y” on top.
    2. Then I committed it as a new docker image.
    3. Finally I exported this new docker image as a perf.tar file.
  2. Transfer the perf.tar to the ICP master node and import it into the local docker repo.
    1. sudo bx pr login -a https://<Your-ICP-Cluster>:8443 --skip-ssl-validation
    2. sudo docker login mycluster.icp:8500
    3. sudo docker import perf.tar ppc64le/perf:v1
    4. sudo docker tag ppc64le/perf:v1 mycluster.icp:8500/default/perfbench:v1
    5. sudo docker push mycluster.icp:8500/default/perfbench:v1
    6. sudo kubectl -n default edit image perfbench
       Modify the scope to global so that it can be deployed to other namespaces.
  3. Now we will create a new helm chart.
    1. sudo helm create perfbench
      1. Modify values.yaml and templates/deployment.yaml to match your docker image properties (see the snippet after this list).
    2. Package it into a helm chart.
      1. Use sudo helm lint perfbench to check for syntax errors.
      2. sudo helm package perfbench
      3. The result will be perfbench-0.1.0.tgz.
    3. Load it into your k8s cluster.
      1. sudo bx pr load-helm-chart --archive perfbench-0.1.0.tgz
  4. Now we will see perfbench in the ICP catalog UI.
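As a reference for the values.yaml edit in step 3.1, the image section would point at the image pushed earlier. This is a minimal sketch based on the default layout generated by helm create; field names may differ slightly depending on your helm version:

image:
  repository: mycluster.icp:8500/default/perfbench
  tag: v1
  pullPolicy: IfNotPresent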