This post covers using openstack-ansible to deploy OpenStack in LXC containers in a two-node configuration with two NICs: one NIC for management, API, and VM-to-VM traffic, the other for external network access. This is just for testing and messing around with deploying OpenStack in containers. One advantage of deploying with Ansible and containers is the easier upgrade path it provides; I’ll show a simple example of an in-place upgrade from Icehouse to Juno by running just a few playbooks.

### 2 Nodes

infra1: Lenovo ThinkServer TS140, Xeon E3-1225 v3 3.2 GHz, 8GB ECC RAM, 2 x 1Gb NICs

compute1: Dell PowerEdge T110 II, Xeon E3-1230 v2 3.3 GHz, 32GB ECC RAM, 2 x 1Gb NICs

Template out /etc/network/interfaces and /etc/network/interfaces.d/openstack-interfaces.cfg on each node.
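
Here’s a minimal sketch of what that can look like, assuming em1 is the management NIC and p4p1 is the external NIC; the bridge names are the rpc_deploy defaults and the addresses are placeholders for your own management subnet:

```
# /etc/network/interfaces.d/openstack-interfaces.cfg (sketch; addresses are placeholders)

# Management bridge: host/container management and API traffic
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_fd 0
    bridge_ports em1
    address 172.29.236.100
    netmask 255.255.252.0

# VXLAN bridge: VM-to-VM tunnel traffic (endpoint IP only, no physical
# port in this flat two-NIC setup)
auto br-vxlan
iface br-vxlan inet static
    bridge_stp off
    bridge_fd 0
    bridge_ports none
    address 172.29.240.100
    netmask 255.255.252.0

# External bridge: floating IP traffic in and out
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_fd 0
    bridge_ports p4p1
```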

### Icehouse Install

We’ll be deploying from the infra1 node. You could also use a dedicated ‘deployment host’, but for simplicity I deploy from infra1. First we grab the version of openstack-ansible that will deploy Icehouse.
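
Something like the following, assuming the Icehouse code lives on an icehouse branch of the stackforge repo (verify the exact branch or tag with git branch -r):

```bash
# On infra1: clone os-ansible-deployment and check out the Icehouse code
git clone https://github.com/stackforge/os-ansible-deployment.git /opt/os-ansible-deployment
cd /opt/os-ansible-deployment
git checkout icehouse

# Copy the example deployment configs into place
cp -a etc/rpc_deploy /etc/rpc_deploy
```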

Edit your /etc/rpc_deploy/user_variables.yml and /etc/rpc_deploy/rpc_user_config.yml files, which describe the environment. The following script can be used to populate user_variables.yml with generated service passwords.
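
A minimal invocation, assuming the generator ships at scripts/pw-token-gen.py in your checkout:

```bash
# Fill every *_password/*_token entry in user_variables.yml with random values
cd /opt/os-ansible-deployment
scripts/pw-token-gen.py --file /etc/rpc_deploy/user_variables.yml
```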

You can see in the rpc_user_config.yml file how you could add nodes for a larger deployment. This method of deploying OpenStack is designed for large deployments with multiple compute, cinder, and swift nodes. For example, you can add more compute hosts like the sketch below.
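
A sketch of what that looks like in rpc_user_config.yml; the IPs are placeholders for each host’s management address:

```yaml
# /etc/rpc_deploy/rpc_user_config.yml (sketch)
compute_hosts:
  compute1:
    ip: 172.29.236.101
  compute2:
    ip: 172.29.236.102
  compute3:
    ip: 172.29.236.103
```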

Next, template out /etc/rpc_deploy/rpc_user_config.yml and /etc/rpc_deploy/user_variables.yml. In addition to the OpenStack service settings and passwords, the user_variables.yml file can be used to integrate two services (Glance, Monitoring as a Service) with your public Rackspace cloud account. This is completely optional; you can still use filesystem/NFS/NetApp for the Glance backend and you don’t have to install MaaS, but it gives you some additional options. For example, we can use Cloud Files, which is based on OpenStack Swift, as the Glance backend in our private cloud. We can also use the included MaaS playbooks to hook into the (free) Cloud Monitoring service provided by Rackspace to monitor all of our hosts/containers/API services using the Cloud Monitoring API. The following values need to be set in user_variables.yml if you want to integrate the two services with your public cloud account. Change them to your own public cloud account details, and choose which data center and container to store your Glance images in Cloud Files.
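
A sketch of the relevant user_variables.yml entries; the variable names follow the v9 rpc_deploy template as I remember it (double-check against the template in your checkout), and every value is a placeholder:

```yaml
# user_variables.yml (sketch; all values are placeholders)
rackspace_cloud_auth_url: https://identity.api.rackspacecloud.com/v2.0
rackspace_cloud_tenant_id: SomeTenantID
rackspace_cloud_username: SomeUserName
rackspace_cloud_password: SomePassword
rackspace_cloud_api_key: SomeAPIKey

# Glance backed by Cloud Files (Swift); pick your data center and container
glance_default_store: swift
glance_swift_store_region: DFW
glance_swift_store_container: glance_images
```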

Now comes running the playbooks so Ansible can deploy OpenStack. These are the playbooks, in the order to run them (an invocation sketch follows the list). Since we’re not using a physical load balancer like an F5, we add the HAProxy playbook, which will use our infra1 node as the load balancer for the environment.

- playbooks/setup/host-setup.yml
- playbooks/infrastructure/haproxy-install.yml
- playbooks/infrastructure/infrastructure-setup.yml
- playbooks/openstack/openstack-setup.yml
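
Each run looks roughly like this; in the v9 tree the playbooks live under rpc_deployment and user_variables.yml is passed as extra vars (verify the exact invocation against the install docs):

```bash
cd /opt/os-ansible-deployment/rpc_deployment
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/setup/host-setup.yml
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/haproxy-install.yml
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/infrastructure-setup.yml
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/openstack/openstack-setup.yml
```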

Hopefully each of these completes with zero items unreachable or failed. If you do receive a failure, I usually rerun the playbook at least once and use -vvv to get more verbose output.

Verify the infrastructure playbook ran successfully. At this point you should be able to hit your Kibana interface at https://192.168.1.100:8443. The user is ‘kibana’ and the password can be found in ‘kibana_password’ in user_variables.yml.

[Image: Kibana dashboard (kibana_icehouse)]

Now we run the main OpenStack playbooks; these can take some time to complete.

[Image: Horizon dashboard (horizon)]

OpenStack should be installed!

### Monitoring as a Service

This is optional as far as deploying OpenStack is concerned, but it’s a really nice way to monitor your hosts, containers, and services (for free). Just set up your public Rackspace cloud account details in user_variables.yml and then run these playbooks. The playbooks will install the monitoring agent and set up the checks for all hosts/containers/services, tying them to the notification plan you create from https://intelligence.rackspace.com. Click Notify and set up your notification plan, then grab the notification ID, usually something like ‘nphpgsP4DM’, which you can see in the URL. Enter that in the maas_notification_plan section in user_variables.yml. The only other thing you need to do is create an entity using the hostname of each server: from https://intelligence.rackspace.com just click Create Entity. The MaaS playbook and the Cloud Monitoring API will automatically use the entities as long as they match your servers’ hostnames.
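
In user_variables.yml that’s a single line (the ID below is the example from above; use your own):

```yaml
# user_variables.yml: tie the MaaS checks to your notification plan
maas_notification_plan: nphpgsP4DM
```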

[Image: Cloud Monitoring checks (maas_1)]

[Image: Cloud Monitoring checks (maas_3)]

Did I mention Cloud Monitoring is free? :D

### OpenStack Networking

Note: since we’re not using VLANs, I know the naming can be a bit confusing. I need to look into changing the naming, since I believe rpc_user_config.yml allows that. Traffic to a VM’s floating IP enters the infra/network node on p4p1/br-vlan, and the compute node accepts the traffic on em1/br-vxlan, so it’s not truly segmented traffic. Fine for testing, but something I need to look into changing since there are two NICs on my servers.

The utility container is a helpful container with tools like the OpenStack clients installed. When interacting with the deployment you probably want to attach to the utility container and perform actions there. I’d also recommend setting up an alias that quickly connects to the utility container.
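
For example, something along these lines from the infra1 host; the container name suffix is generated at build time, so match on the name rather than hard-coding it:

```bash
# List the containers and attach to the utility container
lxc-ls | grep utility
lxc-attach -n infra1_utility_container-XXXXXXXX   # your suffix will differ

# Convenience alias: attach to the first utility container found
alias utility='lxc-attach -n "$(lxc-ls | grep utility | head -n 1)"'
```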

Here’s what I did to set up two networks: one private, for instance-to-instance traffic, and one external, for floating IPs. The install should have put a file with our credentials at /root/openrc, which we need to source to talk to our OpenStack installation.
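
A sketch of the commands from the utility container; the network names, CIDRs, and the provider physical_network label are placeholders you would match to your own environment:

```bash
# Load credentials
source /root/openrc

# Private network for instance-to-instance traffic
neutron net-create private
neutron subnet-create --name private-subnet private 192.168.100.0/24

# External network for floating IPs (flat network over br-vlan)
neutron net-create external --router:external=True \
    --provider:network_type flat --provider:physical_network extnet
neutron subnet-create --name external-subnet --disable-dhcp \
    --allocation-pool start=192.168.1.201,end=192.168.1.240 \
    --gateway 192.168.1.1 external 192.168.1.0/24

# Router tying the two together
neutron router-create router1
neutron router-gateway-set router1 external
neutron router-interface-add router1 private-subnet
```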

Now you can add a floating IP to your tenant and assign it to an instance. You can do that from Horizon or from the command line in the utility container. Use neutron floatingip-create, floatingip-list, and port-list (or just use Horizon) to get the info you need; neutron floatingip-associate takes a FLOATINGIP_ID and a PORT_ID.
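
For example:

```bash
# Allocate a floating IP from the external network
neutron floatingip-create external

# Find the port ID matching your instance's fixed IP
neutron port-list

# Associate the floating IP with that port, then confirm
neutron floatingip-associate FLOATINGIP_ID PORT_ID
neutron floatingip-list
```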

### In-place upgrade to Juno

One of the main advantages of deploying with Ansible and inside LXC containers is the ability to perform in-place upgrades. This is an extremely simplified example, of course, with only two nodes, and doesn’t necessarily reflect the upgrade complications of larger deployments. It’s just a little demo, but still pretty impressive given the past issues with upgrading OpenStack.

First, let’s verify what code base we’re on.
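
One way to check, since nova-compute runs directly on the compute host in this layout:

```bash
# On compute1: report the installed Nova version
nova-manage version
# 2014.1.3
```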

We can verify the version against https://wiki.openstack.org/wiki/Releases: ‘2014.1.3’ corresponds to the Icehouse release from Oct 2, 2014.

Now all we have to do is download the openstack-ansible Juno bits and rerun the same playbooks. To check the instances’ reaction to the upgrade, I’ll ping to and from a running instance during the upgrade. We can also watch the API response times from our Cloud Monitoring MaaS.
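
Grabbing the Juno code is the same dance as the Icehouse checkout (branch name assumed again):

```bash
cd /opt/os-ansible-deployment
git fetch origin
git checkout juno
```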

As for the config files, the only change I made was removing maas_repo_version from user_variables.yml and deleting the alarms for the RabbitMQ checks from https://intelligence.rackspace.com. We also need to copy the new rpc_environment.yml file into /etc/rpc_deploy/, for example: # cp /opt/os-ansible-deployment/etc/rpc_deploy/rpc_environment.yml /etc/rpc_deploy/. Additional options and documentation can be found in the upgrade docs linked at the end of this post.

With the Juno openstack-ansible ready, we run the same playbooks as the Icehouse install, in the same order (see the loop below).
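
Which can be compacted into a loop, with the same caveat as before about verifying the exact invocation:

```bash
cd /opt/os-ansible-deployment/rpc_deployment
for pb in setup/host-setup infrastructure/haproxy-install \
          infrastructure/infrastructure-setup openstack/openstack-setup; do
    ansible-playbook -e @/etc/rpc_deploy/user_variables.yml "playbooks/${pb}.yml"
done
```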

Our monitoring picked up some spikes in API response time, but the only container that triggered an alarm was Horizon, and it recovered on its own.

infra1 node CPU and RAM visuals during the upgrade.

[Image: maas_juno_infra1]

Keystone API during upgrade.

[Image: maas_juno_keystone]

Neutron API during upgrade.

[Image: maas_juno_neutron]

Nova API during upgrade.

[Image: maas_juno_nova]

Pinging the running instance from my laptop during the upgrade.

Ping out from the instance to google.com during the upgrade.
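
After the playbooks finish, the same version check as before shows the new code:

```bash
# On compute1 again
nova-manage version
# 2014.2.1
```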

And as you can see, we’re now running ‘2014.2.1’, which is Juno (Dec 5, 2014) code.

That should be it. This was a very simple example of deploying and upgrading OpenStack in LXC containers using openstack-ansible, all done by editing a couple of config files and running some playbooks. Pretty awesome!

- Icehouse docs: http://docs.rackspace.com/rpc/api/v9/bk-rpc-installation/content/rpc-common-front.html
- Juno docs: http://docs.rackspace.com/rpc/api/v10/bk-rpc-installation/content/rpc-common-front.html
- Upgrade docs: http://docs.rackspace.com/rpc/api/v10/bk-rpc-v10-releasenotes/content/rpc-common-front.html
- openstack-ansible: https://launchpad.net/openstack-ansible and https://github.com/stackforge/os-ansible-deployment
- Rackspace Private Cloud powered by OpenStack: http://www.rackspace.com/cloud/private/openstack