January 10, 2015 · openstack-ansible openstack

Two node RPC v9 installation

Rackspace Private Cloud powered by OpenStack was recently re-architected to be much more flexible and reliable in RPC v9 (Icehouse) and the soon-to-be-released RPC v10 (Juno). It actually deploys OpenStack in LXC containers on your hosts. At first, you might think this adds a layer of complexity to an already complex process, but I've found it actually provides a tremendous amount of flexibility and an easier upgrade path for your OpenStack installation. With this deployment method you should only have to edit two Ansible configuration files, so the process is not all that difficult and makes installing OpenStack simpler.

The reference architecture for RPC v9 is designed to be more robust and scalable than previous versions, with three infrastructure/controller nodes plus redundant physical firewalls and load balancers. I don't have that kind of gear in my closet to test on, and I don't want to use VLANs to segment this traffic, so this guide will cover how to use Rackspace Private Cloud software on a two node (1 infrastructure node, 1 compute node) installation with 1 NIC.

My gear

Infrastructure node:

Lenovo ThinkServer TS140
Xeon E3-1225v3 3.2 GHz
8GB ECC RAM
2 NICs (only using 1 for OpenStack)

Compute node:

Dell PowerEdge T110 II
Xeon E3-1230v2 3.3 GHz
32 GB ECC RAM
2 NICs (only using 1 for OpenStack)

Network: 192.168.1.0/24; we will be creating VXLAN interfaces and bridges so our nodes can talk to each other.

Deployment Hosts

You can have a dedicated server just for Ansible and the configuration files that are used to deploy OpenStack to your nodes, but I want to keep this as simple as possible so we will be combining our infrastructure and deployment node.

Target Hosts

Target hosts are your physical infrastructure, compute, cinder, swift, etc. hosts. OpenStack and infrastructure services will be installed inside containers on these hosts. Let's prepare our infrastructure and compute nodes.

Infrastructure/Deployment node

OS: Ubuntu Server 14.04 LTS
NIC: 192.168.1.50

root@infra1:~# apt-get update; apt-get upgrade -y
root@infra1:~# apt-get install -y aptitude build-essential git ntp ntpdate \
openssh-server python-dev sudo bridge-utils debootstrap ifenslave lsof lvm2 \
tcpdump vlan

Add the following line to your /etc/network/interfaces file: source /etc/network/interfaces.d/*.cfg. Then create the OpenStack interfaces file:

root@infra1:~# vim /etc/network/interfaces.d/openstack-interfaces.cfg

Copy the contents from this gist; it contains the interfaces and bridges to set up. Feel free to change dev em1 on the vxlan- interfaces to match your physical NIC device name. I'd recommend leaving the IPs as they are, since the RPC config file is set up to use the 172.29 networks.

Give the server a reboot and confirm the bridges come up with their IPs.

IP address for em1:        192.168.1.50
IP address for br-mgmt:    172.29.236.10
IP address for br-vxlan:   172.29.240.10
IP address for br-storage: 172.29.244.10
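
If you can't reach the gist, here's a rough sketch of what one interface/bridge pair in it looks like, using the br-mgmt pair as the example. The VXLAN id and multicast group shown here are illustrative values, not necessarily the gist's; the br-vxlan and br-storage stanzas follow the same pattern with their own addresses.

```
# Illustrative sketch of one VXLAN interface + bridge pair
# (id and multicast group are example values)
auto vxlan-mgmt
iface vxlan-mgmt inet manual
    pre-up ip link add vxlan-mgmt type vxlan id 236 group 239.0.0.236 dev em1 || true
    up ip link set vxlan-mgmt up
    down ip link set vxlan-mgmt down
    post-down ip link del vxlan-mgmt || true

auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_ports vxlan-mgmt
    address 172.29.236.10
    netmask 255.255.252.0
```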

Compute node

OS: Ubuntu Server 14.04 LTS
NIC: 192.168.1.55

Since this is just a target host, we install a few fewer packages.

root@compute1:~# apt-get update; apt-get upgrade -y
root@compute1:~# apt-get install -y bridge-utils debootstrap ifenslave lsof \
lvm2 ntp ntpdate openssh-server sudo tcpdump vlan

Follow the same network setup as above, but change the br-mgmt address to 172.29.236.11, br-vxlan to 172.29.240.11, and br-storage to 172.29.244.11. Here's the compute node gist if needed.
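
Since the compute node's config only differs in the final octet, a quick sed pass over the infra node's file gets you most of the way there. This is just a hypothetical shortcut, shown here on the three address lines rather than a real file:

```shell
# Bump the final octet of each bridge address from .10 to .11
# (in practice, run this over a copy of the infra node's interfaces file)
printf 'address 172.29.236.10\naddress 172.29.240.10\naddress 172.29.244.10\n' \
  | sed 's/\.10$/.11/'
```

Double-check the result before rebooting; the sed expression only touches lines ending in .10.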

Reboot the compute node.

IP address for em1:        192.168.1.55
IP address for br-mgmt:    172.29.236.11
IP address for br-vxlan:   172.29.240.11
IP address for br-storage: 172.29.244.11

To confirm your nodes can talk over the VXLAN/br-mgmt 172.29 networks, log in to your infrastructure node and ping 172.29.236.11, which should be the compute node. From the compute node, ping 172.29.236.10.
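
A quick loop covers all three bridge networks at once; this sketch runs from the infra node against the compute node's addresses (run the mirror-image check from the compute node with the .10 addresses):

```shell
# Ping each of the compute node's bridge IPs once from the infra node
for ip in 172.29.236.11 172.29.240.11 172.29.244.11; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"
  else
    echo "$ip UNREACHABLE"
  fi
done
```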

Installation

From your infrastructure/deployment node, create an SSH key and copy the public key to the /root/.ssh/authorized_keys file on all your target hosts. Confirm you can log in to the nodes as root from the infrastructure/deployment node.

Now we need to grab the RPC installation playbooks and configs. I used the RPC v10 Juno branch in this setup; since v10 is still in development, you can also use the better-tested icehouse branch.

root@infra1:~# cd /opt
root@infra1:/opt# git clone -b juno https://github.com/stackforge/os-ansible-deployment.git
root@infra1:/opt# curl -O https://bootstrap.pypa.io/get-pip.py
root@infra1:/opt# python get-pip.py
root@infra1:/opt# pip install -r /opt/os-ansible-deployment/requirements.txt
root@infra1:/opt# cp -R /opt/os-ansible-deployment/etc/rpc_deploy /etc

That last command copied the main configuration directory, rpc_deploy, to /etc. Going forward, when you want to edit your rpc_user_variables.yml and user_variables.yml files, edit them in /etc/rpc_deploy/.

/etc/rpc_deploy/user_variables.yml is the file containing OpenStack and infrastructure service usernames, passwords, and other variables you can set. There's a handy Python script that will auto-generate these passwords for you.

root@infra1:~# cd /opt/os-ansible-deployment/scripts
root@infra1:/opt/os-ansible-deployment/scripts# ./pw-token-gen.py --file /etc/rpc_deploy/user_variables.yml

If your compute node supports KVM, I would recommend adding nova_virt_type: kvm in user_variables.yml.
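
Before setting that, you can check whether the compute node's CPU actually exposes hardware virtualization. This is a standard check, not specific to RPC:

```shell
# Count the hardware-virtualization CPU flags
# (vmx = Intel VT-x, svm = AMD-V). A count above 0 means KVM will work;
# 0 means nova would have to fall back to plain qemu.
grep -c -E 'vmx|svm' /proc/cpuinfo || echo "no hardware virt flags found"
```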

/etc/rpc_deploy/rpc_user_variables.yml is the config file where you define your OpenStack networks and IPs, and which physical hosts will receive which OpenStack service. Here's a gist of my rpc_user_variables.yml for reference. Since we won't be using a physical load balancer, we add an HAProxy host. You'll notice infra1 refers to my infrastructure host's br-mgmt address and compute1 to my compute node's br-mgmt address, and I'm actually using my compute node for cinder too. Again, flexibility. I'm also reusing the infra1 node as the log, network, and haproxy host, since we don't have dedicated servers for those tasks. Be sure to set external_lb_vip_address to the externally accessible IP, 192.168.1.50 in this setup.
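
For the impatient, here's an abbreviated sketch of the host sections described above, following the layout of the stock example file in the repo (see the gist for the complete file, including the network definitions):

```yaml
# Abbreviated sketch: which hosts get which services in this two node setup
infra_hosts:
  infra1:
    ip: 172.29.236.10
compute_hosts:
  compute1:
    ip: 172.29.236.11
storage_hosts:
  compute1:          # compute node doubles as the cinder host
    ip: 172.29.236.11
log_hosts:
  infra1:
    ip: 172.29.236.10
network_hosts:
  infra1:
    ip: 172.29.236.10
haproxy_hosts:
  infra1:
    ip: 172.29.236.10
global_overrides:
  internal_lb_vip_address: 172.29.236.10
  external_lb_vip_address: 192.168.1.50
```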

Now we install. These playbooks can take a while depending on your systems; the OpenStack playbooks take the longest. Forks is set to 15 in /opt/os-ansible-deployment/rpc_deployment/ansible.cfg; bump that up to 25 or higher if you prefer. Change directory to /opt/os-ansible-deployment/rpc_deployment. First we run the host-setup.yml playbook, followed by haproxy-install.yml, infrastructure-setup.yml, and finally the openstack-setup.yml playbook.

root@infra1:/opt/os-ansible-deployment/rpc_deployment# ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
 playbooks/setup/host-setup.yml

Hopefully each of these completes with zero items unreachable or failed. Now we run the HAProxy playbook and so on.

root@infra1:/opt/os-ansible-deployment/rpc_deployment# ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
 playbooks/infrastructure/haproxy-install.yml

root@infra1:/opt/os-ansible-deployment/rpc_deployment# ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
 playbooks/infrastructure/infrastructure-setup.yml

Verify the infrastructure playbook ran successfully. At this point you should be able to hit your Kibana interface at https://192.168.1.50:8443. Isn't that an awesome Kibana dashboard?

[Kibana dashboard screenshot]

Now we run the main OpenStack playbook, which usually takes the longest.

root@infra1:/opt/os-ansible-deployment/rpc_deployment# ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
 playbooks/openstack/openstack-setup.yml

Verify OpenStack operation. That's it! You should have a two node RPC installation. The Horizon dashboard should be reachable at https://192.168.1.50/. You can also attach to the utility container and use the OpenStack clients to configure your Private Cloud.

root@infra1:~# lxc-ls | grep util
infra1_utility_container-9465f12e
root@infra1:~# lxc-attach -n infra1_utility_container-9465f12e
root@infra1_utility_container-9465f12e:~#
root@infra1_utility_container-9465f12e:~# . openrc
root@infra1_utility_container-9465f12e:~# nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                        |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------+
| ffa57e22-4335-449d-91bd-9397702fe1a8 | test1 | ACTIVE | -          | Running     | private=10.0.0.3, 192.168.1.221 |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------+

I did run into a few random failures when running the playbooks; rerunning them usually fixed the issue, so before filing a bug, I'd recommend rerunning the playbook. There were also times I just wanted to start over and blow away any containers already created. To clear out old containers and data I did the following on each node (I'm not 100% sure this is the proper way to do this).

# First, remove the container IPs from /etc/hosts
for i in $(lxc-ls); do lxc-stop -n "$i"; done
for i in $(lxc-ls); do lxc-destroy -n "$i"; done
rm -rf /openstack
rm /etc/rpc_deploy/rpc_inventory.json
rm /etc/rpc_deploy/rpc_hostnames_ips.yml
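
Since rerunning a playbook is often all it takes, the rerun habit can be wrapped in a small helper. This is purely an illustrative sketch (the function name is my own, not part of the tooling):

```shell
# Rerun a command a few times before giving up, since transient
# playbook failures often clear on a second run.
retry_playbook() {
  tries=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$tries" ]; then
      echo "still failing after $tries attempts" >&2
      return 1
    fi
    echo "attempt $n failed, retrying..." >&2
    n=$((n + 1))
  done
}

# Usage (from /opt/os-ansible-deployment/rpc_deployment):
# retry_playbook 3 ansible-playbook -e @/etc/rpc_deploy/user_variables.yml \
#   playbooks/openstack/openstack-setup.yml
```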

The OpenStack APIs should be listening on your infra1 external IP, 192.168.1.50 in this example, so just point your clients to that IP and the required URL/port or just attach to the utility container.

Swift support should have been added to the playbooks recently, but I haven't tested it out yet. This two-node setup isn't going to be the best for performance (everything goes through 1 NIC), and I haven't tested networking on instances to see how well it performs with the VXLAN and bridge interfaces. There's probably a better way to set up the interfaces; if you find one, let me know. I still need to see if I can get floating IPs working and figure out how to set up the routers for the provider network. Things I tested quickly: spinning up a few CirrOS instances, making sure they can talk, and creating a Cinder volume. I've started writing an Ansible playbook to automate the initial setup of the target hosts, and I hope to post that soon.

Moving OpenStack to LXC containers is a significant move for RPC, one that I believe opens up a great amount of flexibility. I look forward to learning and posting more about this method of installing OpenStack.
