Building on my previous post on installing Rackspace Private Cloud in a two-node configuration, this update changes the networking slightly so that external network access and floating IPs work with the deployment. My previous post ran all of the VXLAN interfaces over a single physical NIC, which didn’t allow external/floating IPs to work, or at least I couldn’t get it to work. Since I have two physical NICs on these servers, this guide uses both: em1 carries br-mgmt, br-vxlan and br-storage via VXLAN interfaces (I don’t want to use VLANs), and p4p1 backs br-vlan, which has direct access to my external network, 192.168.1.0/24.

Keep in mind:

  • Official Rackspace Private Cloud install guide can be found here
  • My guide just goes over some of that process, using openstack-ansible and only 2 physical nodes
  • To keep this minimal configuration free of VLANs, it uses VXLAN interfaces
  • This is just for testing and for learning how to deploy with openstack-ansible and operate OpenStack with LXC containers

##Networking

/etc/network/interfaces - gist

/etc/network/interfaces.d/openstack-interfaces.cfg - gist

Just to document everything that actually worked for me, I’m including both files, interfaces and openstack-interfaces.cfg, but obviously edit as needed. I set up both of these on my infrastructure and compute physical nodes. The VXLAN interfaces use em1; br-vlan uses p4p1. Change the br-mgmt, br-vxlan and br-storage IPs for each of your nodes. Notice that p4p1 and br-vlan don’t have IPs.

infra1:

compute1:
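The gists above have the full files for each node. As a rough sketch of the pattern (the VXLAN IDs, multicast groups and IPs below are just example values I made up, so substitute your own), an openstack-interfaces.cfg looks something like this:

```
# VXLAN interface for container management traffic, carried over em1
auto vxlan-mgmt
iface vxlan-mgmt inet manual
    pre-up ip link add vxlan-mgmt type vxlan id 10 group 239.0.0.10 dev em1 || true
    up ip link set vxlan-mgmt up
    down ip link set vxlan-mgmt down
    post-down ip link del vxlan-mgmt || true

# Container management bridge (give it this node's br-mgmt IP)
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports vxlan-mgmt
    address 172.29.236.11
    netmask 255.255.252.0

# Repeat the vxlan-*/bridge pair for br-vxlan (tunnel) and br-storage,
# each with its own VXLAN ID, multicast group and IP

# Provider bridge for external/floating IP traffic -- plugged straight
# into p4p1 and intentionally left without an IP
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    bridge_ports p4p1
```

You’ll need bridge-utils installed for the bridge_* options to do anything.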

##Installing

See my previous post on setting up the target hosts with the required packages and dependencies. Now we need to grab the RPC installation playbooks and configs. I used the RPC v10 Juno branch in this setup; v10 is still in development, so you can also use the better-tested Icehouse branch.
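Roughly, that looks like the following (the repo URL, branch name and example-config path here are my assumptions, so check the official install guide for the canonical location, and swap in the icehouse branch if you’d rather use it):

```
# Clone the deployment repo into /opt (URL and branch assumed -- verify first)
git clone -b juno https://github.com/stackforge/os-ansible-deployment /opt/os-ansible-deployment

# Copy the example deployment configs into /etc/rpc_deploy
# (the source path may differ depending on the branch)
cp -R /opt/os-ansible-deployment/etc/rpc_deploy /etc/rpc_deploy
```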

/etc/rpc_deploy/user_variables.yml is the file containing OpenStack and infrastructure service usernames, passwords and other variables you can set. There’s a handy Python script you can run to auto-generate these passwords.
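If I remember right, the generator lives in the repo’s scripts directory and takes the file to populate as an argument; something like:

```
# Fill in the *_password / *_token values in user_variables.yml with random strings
# (script name and flag assumed from the os-ansible-deployment tree)
cd /opt/os-ansible-deployment
python scripts/pw-token-gen.py --file /etc/rpc_deploy/user_variables.yml
```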

If your compute node supports KVM, I would recommend adding nova_virt_type: kvm in user_variables.yml.
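A quick way to check is to look for the hardware virtualization CPU flags on the compute node; if the count below is zero, stick with the default:

```
# Non-zero output means the CPU advertises VT-x/AMD-V, so KVM should work
egrep -c '(vmx|svm)' /proc/cpuinfo

# Then add this line to /etc/rpc_deploy/user_variables.yml:
# nova_virt_type: kvm
```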

/etc/rpc_deploy/rpc_user_variables.yml is the config file where you define your OpenStack networks, IPs and which physical hosts receive which OpenStack services. Gist of my rpc_user_variables.yml for reference. Since we won’t be using a physical load balancer, we add an HAProxy host. You’ll notice infra1 refers to my infrastructure host’s br-mgmt IP and compute1 to my compute node’s br-mgmt IP, and I’m actually using my compute node for cinder too (again, flexibility). I’m also reusing the infra1 node for the log, network and HAProxy hosts since we don’t have dedicated servers for those tasks. Be sure to set external_lb_vip_address to the externally accessible IP.
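The gist has the full file; as a trimmed-down sketch of the parts I just described (all IPs below are example values, and the exact key names may vary slightly by branch), it’s shaped roughly like this:

```
cidr_networks:
  container: 172.29.236.0/22
  tunnel: 172.29.240.0/22
  storage: 172.29.244.0/22

global_overrides:
  internal_lb_vip_address: 172.29.236.100
  # Externally reachable IP used for Horizon and the public API endpoints
  external_lb_vip_address: 192.168.1.100
  management_bridge: "br-mgmt"
  tunnel_bridge: "br-vxlan"

# Host groups point at each node's br-mgmt address
infra_hosts:
  infra1:
    ip: 172.29.236.11

compute_hosts:
  compute1:
    ip: 172.29.236.12

# cinder-volumes live on the compute node in my setup
storage_hosts:
  compute1:
    ip: 172.29.236.12

# No dedicated servers for these roles, so infra1 pulls triple duty
log_hosts:
  infra1:
    ip: 172.29.236.11

network_hosts:
  infra1:
    ip: 172.29.236.11

# No physical load balancer, so HAProxy lands on infra1 as well
haproxy_hosts:
  infra1:
    ip: 172.29.236.11
```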

Now we install. These playbooks can take a while depending on your systems; the OpenStack playbooks take the longest. Forks is set to 15 in /opt/os-ansible-deployment/rpc_deployment/ansible.cfg; bump that up to 25 or higher if you prefer. Change directory to /opt/os-ansible-deployment/rpc_deployment. First we run the host-setup.yml playbook, followed by haproxy-install.yml, infrastructure-setup.yml and then openstack-setup.yml.
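Starting with host setup (the playbooks/ subdirectories below are my best guess for this branch, so adjust the paths to match your checkout):

```
cd /opt/os-ansible-deployment/rpc_deployment

# Prepare the physical hosts and build the LXC containers
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/setup/host-setup.yml
```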

Hopefully each play completes with zero items unreachable or failed. Next we run the HAProxy playbook, followed by the infrastructure playbook.
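Again assuming the playbook paths from my checkout:

```
# Internal load balancer, since we have no physical one
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/haproxy-install.yml

# Galera, RabbitMQ, logging and the rest of the supporting infrastructure
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/infrastructure/infrastructure-setup.yml
```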

Verify the infrastructure playbook ran successfully. At this point you should be able to hit your Kibana interface at https://IP:8443. Isn’t that an awesome Kibana dashboard?

[Screenshot: Kibana dashboard]

Now we run the main OpenStack playbook, openstack-setup.yml; this can take some time to complete.
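Same caveat about the path applies here:

```
# Keystone, Glance, Nova, Neutron, Cinder, Horizon, etc.
ansible-playbook -e @/etc/rpc_deploy/user_variables.yml playbooks/openstack/openstack-setup.yml
```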

Verify OpenStack operation. That’s it! You should have a two-node RPC installation. The Horizon dashboard should be reachable at your server’s external IP over HTTPS.

##Neutron setup and floating IPs

The utility container is a helpful container with tools like the OpenStack clients installed. When interacting with the deployment, you’ll probably want to attach to the utility container to perform actions.
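From the infrastructure host, something like this gets you in (the container name is generated during the install, so yours will have a different suffix than this made-up one):

```
# List containers on the host and find the utility one
lxc-ls | grep utility

# Attach to it (substitute the real name from the output above)
lxc-attach -n infra1_utility_container_12345678
```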

Here’s what I did to set up two networks, one private for instance-to-instance traffic and the other external for floating IPs. Our install should have put an openrc file with our ‘admin’ credentials in the utility container at /root. With this simple 2-node setup, don’t get caught up on the bridge names. The br-vlan bridge is what we use for our external network even though we are not using VLANs. I think you can edit these names in the config, but I will need to test that.
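The commands I ran were along these lines. The network and router names, CIDRs and allocation pool are just examples, and the provider physical_network label has to match whatever net name your br-vlan provider network is given in the deployment config:

```
# Inside the utility container
source /root/openrc

# Private network for instance-to-instance traffic
neutron net-create private-net
neutron subnet-create private-net 10.0.0.0/24 --name private-subnet --dns-nameserver 8.8.8.8

# External network mapped onto the br-vlan flat provider network
neutron net-create external-net --router:external=True \
    --provider:network_type=flat --provider:physical_network=flat
neutron subnet-create external-net 192.168.1.0/24 --name external-subnet \
    --allocation-pool start=192.168.1.201,end=192.168.1.220 \
    --gateway 192.168.1.1 --disable-dhcp

# Router connecting the two, so floating IPs can reach instances
neutron router-create router1
neutron router-gateway-set router1 external-net
neutron router-interface-add router1 private-subnet
```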

Now you can add a floating IP to your tenant and assign it to an instance. You can do that from Horizon or from the command line in the utility container. Use neutron floatingip-create, floatingip-list and port-list (or just use Horizon) to get the info you need; neutron floatingip-associate takes the floating IP ID and the port ID.
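For example, continuing with the names from the sketch above:

```
# Allocate a floating IP from the external network
neutron floatingip-create external-net

# Find the port ID of the instance's fixed IP and the floating IP's ID
neutron port-list
neutron floatingip-list

# Associate the two
neutron floatingip-associate FLOATINGIP_ID PORT_ID
```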

If you’re having any issues reaching the floating IP, take a look at your security group settings and open ICMP and SSH as needed. Woot! Next up, adding a Swift node!