RPC v9 - Two node with floating IPs

Building on my previous post on installing Rackspace Private Cloud in a two-node configuration, this update changes that setup slightly to allow external network access and floating IPs to work with this deployment. My previous post put all the VXLAN interfaces on a single physical NIC; that would not allow external/floating IPs to work, or at least I couldn't get it to work. Since I have two physical NICs on these servers, this guide will use both: em1 for br-mgmt, br-vxlan, and br-storage via VXLAN interfaces, since I don't want to use VLANs, and p4p1 for br-vlan, which has direct access to my external network, 192.168.1.0/24. ...
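As a rough sketch of the bridge-to-NIC mapping described above, the relevant parts of /etc/network/interfaces on the hosts might look like the fragment below. The bridge names and NIC names (em1, p4p1) and the 192.168.1.0/24 network come from the post; the addresses shown are illustrative placeholders, not values from my actual deployment.

```
# Illustrative sketch only -- substitute your own addressing.
auto br-mgmt
iface br-mgmt inet static
    bridge_ports em1      # overlay (mgmt/vxlan/storage) traffic stays on em1
    address 172.29.236.10
    netmask 255.255.252.0

auto br-vlan
iface br-vlan inet static
    bridge_ports p4p1     # direct access to the external 192.168.1.0/24 network
    address 192.168.1.10
    netmask 255.255.255.0
```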

February 7, 2015 · 5 min · Shane Cunningham

Two node RPC v9 installation

Rackspace Private Cloud powered by OpenStack was recently re-architected to be much more flexible and reliable in RPC v9 (Icehouse) and the soon-to-be-released RPC v10 (Juno). It now deploys OpenStack in LXC containers on your hosts. At first you might think this adds a layer of complexity to an already complex process, but I've found it actually provides a tremendous amount of flexibility and an easier upgrade path for your OpenStack installation. Using this deployment method you should only have to edit two Ansible configuration files, so the process is not all that difficult and makes installing OpenStack simpler. ...
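For a sense of scale, the two Ansible configuration files mentioned above are small YAML documents. The file path and key names below are from my memory of the v9-era layout and may differ in your release, so treat this purely as a sketch of the shape of the host/network config, and verify names against your installed tree:

```
# /etc/rpc_deploy/rpc_user_config.yml (sketch -- names may vary by release)
infra_hosts:
  infra1:
    ip: 172.29.236.11      # illustrative management address
compute_hosts:
  compute1:
    ip: 172.29.236.12
```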

January 11, 2015 · 7 min · Shane Cunningham

OpenStack Juno All in One

Quick guide on setting up the OpenStack Juno release on an all-in-one server with one NIC using RDO. This is configured for Neutron networking with floating IPs. My setup: CentOS 7 minimal, IP: 192.168.1.100. By default NetworkManager will be running and controlling our NICs; packstack will complain about this later, so disable and stop the service.

[root@juno-allinone ~]# systemctl disable NetworkManager
[root@juno-allinone ~]# systemctl stop NetworkManager

Next we'll update some stuff, install the RDO Juno repo, and install packstack. ...

November 8, 2014 · 2 min · Shane Cunningham

Migrating Cinder volumes to Icehouse

I upgraded my all-in-one OpenStack Havana box to the new Icehouse release. All the same steps apply as in my all-in-one OpenStack deployment post, except use http://rdo.fedorapeople.org/rdo-release.rpm, which now redirects to the Icehouse RPM. I wanted to blow everything away rather than perform an in-place upgrade. The only issue I ran into was that I boot my VMs from Cinder volumes for persistent storage (just use LVM to create a volume group named "cinder-volumes"), and I wanted to move those to Icehouse with the data untouched. ...
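The volume-group step mentioned in the parenthetical above is plain LVM. A minimal sketch, assuming a spare disk at /dev/sdb (a placeholder; use whatever device or partition you actually have free, as these commands will claim it for LVM):

```shell
# Create a physical volume and the volume group Cinder's LVM driver expects.
# /dev/sdb is a placeholder -- point this at your own spare disk/partition.
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
vgs cinder-volumes    # confirm the group exists before configuring Cinder
```

The name "cinder-volumes" matters: it is the volume group the default Cinder LVM backend looks for.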

May 9, 2014 · 2 min · Shane Cunningham

My all in one OpenStack deployment at home

I use XenServer 6.2 as my hypervisor at home to run anywhere from 5-10 VMs, but I wanted to change up this setup and move to an OpenStack private cloud deployment. Yes, it's overkill for my use, but oh well. I've messed around a few times with using OpenStack as a replacement for my XenServer 6.2 setup, but always ran into an issue, usually getting the networking correct given my home network. Luckily, with the OpenStack Havana release, the networking has become much simpler to get my head around and deploy. Also, a number of OpenStack installer scripts and how-to guides have improved since the early OpenStack releases. For my deployment I used Red Hat's RDO and packstack to deploy OpenStack Havana. From Red Hat: "RDO is a community of people using and deploying OpenStack on Red Hat Enterprise Linux, Fedora and distributions derived from these (such as CentOS, Scientific Linux and others)." ...
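The RDO/packstack path described above boils down to installing the release RPM and letting packstack drive the deployment. A sketch of the basic flow (run as root; check `packstack --help` on your version for the full set of options):

```shell
# Install the RDO release repo, then packstack, then do an all-in-one install.
yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
yum install -y openstack-packstack
packstack --allinone
```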

January 19, 2014 · 4 min · Shane Cunningham