In my last post, I did a simple All In One installation. Now we are moving to the next step: a multinode installation with the following three nodes.
- Controller hosting neutron, cinder, glance, horizon and keystone
- Compute1 hosting nova compute
- Compute2 hosting nova compute
The steps to put all this together are long, so I am splitting this post into part 1 where I put together the infrastructure and part 2 where we will install OpenStack on the nodes.
The following instructions are for installing a multinode OpenStack Icehouse on CentOS 6.6 in VirtualBox and GNS3. For CentOS, I started with the CentOS 6.6 Minimal DVD to keep the footprint small. I am also using GNS3 version 1.3.2.
On my host PC, I created three Microsoft Loopback interfaces and renamed them Loopback1, Loopback2, and Loopback3. To give the nodes access to my network/Internet, I created a bridge on the PC that contains the PC's Ethernet port plus Loopback1 and Loopback2. Loopback3 will serve as my management access from the PC into the OpenStack management network. In Windows, configure Loopback3 with IP address 192.168.0.100/24.
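If you prefer the command line, the Loopback3 address can also be set from an elevated Windows command prompt. A minimal sketch, assuming the interface was renamed to Loopback3 as described above:

```shell
REM Assign the management address to the Loopback3 interface
REM (run from an elevated command prompt; "Loopback3" is the name set above)
netsh interface ip set address name="Loopback3" static 192.168.0.100 255.255.255.0
```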
Using VirtualBox, create 3 VMs.
Controller:
CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: controller.openstack
Compute1:
CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute1.openstack
Compute2:
CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute2.openstack
Once you have finished creating the VMs and installed CentOS, eth0 on each VM will carry the server's public IP. Record these addresses, as you will SSH into the nodes to configure them.
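If you did not record the address during the install, you can read it off each VM's console. A minimal sketch using the stock iproute and awk tools on the CentOS minimal install:

```shell
# Print the IPv4 address assigned to eth0 (run on each VM's console)
ip -4 addr show eth0 | awk '/inet /{print $2}'
```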
The multinode setup uses the following interfaces on each node.
Controller: (4 interfaces)
eth0: Public IP
eth1: Mgmt IP: 192.168.0.10
eth2: Tenant VLAN
eth3: Tenant Public
Compute1: (3 interfaces)
eth0: Public IP
eth1: Mgmt IP: 192.168.0.1
eth2: Tenant VLAN
Compute2: (3 interfaces)
eth0: Public IP
eth1: Mgmt IP: 192.168.0.2
eth2: Tenant VLAN
Using GNS3 and some GNS3 Ethernet switches, we will create the following topology.

I could have done all this with fewer switches, but I used a separate switch for each segment to make it clearer what is happening, and because GNS3 isn't very good at drawing the link lines in a tidy pattern. Here are the switches that will be used.
- Tenant VLAN: This will be the switch that connects to the trunk ports on the nodes that use VLANs for tenant traffic isolation.
- Mgmt: Ethernet network for OpenStack to use for managing all the nodes.
- Tenant Public: Interface used by the tenants for internet access with floating IP addresses.
- Server Public: Interface on each node for us to manage the nodes such as software installation and such.
Add the switches to the GNS3 canvas. Next, configure each switch as follows.
- Tenant VLAN: configure all ports as type ‘dot1q’
- Mgmt: Configure all ports as type ‘access’ with VLAN 1.
- Tenant Public: Configure all ports as type ‘access’ with VLAN 1.
- Server Public: Configure all ports as type ‘access’ with VLAN 1.
In GNS3, import the three VirtualBox VMs. In the GNS3 preferences, under the VirtualBox VMs page, click the New button and add each of the VMs that you created in VirtualBox. For each, also configure the following settings.
Controller:
General Settings:
RAM: 2048
Do not check any other boxes.
Network:
Adapters: 4
Type: Intel PRO/1000 MT Desktop
Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.
Compute1:
General Settings:
RAM: 2048
Do not check any other boxes.
Network:
Adapters: 3
Type: Intel PRO/1000 MT Desktop
Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.
Compute2:
General Settings:
RAM: 2048
Do not check any other boxes.
Network:
Adapters: 3
Type: Intel PRO/1000 MT Desktop
Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.
Add the three OpenStack nodes to the GNS3 canvas.
Add two clouds to the GNS3 canvas.
- Name one Internet and add Loopback1 and Loopback2 to the NIO Ethernet list.
- Name the other cloud Mgmt Server and add Loopback3 to the NIO Ethernet list.
Now, using GNS3's link creation tool, create the links from the switches to the Ethernet ports on the nodes.
Tenant VLAN switch:
1(dot1q) -> controller Ethernet2
2(dot1q) -> compute1 Ethernet2
3(dot1q) -> compute2 Ethernet2
Mgmt switch:
1(access vlan 1) -> controller Ethernet1
2(access vlan 1) -> compute1 Ethernet1
3(access vlan 1) -> compute2 Ethernet1
4(access vlan 1) -> Mgmt Server(Loopback3)
Tenant Public switch:
1(access vlan 1) -> Internet (Loopback2)
2(access vlan 1) -> controller Ethernet3
Server Public switch:
1(access vlan 1) -> Internet (Loopback1)
2(access vlan 1) -> controller Ethernet0
3(access vlan 1) -> compute1 Ethernet0
4(access vlan 1) -> compute2 Ethernet0
At this point, all the nodes, switches, and links should be created, and the topology should look something like the diagram above.
Before we install OpenStack, we need to get the VMs ready. Start up all the VMs using GNS3 (press the Play button) so all the interfaces get added and we have connectivity between the VMs.
On the controller, do the following.
Make sure the system is up to date.
yum update -y
Set up the Mgmt interface.
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << EOF
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
EOF
Update /etc/hosts with the hostnames of the nodes.
cat >> /etc/hosts << EOF
192.168.0.10 controller controller.openstack
192.168.0.1 compute1 compute1.openstack
192.168.0.2 compute2 compute2.openstack
EOF
Restart networking to pick up all the settings.
/etc/init.d/network restart
Add OpenStack repo to yum
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release
Disable SELinux
setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
Now we will do a similar setup on Compute1.
Make sure the system is up to date.
yum update -y
Set up the Mgmt interface.
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << EOF
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
EOF
Update /etc/hosts with the hostnames of the nodes.
cat >> /etc/hosts << EOF
192.168.0.10 controller controller.openstack
192.168.0.1 compute1 compute1.openstack
192.168.0.2 compute2 compute2.openstack
EOF
Restart networking to pick up all the settings.
/etc/init.d/network restart
Add OpenStack repo to yum
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release
Disable SELinux
setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
And finally, we will do a similar setup on Compute2.
Make sure the system is up to date.
yum update -y
Set up the Mgmt interface.
cat > /etc/sysconfig/network-scripts/ifcfg-eth1 << EOF
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0
EOF
Update /etc/hosts with the hostnames of the nodes.
cat >> /etc/hosts << EOF
192.168.0.10 controller controller.openstack
192.168.0.1 compute1 compute1.openstack
192.168.0.2 compute2 compute2.openstack
EOF
Restart networking to pick up all the settings.
/etc/init.d/network restart
Add OpenStack repo to yum
yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release
Disable SELinux
setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
At this point, all the nodes are ready for OpenStack to be installed. To verify, you should be able to ping each node over the management network, using both its IP address and its hostname.
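The final check can be scripted. Here is a minimal sketch using standard Linux ping flags; the hostnames assume the /etc/hosts entries added earlier:

```shell
# Ping each argument once and report whether it responds.
check_nodes() {
    for host in "$@"; do
        if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
            echo "$host reachable"
        else
            echo "$host unreachable"
        fi
    done
}

# Check the three nodes by management IP and by hostname:
check_nodes 192.168.0.10 192.168.0.1 192.168.0.2 controller compute1 compute2
```

Run this from any node (or from the host PC via Loopback3); every line should report reachable before moving on to part 2.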
That’s it for this post. In part 2 of the post we’ll install OpenStack on all the nodes.
This post ‘Multinode OpenStack with CentOS in Virtual Box with GNS3 Part 1’ first appeared on https://techandtrains.com/.