Multinode OpenStack with Oracle Linux 6.6 in Virtual Box with GNS3

In a previous post, I showed how to set up a multinode OpenStack on CentOS. In this post, I’ll do the same thing except with Oracle Linux 6.6. OpenStack on Oracle Linux has a few different variants; in this post, I’ll use Oracle Linux for the compute nodes. An alternative would be to use OVM, which is Xen-based, for the compute nodes. I’m going to skip the steps for getting GNS3 set up since they are the same as part 1 of my other post. When you get to the section where you start to configure the eth1 interfaces and the yum repos, stop there and come back here.

The instructions are very similar to our previous setup except for a few changes and command line argument differences. As a reminder, here is the setup we built.
[Diagram: the multinode OpenStack setup]

On the controller, do the following.

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart
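
You can verify that eth1 came up with the right address:

ip addr show eth1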

Add OpenStack repo to yum

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-openstack-ol6.repo
cd

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
yum -y update
reboot
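
After the node comes back up, you can confirm SELinux is off:

getenforce

It should report Disabled.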

Now we will do a similar setup on Compute node 1.

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-openstack-ol6.repo
cd

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
yum -y update
reboot

And finally we will do a similar setup on Compute node 2.

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-openstack-ol6.repo
cd

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
yum -y update
reboot

On the controller node, install the packstack utility and a few extra tools.

yum install -y openstack-packstack wget screen

Install OpenStack. I’m going to use screen since this next step can take a while and I am accessing the controller over SSH; if you are running these commands directly on the console, you don’t need it. The command will take a long time to run, so go take a break. All of its output is captured in ~/pack.log in case you need to review it later. It will also ask for the root password of each node; enter it when prompted.

screen
packstack --install-hosts=192.168.0.10,192.168.0.1,192.168.0.2 --neutron-ovs-tenant-network-type=vlan \
--neutron-ovs-vlan-ranges=default:1000:2000 --neutron-ovs-bridge-mappings=default:br-eth2 \
--neutron-ovs-bridge-interfaces=br-eth2:eth2 \
--novavncproxy-hosts=192.168.10.157 --nagios-install=y 2>&1 | /usr/bin/tee ~/pack.log
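
If your SSH session drops while packstack is running, log back in and reattach to the screen session:

screen -r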

Do the final Nova and Neutron network setup by linking the network node’s eth3 to your Tenant Public network.

ovs-vsctl add-port br-ex eth3
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
service neutron-dhcp-agent restart
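
You can confirm the port was added; the output should show eth3 as a port under br-ex:

ovs-vsctl show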

Next, we need to do some patching on the controller node.

  • Edit /etc/openstack-dashboard/local_settings and change the allowed hosts to ALLOWED_HOSTS = ['*', ]
  • Edit /etc/httpd/conf.d/15-horizon_vhost.conf and add a ServerAlias for your public IP, e.g. ServerAlias 192.168.10.157 (see the sed sketch below)
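
If you prefer to script these edits, here is a rough sketch using sed; the exact layout of both files can vary between releases, so double-check them afterwards:

sed -i "s/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ['*', ]/" /etc/openstack-dashboard/local_settings
sed -i "/ServerName/a ServerAlias 192.168.10.157" /etc/httpd/conf.d/15-horizon_vhost.conf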

And reboot.

Apply a workaround for upstream package updates that break compatibility with this OpenStack release. Check the installed dnsmasq version:

rpm -qa | grep dnsmasq

If the version is higher than 2.48-13.el6.x86_64, downgrade it with the following command, possibly more than once:

yum downgrade -y dnsmasq dnsmasq-utils

Similar downgrades also need to be done on both compute nodes, for both dnsmasq and qemu-kvm. On compute1 and compute2:

rpm -qa | grep qemu-kvm

If the version is higher than 0.12.1.2-2.415.el6_5.14.x86_64, downgrade it, possibly more than once:

yum downgrade -y qemu-kvm qemu-img

rpm -qa | grep dnsmasq

If the version is higher than 2.48-13.el6.x86_64, downgrade it, possibly more than once:

yum downgrade -y dnsmasq dnsmasq-utils
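
yum downgrade only steps back one release at a time, which is why the commands above may need repeating. A small loop can automate this; here is a sketch for dnsmasq, assuming the known-good version above (the same pattern works for qemu-kvm):

# repeat the downgrade until the installed version matches the known-good one
while [ "$(rpm -q --qf '%{VERSION}-%{RELEASE}' dnsmasq)" != "2.48-13.el6" ]; do
    yum downgrade -y dnsmasq dnsmasq-utils
done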

Back on the controller node, become the admin user.

source ./keystonerc_admin

Let’s add an image to Glance

mkdir /tmp/images
wget -P /tmp/images http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --is-public True --progress

Create a smaller flavor since we are running with limited memory

nova flavor-create --is-public true m1.micro 6 256 2 1
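
To double-check both, list what Glance and Nova now know about:

glance image-list
nova flavor-list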

On Compute node 1, we need a few steps to configure Nova to use QEMU instead of KVM, since we are already running inside a VM and VirtualBox does not support nested virtualization.

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
setsebool -P virt_use_execmem on
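# Nova/libvirt looks for qemu-system-x86_64, which Oracle Linux does not ship; point it at the qemu-kvm binary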
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
reboot

And lastly, we will set up Compute node 2 with QEMU as well.

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
setsebool -P virt_use_execmem on
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
reboot
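
Once both compute nodes are back up, you can confirm they registered with Nova by running the following on the controller as the admin user:

nova hypervisor-list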

There we have it: our multinode OpenStack is up and running, and you can now start adding instances and networks. To access the OpenStack Horizon dashboard from your PC, open a browser and go to the following URL. Log in as ‘admin’ with the password found in the file keystonerc_admin.
http://192.168.0.10/dashboard
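
The generated admin password can be pulled straight out of that file:

grep OS_PASSWORD ~/keystonerc_admin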

This post ‘Multinode OpenStack with Oracle Linux 6.6 in Virtual Box with GNS3’ first appeared on https://techandtrains.com/.

Multinode OpenStack with CentOS 6.6 in Virtual Box with GNS3 Part 2

This is the follow-up to part 1 of the article, where I set up all the nodes in preparation for installing OpenStack. Now that all the preparation work is complete, let’s get started with installing OpenStack.

On the controller node, install the packstack utility and a few extra tools.

yum install -y openstack-packstack wget screen

Modify the packstack source to allow installation on CentOS

vi /usr/lib/python2.6/site-packages/packstack/plugins/serverprep_001.py

Find the following line

if config['HOST_DETAILS'][host]['os'] in ('Fedora', 'Unknown'):

and change it to

if config['HOST_DETAILS'][host]['os'] in ('Fedora', 'CentOS', 'Unknown'):
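
If you prefer a one-liner, this sed should make the same change, assuming the line appears exactly as shown above:

sed -i "s/('Fedora', 'Unknown')/('Fedora', 'CentOS', 'Unknown')/" /usr/lib/python2.6/site-packages/packstack/plugins/serverprep_001.py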

Install OpenStack. I’m going to use screen since this next step can take a while and I am accessing the controller over SSH; if you are running these commands directly on the console, you don’t need it. The command will take a long time to run, so go take a break. All of its output is captured in ~/pack.log in case you need to review it later. It will also ask for the root password of each node; enter it when prompted.

screen
packstack --install-hosts=192.168.0.10,192.168.0.1,192.168.0.2 --os-neutron-ovs-tenant-network-type=vlan \
--os-neutron-ovs-vlan-ranges=default:1000:2000 --os-neutron-ovs-bridge-mappings=default:br-eth2 \
--os-neutron-ovs-bridge-interfaces=br-eth2:eth2 --use-epel=n --provision-demo=n \
--os-swift-install=n --os-ceilometer-install=n \
2>&1 | /usr/bin/tee ~/pack.log

Do the final Nova and Neutron network setup by linking the network node’s eth3 to your Tenant Public network.

ovs-vsctl add-port br-ex eth3
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
service neutron-dhcp-agent restart

Become the admin user

source ./keystonerc_admin

Let’s add an image to Glance

mkdir /tmp/images
wget -P /tmp/images http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --is-public True --progress

Create a smaller flavor since we are running with limited memory

nova flavor-create --is-public true m1.micro 6 256 2 1

On Compute node 1, we need a few steps to configure Nova to use QEMU instead of KVM, since we are already running inside a VM and VirtualBox does not support nested virtualization.

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
setsebool -P virt_use_execmem on
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
service libvirtd restart
service openstack-nova-compute restart

And lastly, we will set up Compute node 2 with QEMU as well.

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
setsebool -P virt_use_execmem on
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
service libvirtd restart
service openstack-nova-compute restart
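
Back on the controller, Nova should now report both compute services as up:

nova service-list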

There we have it: our multinode OpenStack is up and running, and you can now start adding instances and networks. To access the OpenStack Horizon dashboard from your PC, open a browser and go to the following URL. Log in as ‘admin’ with the password found in the file keystonerc_admin.
http://192.168.0.10/dashboard

Well, I hope you enjoyed the journey as much as I did. It was quite a learning experience to put this together. In later posts, I hope to show how I added more compute nodes, compute nodes of different hypervisor types, and an external block storage node.

This post ‘Multinode OpenStack with CentOS in Virtual Box with GNS3 Part 2’ first appeared on https://techandtrains.com/.

Multinode OpenStack with CentOS 6.6 in Virtual Box with GNS3 Part 1

In my last post, I did a simple All In One installation. Now we are moving to the next step and creating a multinode installation where I have the following three nodes.

  • Controller hosting neutron, cinder, glance, horizon and keystone
  • Compute1 hosting nova compute
  • Compute2 hosting nova compute

The steps to put all this together are long, so I am splitting this post into part 1 where I put together the infrastructure and part 2 where we will install OpenStack on the nodes.

The following instructions are for installing a multinode OpenStack Icehouse on CentOS 6.6 in VirtualBox and GNS3. For CentOS, I started with the CentOS 6.6 Minimal DVD to keep the footprint small. I am also using GNS3 version 1.3.2.

I created three Microsoft Loopback interfaces on my host PC and renamed them Loopback1, Loopback2, and Loopback3. To give the nodes access to my network and the Internet, I created a bridge on my PC containing my PC’s Ethernet port plus Loopback1 and Loopback2. Loopback3 will serve as management access from my PC into the OpenStack management network; in Windows, configure Loopback3 with the IP address 192.168.0.100/24.
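
If you prefer the command line for this, something like the following should work from an elevated prompt, assuming the interface was renamed to Loopback3 as described:

netsh interface ip set address name="Loopback3" static 192.168.0.100 255.255.255.0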

Using VirtualBox, create 3 VMs.

Controller:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: controller.openstack

Compute1:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute1.openstack

Compute2:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute2.openstack

Once you have finished creating the VMs and installing CentOS, eth0 on each VM will be the server’s public IP. Record these addresses, as you will SSH into each node to configure it.

The following interfaces are used on each node in the multinode setup.

Controller: (4 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.10
eth2: Tenant VLAN
eth3: Tenant Public

Compute1: (3 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.1
eth2: Tenant VLAN

Compute2: (3 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.2
eth2: Tenant VLAN

Using GNS3 and some GNS3 Ethernet switches, we will create the following topology.
[Diagram: the CentOS OpenStack topology built in GNS3]

I could have done all this with fewer switches, but I used a separate switch for each segment to make it clearer what is happening, and because GNS3 isn’t very good at drawing link lines in a nice pattern. Here are the switches that will be used.

  • Tenant VLAN: This will be the switch that connects to the trunk ports on the nodes that use VLANs for tenant traffic isolation.
  • Mgmt: Ethernet network for OpenStack to use for managing all the nodes.
  • Tenant Public: Interface used by the tenants for internet access with floating IP addresses.
  • Server Public: Interface on each node for us to manage the nodes, such as for software installation.

Add the switches to the GNS3 canvas. Next, configure each switch as follows.

  • Tenant VLAN: configure all ports as type ‘dot1q’.
  • Mgmt: configure all ports as type ‘access’ with VLAN 1.
  • Tenant Public: configure all ports as type ‘access’ with VLAN 1.
  • Server Public: configure all ports as type ‘access’ with VLAN 1.

In GNS3, import the three VirtualBox VMs: in the GNS3 preferences, under the VirtualBox VMs page, click the New button and add each of the VMs you created in VirtualBox. Then configure each VM with the following settings.
Controller:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 4
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Compute1:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 3
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Compute2:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 3
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Add the three OpenStack nodes to the GNS3 canvas.

Add two clouds to the GNS3 canvas

  • Name one Internet and add Loopback1 and Loopback2 to the NIO Ethernet list.
  • Name the other cloud Mgmt Server and add Loopback3 to the NIO Ethernet list.

Now, using the GNS3 link tool, create the links from the switches to the Ethernet ports on the nodes.

Tenant VLAN switch:

1(dot1q) -> controller Ethernet2
2(dot1q) -> compute1 Ethernet2
3(dot1q) -> compute2 Ethernet2

Mgmt switch:

1(access vlan 1) -> controller Ethernet1
2(access vlan 1) -> compute1 Ethernet1
3(access vlan 1) -> compute2 Ethernet1
4(access vlan 1) -> Mgmt Server(Loopback3)

Tenant Public switch:

1(access vlan 1) -> Internet (Loopback2)
2(access vlan 1) -> controller Ethernet3

Server Public switch:

1(access vlan 1) -> Internet (Loopback1)
2(access vlan 1) -> controller Ethernet0
3(access vlan 1) -> compute1 Ethernet0
4(access vlan 1) -> compute2 Ethernet0

At this point, all the nodes, switches, and links should be created, and the topology should look something like the diagram above.

Before we install OpenStack, we need to get the VMs ready. Start up all the VMs from GNS3 (press the Play button) so all the interfaces get added and we have connectivity between the VMs.

On the controller, do the following.

Make sure the system is up to date.

yum update -y

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config

Now we will do a similar setup on Compute node 1.

Make sure the system is up to date.

yum update -y

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config

And finally we will do a similar setup on Compute node 2.

Make sure the system is up to date.

yum update -y

Setup Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0
EOF

Update the /etc/hosts file with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config

At this point, all the nodes are ready to have OpenStack installed. To verify, make sure you can ping each node from the others using both its management IP address and its hostname.
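
For example, from the controller:

ping -c 3 192.168.0.1
ping -c 3 compute1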

That’s it for this post. In part 2 of the post we’ll install OpenStack on all the nodes.

This post ‘Multinode OpenStack with CentOS in Virtual Box with GNS3 Part 1’ first appeared on https://techandtrains.com/.