Multinode OpenStack with CentOS 6.6 in Virtual Box with GNS3 Part 1

In my last post, I did a simple All In One installation. Now we are moving to the next step and creating a multinode installation where I have the following three nodes.

  • Controller hosting neutron, cinder, glance, horizon and keystone
  • Compute1 hosting nova compute
  • Compute2 hosting nova compute

The steps to put all this together are long, so I am splitting this post into part 1 where I put together the infrastructure and part 2 where we will install OpenStack on the nodes.

The following instructions are for installing a multinode OpenStack Icehouse on CentOS 6.6 in VirtualBox and GNS3. For CentOS, I started with the CentOS 6.6 Minimal DVD to keep the footprint small. I am also using GNS3 version 1.3.2.

I created three Microsoft Loopback interfaces on my host PC and renamed them Loopback1, Loopback2, and Loopback3. To give the nodes access to my network and the Internet, I created a bridge on the PC that contains the PC's Ethernet port along with Loopback1 and Loopback2. Loopback3 will serve as the management access from my PC into the OpenStack management network, so in Windows configure Loopback3 with the IP address 192.168.0.100/24.
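
If you prefer the command line to the Windows network dialogs, an elevated command prompt can set that address directly. This is a minimal sketch and assumes the adapter really was renamed to Loopback3:

netsh interface ip set address name="Loopback3" static 192.168.0.100 255.255.255.0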

Using VirtualBox, create 3 VMs.

Controller:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: controller.openstack

Compute1:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute1.openstack

Compute2:

CPU: 4
RAM: 2048
HD: 50G preallocate
Adapter 1: Bridge to host PC
Hostname: compute2.openstack
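
If you would rather script the VM creation than click through the VirtualBox GUI, VBoxManage can do the same job. The sketch below creates just the controller VM; the bridge adapter name is a placeholder for whatever your host calls the bridge created earlier, and you would still attach the CentOS ISO and repeat the steps (with different names) for Compute1 and Compute2:

VBoxManage createvm --name "controller" --ostype RedHat_64 --register
VBoxManage modifyvm "controller" --cpus 4 --memory 2048 --nic1 bridged --bridgeadapter1 "Your Bridge Adapter"
VBoxManage createhd --filename "controller.vdi" --size 51200 --variant Fixed
VBoxManage storagectl "controller" --name "SATA" --add sata
VBoxManage storageattach "controller" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "controller.vdi"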

Once you have finished creating the VMs and installing CentOS, eth0 on each VM will be the server's public IP. Record these addresses, since you will SSH into the nodes to configure them.
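
If you did not note the address during installation, you can read it off eth0 from each VM's console:

ip addr show eth0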

The multinode setup will use the following interfaces on each node.

Controller: (4 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.10
eth2: Tenant VLAN
eth3: Tenant Public

Compute1: (3 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.1
eth2: Tenant VLAN

Compute2: (3 interfaces)

eth0: Public IP
eth1: Mgmt IP: 192.168.0.2
eth2: Tenant VLAN

Using GNS3 and some GNS3 Ethernet switches, we will create the following topology.
[Topology diagram: CentOS-OpenStack]

I could have done all this with fewer switches, but I used a separate switch for each segment to make it clearer what is happening, and also because GNS3 is not very good at drawing the link lines in a tidy pattern. So here are the switches that will be used.

  • Tenant VLAN: This will be the switch that connects to the trunk ports on the nodes that use VLANs for tenant traffic isolation.
  • Mgmt: Ethernet network for OpenStack to use for managing all the nodes.
  • Tenant Public: Interface used by the tenants for internet access with floating IP addresses.
  • Server Public: Interface on each node for us to manage the nodes such as software installation and such.

Add the switches to the GNS3 canvas. Next, configure each switch as follows.

  • Tenant VLAN: Configure all ports as type ‘dot1q’.
  • Mgmt: Configure all ports as type ‘access’ with VLAN 1.
  • Tenant Public: Configure all ports as type ‘access’ with VLAN 1.
  • Server Public: Configure all ports as type ‘access’ with VLAN 1.

In GNS3, import the three VirtualBox VMs. In the GNS3 preferences, under the VirtualBox VMs page, click the New button and add each of the VMs that you created in VirtualBox. Then configure the settings for each VM as follows.

Controller:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 4
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Compute1:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 3
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Compute2:

General Settings:
	RAM: 2048
	Do not check any other boxes.
Network:
	 Adapters: 3
	 Type: Intel PRO/1000 MT Desktop
	 Check the box that says 'Allow GNS3 to use any configured VirtualBox adapter'.

Add the three OpenStack nodes to the GNS3 canvas.

Add two clouds to the GNS3 canvas.

  • Name one Internet and add Loopback1 and Loopback2 to its NIO Ethernet list.
  • Name the other cloud Mgmt Server and add Loopback3 to its NIO Ethernet list.

Now, using the GNS3 link tool, create the links from the switches to the Ethernet ports on the nodes.

Tenant VLAN switch:

1(dot1q) -> controller Ethernet2
2(dot1q) -> compute1 Ethernet2
3(dot1q) -> compute2 Ethernet2

Mgmt switch:

1(access vlan 1) -> controller Ethernet1
2(access vlan 1) -> compute1 Ethernet1
3(access vlan 1) -> compute2 Ethernet1
4(access vlan 1) -> Mgmt Server(Loopback3)

Tenant Public switch:

1(access vlan 1) -> Internet (Loopback2)
2(access vlan 1) -> controller Ethernet3

Server Public switch:

1(access vlan 1) -> Internet (Loopback1)
2(access vlan 1) -> controller Ethernet0
3(access vlan 1) -> compute1 Ethernet0
4(access vlan 1) -> compute2 Ethernet0

At this point, all the nodes, switches, and links should be created, and the topology should look something like the diagram above.

Before we install OpenStack, we need to get the VMs ready. Start up all the VMs using GNS3 (press the Play button) so that all the interfaces get added and we have connectivity between the VMs.

On the controller, do the following.

Make sure the system is up to date.

yum update -y

Set up the Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.10
NETMASK=255.255.255.0
EOF

Update /etc/hosts with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart
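
After the restart, it is worth a quick sanity check that eth1 picked up its address and can reach the Loopback3 management interface on the PC (192.168.0.100 from earlier). The ping assumes the Windows firewall allows ICMP on that loopback:

ip addr show eth1
ping -c 3 192.168.0.100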

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config
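
A quick check that SELinux is out of the way, both for the running system (getenforce should now report Permissive) and for future boots:

getenforce
grep ^SELINUX= /etc/selinux/config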

Now we will do a similar setup on Compute1.

Make sure the system is up to date.

yum update -y

Set up the Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
EOF

Update /etc/hosts with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config

And finally, we will do a similar setup on Compute2.

Make sure the system is up to date.

yum update -y

Set up the Mgmt interface

cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.0.2
NETMASK=255.255.255.0
EOF

Update /etc/hosts with the hostnames of the nodes.

cat << EOF >> /etc/hosts
192.168.0.10    controller controller.openstack
192.168.0.1     compute1 compute1.openstack
192.168.0.2     compute2 compute2.openstack
EOF

Restart networking to pick up all the settings.

/etc/init.d/network restart

Add OpenStack repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm
yum install -y epel-release

Disable SELinux

setenforce 0
sed -i "s/\(^SELINUX=\).*\$/\1disabled/" /etc/selinux/config

At this point, all the nodes are ready to have OpenStack installed. To make sure, verify that you can ping each node from the others using both its management IP address and its hostname.
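
For example, from the controller, pings like these to the compute nodes should succeed (the hostnames come from the /etc/hosts entries added above):

ping -c 3 192.168.0.1
ping -c 3 192.168.0.2
ping -c 3 compute1
ping -c 3 compute2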

That’s it for this post. In part 2 of the post we’ll install OpenStack on all the nodes.

This post ‘Multinode OpenStack with CentOS in Virtual Box with GNS3 Part 1’ first appeared on https://techandtrains.com/.

All In One OpenStack Icehouse on CentOS 6.6 in VirtualBox

With so much great virtualization out there, I obviously had to take a look at OpenStack and see how it works and what interesting network things you can do with it. There are tons and tons of blog posts about installing OpenStack. But as with other posts, these are the steps that I used and I want them saved somewhere for safe keeping. I had lots of options for choosing a platform, but I ended up using CentOS since it is closer to other enterprise operating systems like RHEL and Oracle Linux.

I’m going to start simple and just try an All In One installation. The following instructions are for installing an all in one OpenStack Icehouse on CentOS 6.6 in VirtualBox. For CentOS, I started with the CentOS 6.6 Minimal DVD to keep the footprint small.

The first step is to create the VM and install CentOS.
CPU: 4
RAM: 2048
HD: 50G preallocate

For my VM, I bridged its adapter to my host's network adapter. My public VM IP is 192.168.10.161.

Next, let’s make sure all our packages are up to date.

yum update -y
reboot

Add OpenStack Icehouse specific repo to yum

yum install -y http://rdo.fedorapeople.org/openstack-icehouse/rdo-release-icehouse.rpm

Install packages to get ready

yum install -y epel-release
yum install -y openstack-packstack wget screen

Modify the packstack file to allow installation on CentOS.
Edit /usr/lib/python2.6/site-packages/packstack/plugins/serverprep_001.py and change

if config['HOST_DETAILS'][host]['os'] in ('Fedora', 'Unknown'):

TO

if config['HOST_DETAILS'][host]['os'] in ('Fedora', 'CentOS', 'Unknown'):
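
If you would rather script the edit than open the file by hand, a sed one-liner along these lines should work, assuming the tuple appears exactly as shown above:

sed -i "s/('Fedora', 'Unknown')/('Fedora', 'CentOS', 'Unknown')/" \
  /usr/lib/python2.6/site-packages/packstack/plugins/serverprep_001.py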

Install OpenStack using packstack. I'm not trying anything fancy yet, so I won't install Swift or Ceilometer. This part can take a long time to run, so go grab a coffee or two or three.

packstack --install-hosts=127.0.0.1 --use-epel=n --provision-demo=n --os-swift-install=n --os-ceilometer-install=n

Become admin user

source ./keystonerc_admin

Let’s add an image to Glance

mkdir /tmp/images
wget -P /tmp/images http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --is-public True --progress
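
To confirm the upload worked, you can list what Glance now has registered:

glance image-list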

Create a smaller flavor since we are running with limited memory

nova flavor-create --is-public true m1.micro 6 256 2 1

Set the VNC proxy base URL to my VM's IP address so I can open the console from my host PC.

openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://192.168.10.161:6080/vnc_auto.html

Configure Nova to use QEMU instead of KVM since we are running inside a VM.

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver libvirt.LibvirtDriver
setsebool -P virt_use_execmem on
ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
service libvirtd restart
service openstack-nova-compute restart
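
Once the services have restarted, a quick way to confirm that the compute service came back up and is reporting in:

nova service-list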

All done.  Now go create some Networks and Instances.

From this point, I just followed the exercises I found on the following page for creating networks and instances.
http://www.oracle.com/technetwork/systems/hands-on-labs/hol-openstack-linux-ovm-2399741.html

My next post, coming soon, will be about how I got a multinode OpenStack running using VirtualBox.

This post ‘All In One OpenStack Icehouse on CentOS 6.6 in VirtualBox’ first appeared on https://techandtrains.com/.

Using Vagrant to Install Juniper Firefly Perimeter (vSRX) in VirtualBox and GNS3

In a previous post, I showed how to create a VirtualBox VM of a Juniper Firefly Perimeter. It worked great, but some steps were quite difficult, and many users seemed to have trouble getting the interfaces to appear and connect. Converting VM disk images was also a pain. Fortunately, I found another extremely simple way to create the VM in only a few steps: Vagrant. I had not used Vagrant before today, so there was a bit of a learning curve about what the tool does, but it turned out to be very simple for what I needed. After installing Vagrant, it took just two commands to get my VM up and running.

Many of the steps below were taken from my previous post with a few minor modifications and the replacement of the VM creation steps.

Note: These instructions were run on a Windows host.

1. Create a directory to store the vagrant files.

md "d:\VirtualBox VMs\vagrant\boxes\juniper.ffp-12.1X47-D15.4"
cd "d:\VirtualBox VMs\vagrant\boxes\juniper.ffp-12.1X47-D15.4"

2. Create the Juniper Firefly Perimeter VM

vagrant init juniper/ffp-12.1X47-D15.4
vagrant up

Once it finishes downloading, booting, and configuring, it will print out the details on how you can SSH to the vSRX. Try it out by logging in as root with the default password Juniper. Be patient; it might take a minute before you can connect to the VM.
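
Vagrant also has a built-in SSH helper, so instead of copying the printed connection details you should be able to connect from the same directory with:

vagrant ssh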

3. Turn off the vSRX VM.

root@% cli
root> request system power-off

4. If you look in VirtualBox now, you will see a new VM with a really long, strange name. This is your new vSRX VM. The first thing to do is rename the VM to something more recognizable, like juniper.ffp-12.1X47-D15.4.
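
If you prefer to rename it from the command line, VBoxManage can do it while the VM is powered off. The long auto-generated name below is a placeholder for whatever name Vagrant gave the VM on your machine (VBoxManage list vms will show it):

VBoxManage list vms
VBoxManage modifyvm "<auto-generated-vm-name>" --name "juniper.ffp-12.1X47-D15.4"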

Now to do some more testing, I am going to use GNS3 and add the vSRX in.

1. Start GNS3

2. Add the vSRX VM to the VirtualBox VM list in preferences.
Edit->Preferences->VirtualBox->VirtualBox VMs->New

3. Select the juniper.ffp-12.1X47-D15.4 VM from the list and click Finish.

4. Choose juniper.ffp-12.1X47-D15.4 in the VM list in the preferences and click the Edit button.

  • General settings:
    • Start VM in headless mode
  • Network:
    • Adapters: 4
    • Start at: 0
    • Type: ‘Paravirtualized Network (virt-io net)’

5. Add the juniper.ffp-12.1X47-D15.4 and four VPCS to the canvas.
vSRX e0 (ge-0/0/0.0) -> PC2 e0
vSRX e1 (ge-0/0/1.0) -> PC1 e0
vSRX e2 (ge-0/0/2.0) -> PC4 e0
vSRX e3 (ge-0/0/3.0) -> PC3 e0

[Topology diagram: vsrx-vagrant]

6. Start the vSRX and connect to console.

7. Log in and configure the interfaces. For this test, I am configuring ge-0/0/0 as the outside untrust interface (which is the config default), and the other three interfaces will be added to the trust zone.

 root@%
 root@% cli
 root> edit
 Entering configuration mode

[edit]
 root# delete interfaces ge-0/0/0 unit 0 family inet dhcp
 root# set interfaces ge-0/0/0 unit 0 family inet address 192.168.1.1/24
 root# set interfaces ge-0/0/1 unit 0 family inet address 192.168.2.1/24
 root# set interfaces ge-0/0/2 unit 0
 root# set interfaces ge-0/0/3 unit 0
 root# set system services web-management http interface ge-0/0/1.0
 root# set security zones security-zone trust host-inbound-traffic system-services http
 root# set security zones security-zone trust host-inbound-traffic system-services https
 root# set security zones security-zone trust host-inbound-traffic system-services ping
 root# set security zones security-zone trust host-inbound-traffic system-services ssh
 root# set security zones security-zone trust interfaces ge-0/0/1.0
 root# set security zones security-zone trust interfaces ge-0/0/2.0
 root# set security zones security-zone trust interfaces ge-0/0/3.0

8. Commit the config

 root# commit
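
If you want to double-check the result before moving on, you can view the zone and interface state without leaving configuration mode (run executes operational commands from config mode):

 root# run show security zones
 root# run show interfaces terse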

9. Configure two of the VPCS using their consoles

 PC1> ip 192.168.2.2 192.168.2.1 24
 PC2> ip 192.168.1.2 192.168.1.1 24

10. Test that PC1 can get out but PC2 cannot get in.

#PC1 on trust zone pinging out to PC2
PC1> ping 192.168.1.2
192.168.1.2 icmp_seq=1 ttl=63 time=0.500 ms
192.168.1.2 icmp_seq=2 ttl=63 time=0.500 ms

#PC2 on untrust zone pinging in to PC1
PC2> ping 192.168.2.2
192.168.2.2 icmp_seq=1 timeout
192.168.2.2 icmp_seq=2 timeout

So that is it for this alternative way of creating a Juniper Firefly Perimeter (vSRX) VM. I hope this method is a lot less troublesome than the previous one of converting VM images.

This post ‘Installing Juniper Firefly (vSRX) in VirtualBox using Vagrant’ first appeared on https://techandtrains.com/.