$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ ./stack.sh
$ find /dev/ -type b | grep vd
/dev/vda
Once the new volume has been attached, the same command shows the new block device:
$ find /dev/ -type b | grep vd
/dev/vda
/dev/vdb <--- There you are!
$ sudo fdisk /dev/vdb
...
Command (m for help): p
Disk /dev/vdb: 1073 MB, 1073741824 bytes
...
Command (m for help): n
...
Select (default p): <Enter>
Partition number (1-4, default 1): <Enter>
First sector (2048-2097151, default 2048): <Enter>
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151): <Enter>
Using default value 2097151
Command (m for help): p
...
Device Boot      Start         End      Blocks   Id  System
/dev/vdb1         2048     2097151     1047552   83  Linux
Command (m for help): w
$ sudo mkfs.ext4 /dev/vdb1
...
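To double-check the new filesystem, you can query the partition with blkid (the UUID below is elided; yours will differ):
$ sudo blkid /dev/vdb1
/dev/vdb1: UUID="..." TYPE="ext4"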
$ sudo mount /dev/vdb1 /mnt
$ mount
/dev/vdb1 on /mnt type ext4 (...)
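If you want the volume mounted automatically after a reboot, one option (just a sketch, assuming the /mnt mount point used above) is to append an entry to /etc/fstab:
$ echo '/dev/vdb1 /mnt ext4 defaults 0 2' | sudo tee -a /etc/fstab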
$ cd devstack
$ ./unstack.sh
Add the following to your localrc file:
# enable neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
$ ./stack.sh
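Once stack.sh finishes, it's worth confirming that the Neutron agents came up. The output below is a sketch of a healthy single-node setup; the ids and hostnames will differ:
$ neutron agent-list
+-----+--------------------+------+-------+----------------+
| id  | agent_type         | host | alive | admin_state_up |
+-----+--------------------+------+-------+----------------+
| ... | Open vSwitch agent | ...  | :-)   | True           |
| ... | DHCP agent         | ...  | :-)   | True           |
| ... | L3 agent           | ...  | :-)   | True           |
| ... | Metadata agent     | ...  | :-)   | True           |
+-----+--------------------+------+-------+----------------+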
OpenStack Havana was released on October 17, 2013, so we're moving on to Havana. Before starting the main exercises, there are a few things we need to do first.
First, stop devstack and pull down the latest devstack:
$ cd devstack
$ ./unstack.sh
$ git pull
Second, upgrade Open vSwitch. The openvswitch package shipped with Ubuntu 12.04 is too old to work properly with the latest Neutron, so install the newer one from the Ubuntu Cloud Archive:
$ echo 'deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main' | sudo tee /etc/apt/sources.list.d/cloud-archive.list
$ sudo apt-get update
$ sudo apt-get install openvswitch-common openvswitch-datapath-dkms openvswitch-switch
$ sudo reboot
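After the reboot, confirm that the upgraded packages are installed and the switch daemon is running (output elided):
$ dpkg -l | grep openvswitch
...
$ sudo ovs-vsctl show
...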
Normally, a production OpenStack platform consists of three kinds of nodes: controller node(s), network node(s), and compute node(s). In the previous exercise, we built a single-node OpenStack setup. Let's call this node the controller node in this document, although it also acts as a network node and a compute node. Now we're going to add an extra compute node.
Modify your localrc like this, replacing the <...> placeholders with your own settings.
# localrc
DATABASE_PASSWORD=<password>
RABBIT_PASSWORD=<password>
SERVICE_TOKEN=<password>
SERVICE_PASSWORD=<password>
ADMIN_PASSWORD=<password>
# enable neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
MULTI_HOST=1
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True
HOST_IP=<ip address of this node>
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
And start devstack:
$ cd devstack
$ ./stack.sh
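When stack.sh completes, a quick sanity check from the controller never hurts. devstack ships an openrc file with credentials (output elided):
$ source openrc admin admin
$ nova service-list
...
$ neutron agent-list
...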
The host environment of the extra compute node is just the same as that of the original node, and how you set it up is up to you. For example, if you installed the controller node as a VM, you can simply clone a new VM from it and give it a different IP address and hostname. Also, don't forget to upgrade Open vSwitch on this node just as we did above for the controller node.
Modify your localrc like this, replacing the <...> placeholders with your own settings.
# localrc
DATABASE_PASSWORD=<password>
RABBIT_PASSWORD=<password>
SERVICE_TOKEN=<password>
SERVICE_PASSWORD=<password>
ADMIN_PASSWORD=<password>
MULTI_HOST=1
SERVICE_HOST=<ip address of the controller node>
GLANCE_HOSTPORT=$SERVICE_HOST:9292
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MATCHMAKER_REDIS_HOST=$SERVICE_HOST
ENABLED_SERVICES=n-cpu,n-api,n-novnc,rabbit,q-agt,neutron
Q_PLUGIN=ml2
ENABLE_TENANT_TUNNELS=True
HOST_IP=<ip address of this node>
VNCSERVER_LISTEN=$HOST_IP
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
And start devstack:
$ cd devstack
$ ./stack.sh
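After stack.sh finishes on the compute node, go back to the controller and verify that the new node registered: you should see a second nova-compute service and a second Open vSwitch agent. The hostnames below are hypothetical:
$ source openrc admin admin
$ nova service-list | grep nova-compute
| nova-compute | controller | nova | enabled | up | ... |
| nova-compute | compute1   | nova | enabled | up | ... |
$ neutron agent-list | grep 'Open vSwitch agent'
...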