The devstack project is an interesting tool to deploy a complete OpenStack development environment from source code. I’ve been using it for a year in my development activities on Neutron. For that, I used to set up an isolated VM (libvirt or VirtualBox through Vagrant), but I found that environment isn’t very efficient (too slow and it uses a lot of memory). Furthermore, when you are developing the network stack, you need to set up more than one node (three is a good number to test network virtualization). This is why I tried to set up that environment inside containers (LXC).
Set up the LXC environment
I work on the Debian testing release (Jessie), which uses a 3.11 kernel. devstack supports the Debian distribution thanks to Emilien Macchi’s commit. I use the experimental LXC Debian packages to get the new ‘lxc-debconfig’ template, which is more efficient for deploying a Debian container and can use a preseed config file to set up the container:
$ sudo sh -c "echo 'deb http://ftp.debian.org/debian experimental main' > /etc/apt/sources.list.d/experimental.list"
$ sudo apt-get update
$ sudo apt-get -t experimental install -y lxc lxc-stuff
You also need cgroup to be mounted at every boot by adding a line to /etc/fstab:
$ sudo sh -c "echo 'cgroup /sys/fs/cgroup cgroup defaults 0 0' >> /etc/fstab"
$ sudo mount /sys/fs/cgroup
Then check if LXC is correctly installed and supported by your kernel:
$ sudo lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.11-2-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Note: the Debian testing kernel (3.11) does not yet support the CONFIG_USER_NS kernel option.
Then, you can create your first RootFS with the Debian template:
$ sudo lxc-create -n container-template -t debian
You can specify a preseed file to configure the installation. I choose the ‘Debian GNU/Linux “jessie”’ distribution and answer the questions as I see fit. By default, the ‘debconfig’ template asks for a bridge name to plug the first container interface into. If you have libvirt installed on your host, a NATed network is created by default, so you can provide its default bridge (named ‘virbr0’ by default) to give the container Internet access.
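If you are not sure that this libvirt default network exists and is active, you can quickly check it with the libvirt CLI before using its bridge (a small sanity check; the network is named ‘default’ out of the box):
$ sudo virsh net-list --all
$ sudo virsh net-start default   # only if it is not active yet
$ brctl show virbr0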
Now, the container is ready. I changed some configuration in that container to be able to mount the host folder where I put all the cloned OpenStack sources, and to be able to run libvirt, create tun interfaces, access some devices (/dev/mem, /dev/kvm, /dev/net/tun) and create network namespaces inside that container (this LXC config still has some problems with OpenStack, especially with the Cinder and tgt setup, which I don’t use):
$ sudo cat container-template/config
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
lxc.network.type = empty
lxc.rootfs = /var/lib/lxc/container-template/rootfs
# /var/lib/lxc/container-template/config
## Container
lxc.utsname = container-template
lxc.rootfs = /var/lib/lxc/container-template/rootfs
lxc.arch = x86_64
#lxc.console = /var/log/lxc/container-template.console
lxc.tty = 6
lxc.pts = 1024
## Capabilities
lxc.cap.drop = audit_control
lxc.cap.drop = audit_write
lxc.cap.drop = linux_immutable
lxc.cap.drop = mac_admin
lxc.cap.drop = mac_override
# To be able libvirt start and identify qemu version
#lxc.cap.drop = setpcap
# To enable to add netns
#lxc.cap.drop = sys_admin
lxc.cap.drop = sys_module
lxc.cap.drop = sys_pacct
# To access /dev/mem
#lxc.cap.drop = sys_rawio
lxc.cap.drop = sys_time
## Devices
lxc.kmsg = 0
lxc.autodev = 0
# Allow all devices
#lxc.cgroup.devices.allow = a
# Deny all devices
lxc.cgroup.devices.deny = a
# Allow to mknod all devices (but not using them)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
# /dev/mem
lxc.cgroup.devices.allow = c 1:1 rwm
# /dev/input/mice
lxc.cgroup.devices.allow = c 13:63 rwm
# /dev/console
lxc.cgroup.devices.allow = c 5:1 rwm
# /dev/full
lxc.cgroup.devices.allow = c 1:7 rwm
# /dev/fuse
lxc.cgroup.devices.allow = c 10:229 rwm
# /dev/hpet
#lxc.cgroup.devices.allow = c 10:228 rwm
# /dev/kvm
lxc.cgroup.devices.allow = c 10:232 rwm
# /dev/loop*
#lxc.cgroup.devices.allow = b 7:* rwm
# /dev/loop-control
#lxc.cgroup.devices.allow = c 10:237 rwm
# /dev/null
lxc.cgroup.devices.allow = c 1:3 rwm
# /dev/ptmx
lxc.cgroup.devices.allow = c 5:2 rwm
# /dev/pts/*
lxc.cgroup.devices.allow = c 136:* rwm
# /dev/random
lxc.cgroup.devices.allow = c 1:8 rwm
# /dev/rtc
lxc.cgroup.devices.allow = c 254:0 rm
# /dev/tty
lxc.cgroup.devices.allow = c 5:0 rwm
# /dev/urandom
lxc.cgroup.devices.allow = c 1:9 rwm
# /dev/zero
lxc.cgroup.devices.allow = c 1:5 rwm
# /dev/net/tun
lxc.cgroup.devices.allow = c 10:200 rwm
## Limits
#lxc.cgroup.cpu.shares = 1024
#lxc.cgroup.cpuset.cpus = 0
#lxc.cgroup.memory.limit_in_bytes = 256M
#lxc.cgroup.memory.memsw.limit_in_bytes = 1G
## Filesystem
lxc.mount.entry = proc proc proc rw,nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs rw 0 0
#lxc.mount.entry = /srv/container-template /var/lib/lxc/container-template/rootfs/srv/container-template none defaults,bind 0 0
lxc.mount.entry = /home/doude/OpenStack opt/stack none defaults,bind 0 0
## Public network (NATed to internet)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:2E:BE:57:F7
lxc.network.link = virbr0
lxc.network.name = eth0
# Private network (isolated)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.hwaddr = 00:EE:2E:BE:57:F7
lxc.network.link = virbr1
lxc.network.name = eth1
Note: I configure the container to mount proc and sysfs in read-write mode, whereas LXC usually mounts them read-only. I can afford that because this is a development and testing environment.
I also add a second network interface which I plug into an isolated network (created with libvirt). I will use that network to transport the Neutron tunneling traffic between nodes (controller and compute). The NATed network will be my provider network (external network for the routers’ gateways and the floating IP pool).
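For reference, here is a minimal sketch of how such an isolated network could be defined with libvirt (the network name ‘private’ and the XML file name are my choice; with no <forward> element libvirt creates an isolated network on the ‘virbr1’ bridge used in the container config above):
$ cat private-net.xml
<network>
  <name>private</name>
  <bridge name='virbr1' stp='on' delay='0'/>
</network>
$ sudo virsh net-define private-net.xml
$ sudo virsh net-start private
$ sudo virsh net-autostart private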
You can also see the mount entry which binds a shared folder from the host into all my containers. I plan to place all the OpenStack git repositories in that folder, so all my OpenStack nodes will efficiently share the same source code base, which I can edit from the host with my favorite IDE.
Before starting the container, I create the mount destination folder inside the container’s RootFS:
$ sudo mkdir container-template/rootfs/opt/stack
$ sudo chown 1000:1000 container-template/rootfs/opt/stack
If you didn’t provide a root password or user (during the container installation or through a preseed file), you can easily chroot into the RootFS of the container and set the root password:
$ sudo chroot container-template/rootfs/
root@container:/# passwd
...
We also need to create some devices:
root@container:/# mkdir /dev/net
root@container:/# mknod /dev/net/tun c 10 200
root@container:/# chmod 666 /dev/net/tun
root@container:/# exit
$
Then, I can start the template container:
$ sudo lxc-start -n container-template -d
And access its console with:
$ sudo lxc-console -n container-template
You can leave the console with ctrl+a q.
Now, I can log in with the root or user account and create a user if needed (devstack needs one). Then I apply my personal configuration.
Note: I recommend setting up a local package cache on your host (like apt-cacher) and using it as the apt proxy for your containers and host to save some Internet bandwidth.
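For example, assuming an apt-cacher (or apt-cacher-ng) instance listening on the host at the default port 3142, the containers can be pointed at it with a single apt configuration snippet (the IP is the libvirt NATed bridge address used above):
$ cat /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://192.168.122.1:3142";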
Create the devstack controller container
When my container template is ready, I can clone it to create the devstack controller node:
$ sudo lxc-clone container-template controller
Created container controller as copy of container-template
I disable the DHCP auto-configuration in that new container and set a static IP associated with the name ‘controller’.
$ cat /etc/network/interfaces
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.
auto lo
iface lo inet loopback
# Public network
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.122.200/24
gateway 192.168.122.1
dns-nameservers 192.168.122.1
# Private network
auto eth1
iface eth1 inet static
address 192.168.200.200/24
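To make the ‘controller’ name resolvable, I also add a matching entry to /etc/hosts on the host and on the containers (a simple sketch; adapt the IP to your own addressing plan):
$ sudo sh -c 'echo "192.168.122.200 controller" >> /etc/hosts'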
Now, I can install and start the devstack controller node:
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ cat localrc
disable_service n-net
disable_service n-obj
disable_service tempest
disable_service cinder
disable_service c-api
disable_service c-vol
disable_service c-sch
#disable_service n-novnc
enable_service horizon
disable_service n-xvnc
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
#enable_service n-spice
enable_service n-vnc
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=password
RABBIT_PASSWORD=guest
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken
ROOTSLEEP=0
RECLONE=True
#RECLONE=False
#OFFLINE=True
DATA_DIR=$TOP_DIR/data
SCREEN_LOGDIR=/opt/stack/log/controller
VERBOSE=False
MULTI_HOST=True
HOST_IP=192.168.122.200
### NOVA ###
VNCSERVER_LISTEN=0.0.0.0
#NOVA_VIF_DRIVER=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver
### NEUTRON ###
### ML2 plugin ###
ENABLE_TENANT_TUNNELS=True
Q_ML2_PLUGIN_MECHANISM_DRIVERS=linuxbridge
Q_AGENT=linuxbridge
Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,gre,vxlan
# Prevent L3 agent from using br-ex
PUBLIC_BRIDGE=
# L2 population
Q_ML2_PLUGIN_MECHANISM_DRIVERS=$Q_ML2_PLUGIN_MECHANISM_DRIVERS,l2population
$ cat local.conf
### POST CONFIGURATION ###
[[post-config|$NOVA_CONF]]
[DEFAULT]
vnc_enabled=True
vnc_keymap=fr
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
libvirt_inject_key = False
libvirt_inject_partition = -2
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types=vxlan
[vxlan]
enable_vxlan=true
l2_population=True
local_ip=192.168.200.200
[agent]
l2_population=True
[linux_bridge]
tenant_network_type=vxlan
tunnel_type=vxlan
physical_interface_mappings = public:eth0
[ml2_type_flat]
flat_networks = public
[ml2_type_vxlan]
vni_ranges = 1001:2000
# Needed on kernel version under 3.8
#vxlan_group = 224.0.0.1
[[post-config|/$Q_DHCP_CONF_FILE]]
[DEFAULT]
enable_isolated_metadata = True
dnsmasq_dns_server = 8.8.8.8
dnsmasq_config_file = /etc/dnsmasq.conf
[[post-config|/$Q_L3_CONF_FILE]]
[DEFAULT]
external_network_bridge = ""
$ ./stack.sh
...
Using mysql database backend
Installing package prerequisites...done
Installing OpenStack project source...done
Starting RabbitMQ...done
Configuring and starting MySQL...done
Starting Keystone...done
Configuring and starting Horizon...done
Configuring Glance...done
Configuring Neutron...done
Configuring Nova...done
Starting Glance...done
Starting Nova API...done
Starting Neutron...done
Creating initial neutron network elements...done
Starting Nova...done
Uploading images...[\]
Horizon is now available at http://192.168.122.200/
Keystone is serving at http://192.168.122.200:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: admin
This is your host ip: 192.168.122.200
done
stack.sh completed in 131 seconds....[/]
$
To create the public provider network (floating IP pool and router gateway):
$ cd devstack; source openrc admin admin
$ neutron net-create --shared public --provider:network_type flat --provider:physical_network public --router:external=True
$ neutron subnet-create public 192.168.122.0/24 --name subnet-internet --gateway 192.168.122.1 --ip-version 4 --allocation-pool start=192.168.122.230,end=192.168.122.245
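The routers’ gateways can then be plugged into that external network, for example (the router name is arbitrary):
$ neutron router-create router1
$ neutron router-gateway-set router1 public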
This devstack configuration starts Neutron with the ML2 plugin and the Linux Bridge agent, using VXLAN tunneling. For that, the LB agent needs the Linux kernel VXLAN support (which appeared in kernel 3.7), so I need to load that module from the host:
$ sudo modprobe vxlan
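A quick check that the host kernel is recent enough and that the module is actually loaded (assuming the 3.11 kernel shown earlier):
$ uname -r
$ lsmod | grep vxlan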
Note: when you use the LB agent with VXLAN tunneling, you need to account for the 50-octet VXLAN encapsulation overhead, either by increasing the MTU of the interface that carries the tunneled packets or by lowering the guest MTU by 50 octets, as explained in the OpenStack documentation.
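One way to handle this is to advertise a lower MTU to the guests through the DHCP agent’s dnsmasq configuration file referenced in local.conf above (a sketch, assuming a standard 1500-byte MTU on the underlying interface, hence 1450 octets for the guests; option 26 is the DHCP interface MTU option):
$ cat /etc/dnsmasq.conf
dhcp-option-force=26,1450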
You can also use the OVS agent (the default devstack agent), but I recommend the LB agent with VXLAN if you have a recent Linux kernel: the network virtualization path is easier to understand (1 virtual network = 1 bridge, no flows…). To use OVS, you just need to install the Open vSwitch packages in the containers, then copy the generated Open vSwitch kernel module from one of your containers to the host and load it (a kernel module compiled in a container necessarily runs on the host kernel, since the kernel is shared between the host and the containers) (see the note at the end of the post).
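A sketch of that module copy, assuming the OVS datapath module was built with DKMS inside the ‘controller’ container (the exact path and module name depend on the OVS version and packaging):
$ sudo mkdir -p /lib/modules/$(uname -r)/updates/dkms
$ sudo cp /var/lib/lxc/controller/rootfs/lib/modules/$(uname -r)/updates/dkms/openvswitch.ko /lib/modules/$(uname -r)/updates/dkms/
$ sudo depmod -a
$ sudo modprobe openvswitch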
Note: with that environment, I don’t clearly understand how the same kernel modules (vxlan and openvswitch), shared between the host and the containers, can create isolated datapaths. Is it due to network namespaces?
Note: Perhaps this commit can explain how Open vSwitch datapaths are shared between containers.
Now, working with that development environment is more efficient and lightweight. You can create compute nodes by cloning the LXC template and running devstack with a compute node configuration.
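For reference, a compute node localrc could look roughly like this (a sketch only; the compute node IP and the service list are assumptions and must match your controller settings):
HOST_IP=192.168.122.201
SERVICE_HOST=192.168.122.200
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
MULTI_HOST=True
ENABLED_SERVICES=n-cpu,q-agt,neutron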
TODO
An easy improvement would be to manage the containers’ RootFS with Btrfs volumes (snapshots, cloning, space savings, speed…).
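For instance, if /var/lib/lxc is placed on a Btrfs filesystem, lxc-clone can create snapshot-based clones which are almost instantaneous and share their data blocks with the template (a sketch, assuming such a filesystem is available):
$ sudo lxc-clone -s container-template compute1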
Note: I didn’t use the OVS Debian packages; I rebuilt OVS from the git repository through the deb generator.