
Running OpenStack in Linux Containers (LXC on Debian)

The devstack project is an interesting tool for deploying a complete OpenStack development environment from source code. I’ve been using it for a year in my development activities on Neutron. For that, I used to set up an isolated VM (libvirt or VirtualBox through Vagrant), but I found that environment isn’t very efficient (too slow, and it uses a lot of memory). Furthermore, when you develop the network stack, you need to set up more than one node (three is a good number to test network virtualization). This is why I tried to set up that environment in containers (LXC).

Set up the LXC environment

I work on the Debian testing release (Jessie), which uses the 3.11 kernel. devstack supports the Debian distribution thanks to Emilien Macchi’s commit. I use the experimental LXC Debian packages to get the new ‘lxc-debconfig’ template, which is more efficient for deploying a Debian container and can use a preseed config file to set up the container:
$ echo 'deb http://ftp.debian.org/debian experimental main' | sudo tee /etc/apt/sources.list.d/experimental.list
$ sudo apt-get update
$ sudo apt-get -t experimental install -y lxc lxc-stuff
You also need to mount the cgroup filesystem at every boot by adding a line to /etc/fstab:
$ echo 'cgroup  /sys/fs/cgroup  cgroup  defaults  0   0' | sudo tee -a /etc/fstab
$ sudo mount /sys/fs/cgroup
Then check if LXC is correctly installed and supported by your kernel:
$ sudo lxc-checkconfig
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-3.11-2-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: missing
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Note: The Debian testing kernel (3.11) does not yet support the CONFIG_USER_NS kernel option.
Then, you can create your first RootFS with the Debian template:
$ sudo lxc-create -n container-template -t debian
You can specify a preseed file to configure the installation. I chose the ‘Debian GNU/Linux “jessie”’ distribution and answered the questions as convenient. By default, the ‘debconfig’ template asks for a bridge name to plug the container’s first interface into. If you have libvirt installed on your host, a NATed network is created by default, so you can provide its default bridge (named ‘virbr0’ by default) to give the container Internet access.
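If you are not sure that the libvirt default network is running, you can check and start it from the host. A minimal sketch using the standard virsh commands (the ‘default’ network and ‘virbr0’ names assume libvirt’s stock configuration):
$ sudo virsh net-list --all           # the 'default' network provides the virbr0 NAT bridge
$ sudo virsh net-start default        # start it if it is marked inactive
$ sudo virsh net-autostart default    # and bring it up automatically at boot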
Now, the container is ready. I changed some settings in that container to be able to mount the host folder where I put all the cloned OpenStack sources, and to be able to run libvirt, create tun interfaces, access some devices (/dev/mem, /dev/kvm, /dev/net/tun) and create network namespaces inside the container (this LXC config still has some problems with OpenStack, especially with the Cinder and tgt setup, which I don’t use):
$ sudo cat container-template/config
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
lxc.network.type = empty
lxc.rootfs = /var/lib/lxc/container-template/rootfs
# /var/lib/lxc/container-template/config

## Container
lxc.utsname = container-template
lxc.rootfs = /var/lib/lxc/container-template/rootfs
lxc.arch = x86_64
#lxc.console = /var/log/lxc/container-template.console
lxc.tty = 6
lxc.pts = 1024

## Capabilities
lxc.cap.drop = audit_control
lxc.cap.drop = audit_write
lxc.cap.drop = linux_immutable
lxc.cap.drop = mac_admin
lxc.cap.drop = mac_override
# Needed so libvirt can start and identify the QEMU version
#lxc.cap.drop = setpcap
# Needed to be able to add network namespaces
#lxc.cap.drop = sys_admin
lxc.cap.drop = sys_module
lxc.cap.drop = sys_pacct
# To access /dev/mem
#lxc.cap.drop = sys_rawio
lxc.cap.drop = sys_time
## Devices
lxc.kmsg = 0
lxc.autodev = 0
# Allow all devices
#lxc.cgroup.devices.allow = a
# Deny all devices
lxc.cgroup.devices.deny = a
# Allow mknod for all devices (but not using them)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m

# /dev/mem
lxc.cgroup.devices.allow = c 1:1 rwm
# /dev/input/mice
lxc.cgroup.devices.allow = c 13:63 rwm
# /dev/console
lxc.cgroup.devices.allow = c 5:1 rwm
# /dev/full
lxc.cgroup.devices.allow = c 1:7 rwm
# /dev/fuse
lxc.cgroup.devices.allow = c 10:229 rwm
# /dev/hpet
#lxc.cgroup.devices.allow = c 10:228 rwm
# /dev/kvm
lxc.cgroup.devices.allow = c 10:232 rwm
# /dev/loop*
#lxc.cgroup.devices.allow = b 7:* rwm
# /dev/loop-control
#lxc.cgroup.devices.allow = c 10:237 rwm
# /dev/null
lxc.cgroup.devices.allow = c 1:3 rwm
# /dev/ptmx
lxc.cgroup.devices.allow = c 5:2 rwm
# /dev/pts/*
lxc.cgroup.devices.allow = c 136:* rwm
# /dev/random
lxc.cgroup.devices.allow = c 1:8 rwm
# /dev/rtc
lxc.cgroup.devices.allow = c 254:0 rm
# /dev/tty
lxc.cgroup.devices.allow = c 5:0 rwm
# /dev/urandom
lxc.cgroup.devices.allow = c 1:9 rwm
# /dev/zero
lxc.cgroup.devices.allow = c 1:5 rwm
# /dev/net/tun
lxc.cgroup.devices.allow = c 10:200 rwm

## Limits
#lxc.cgroup.cpu.shares = 1024
#lxc.cgroup.cpuset.cpus = 0
#lxc.cgroup.memory.limit_in_bytes = 256M
#lxc.cgroup.memory.memsw.limit_in_bytes = 1G

## Filesystem
lxc.mount.entry = proc proc proc rw,nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs rw 0 0
#lxc.mount.entry = /srv/container-template /var/lib/lxc/container-template/rootfs/srv/container-template none defaults,bind 0 0
lxc.mount.entry = /home/doude/OpenStack opt/stack none defaults,bind 0 0

## Public network (NATed to internet)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:2E:BE:57:F7
lxc.network.link = virbr0
lxc.network.name = eth0

# Private network (isolated)
lxc.network.type = veth
lxc.network.flags = up
lxc.network.hwaddr = 00:EE:2E:BE:57:F7
lxc.network.link = virbr1
lxc.network.name = eth1
Note: I configure the container to mount proc and sysfs in read-write mode, whereas LXC usually mounts them read-only. I can afford to do that because this is a development and testing environment.
I also add a second network interface, plugged into an isolated network (created with libvirt). I will use that network to transport the Neutron tunneling traffic between nodes (controller and compute). The NATed network will be my provider network (external network for the routers’ gateways and the floating IP pool).
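As a sketch of how that isolated network can be created with libvirt (the ‘private’ network name, the ‘virbr1’ bridge and the 192.168.200.1 host address are my assumptions; a network definition without a <forward> element stays isolated from the outside):
$ cat private-net.xml
<network>
  <name>private</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'/>
</network>
$ sudo virsh net-define private-net.xml
$ sudo virsh net-start private
$ sudo virsh net-autostart private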
You can also see the mount entry that binds a shared folder from the host into all my containers. I plan to place all the OpenStack git repositories in that folder, so all my OpenStack nodes efficiently share the same source code base, which I can edit from the host with my favorite IDE.
Before starting the container, I create the mount destination folder in the container’s RootFS:
$ sudo mkdir container-template/rootfs/opt/stack
$ sudo chown 1000:1000 container-template/rootfs/opt/stack
If you didn’t provide a root password or user (during the container installation or through a preseed file), you can easily chroot into the container’s RootFS and set the root password:
$ sudo chroot container-template/rootfs/
root@container:/# passwd
...
We also need to create some devices:
root@container:/# mkdir /dev/net 
root@container:/# mknod /dev/net/tun c 10 200 
root@container:/# chmod 666 /dev/net/tun
root@container:/# exit
$ 
Then, I can start the template container:
$ sudo lxc-start -n container-template -d
And access its console with:
$ sudo lxc-console -n container-template
You can leave the console with ctrl+a q.
Now, I can log in with the root or user account and create a user if needed (devstack needs one). Then I apply all my personal configurations.
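If no suitable user exists yet, here is a minimal sketch of creating one the usual devstack way: a ‘stack’ user with passwordless sudo and /opt/stack as its home. The uid 1000 matches the chown done earlier on the bind-mounted folder and is an assumption on my side (install sudo first if it is missing):
root@container-template:~# useradd -u 1000 -s /bin/bash -d /opt/stack -m stack
root@container-template:~# echo 'stack ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/stack
root@container-template:~# chmod 0440 /etc/sudoers.d/stack
root@container-template:~# passwd stack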
Note: I recommend setting up a local package cache on your host (like apt-cacher) and using it as the APT proxy for the containers and the host, to save some Internet bandwidth.
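For example, assuming apt-cacher runs on the host and listens on the NAT bridge address 192.168.122.1 with its default port 3142, each container only needs an APT proxy snippet:
root@container-template:~# echo 'Acquire::http::Proxy "http://192.168.122.1:3142";' > /etc/apt/apt.conf.d/01proxy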

Create the devstack controller container

When my container template is ready, I can clone it to create the devstack controller node:
$ sudo lxc-clone container-template controller
Created container controller as copy of container-template
I disable DHCP auto-configuration in that new container and set a static IP associated with the name ‘controller’:
$ cat /etc/network/interfaces 
# Used by ifup(8) and ifdown(8). See the interfaces(5) manpage or
# /usr/share/doc/ifupdown/examples for more information.

auto lo
iface lo inet loopback

# Public network
auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
  address 192.168.122.200/24
  gateway 192.168.122.1
  dns-nameservers 192.168.122.1

# Private network
auto eth1
iface eth1 inet static
    address 192.168.200.200/24
Now, I can install and start the devstack controller node:
$ git clone https://github.com/openstack-dev/devstack.git
$ cd devstack
$ cat localrc
disable_service n-net
disable_service n-obj
disable_service tempest
disable_service cinder
disable_service c-api
disable_service c-vol
disable_service c-sch
#disable_service n-novnc
enable_service horizon
disable_service n-xvnc
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service neutron
#enable_service n-spice
enable_service n-vnc

ADMIN_PASSWORD=admin
MYSQL_PASSWORD=password
RABBIT_PASSWORD=guest
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

ROOTSLEEP=0
RECLONE=True
#RECLONE=False
#OFFLINE=True
DATA_DIR=$TOP_DIR/data
SCREEN_LOGDIR=/opt/stack/log/controller
VERBOSE=False

MULTI_HOST=True
HOST_IP=192.168.122.200

### NOVA ###
VNCSERVER_LISTEN=0.0.0.0
#NOVA_VIF_DRIVER=nova.virt.libvirt.vif.NeutronLinuxBridgeVIFDriver

### NEUTRON ###
### ML2 plugin ###
ENABLE_TENANT_TUNNELS=True
Q_ML2_PLUGIN_MECHANISM_DRIVERS=linuxbridge
Q_AGENT=linuxbridge

Q_ML2_PLUGIN_TYPE_DRIVERS=flat,vlan,gre,vxlan

# Prevent L3 agent from using br-ex
PUBLIC_BRIDGE=

# L2 population
Q_ML2_PLUGIN_MECHANISM_DRIVERS=$Q_ML2_PLUGIN_MECHANISM_DRIVERS,l2population

$ cat local.conf
### POST CONFIGURATION ###
[[post-config|$NOVA_CONF]]
[DEFAULT]
vnc_enabled=True
vnc_keymap=fr
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$HOST_IP
libvirt_inject_key = False
libvirt_inject_partition = -2

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types=vxlan

[vxlan]
enable_vxlan=true
l2_population=True
local_ip=192.168.200.200

[agent]
l2_population=True

[linux_bridge]
tenant_network_type=vxlan
tunnel_type=vxlan
physical_interface_mappings = public:eth0

[ml2_type_flat]
flat_networks = public

[ml2_type_vxlan]
vni_ranges = 1001:2000
# Needed on kernel version under 3.8
#vxlan_group = 224.0.0.1

[[post-config|/$Q_DHCP_CONF_FILE]]
[DEFAULT]
enable_isolated_metadata = True
dnsmasq_dns_server = 8.8.8.8
dnsmasq_config_file = /etc/dnsmasq.conf

[[post-config|/$Q_L3_CONF_FILE]]
[DEFAULT]
external_network_bridge = ""
    
$ ./stack.sh
...
Using mysql database backend
Installing package prerequisites...done
Installing OpenStack project source...done
Starting RabbitMQ...done
Configuring and starting MySQL...done
Starting Keystone...done
Configuring and starting Horizon...done
Configuring Glance...done
Configuring Neutron...done
Configuring Nova...done
Starting Glance...done
Starting Nova API...done
Starting Neutron...done
Creating initial neutron network elements...done
Starting Nova...done
Uploading images...[\]


Horizon is now available at http://192.168.122.200/
Keystone is serving at http://192.168.122.200:5000/v2.0/
Examples on using novaclient command line is in exercise.sh
The default users are: admin and demo
The password: admin
This is your host ip: 192.168.122.200
done
stack.sh completed in 131 seconds....[/]
$
To create the public provider network (floating IP pool and router gateway):
$ cd devstack; source openrc admin admin
$ neutron net-create --shared public --provider:network_type flat --provider:physical_network public --router:external=True
$ neutron subnet-create public 192.168.122.0/24 --name subnet-internet --gateway 192.168.122.1 --ip-version 4 --allocation-pool start=192.168.122.230,end=192.168.122.245
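As a quick smoke test (a sketch only: the image name, flavor and CIDR below are illustrative, use whatever your devstack run registered), you can create a tenant network as the demo user and boot an instance on it:
$ source openrc demo demo
$ neutron net-create private-net
$ neutron subnet-create private-net 10.0.10.0/24 --name private-subnet --dns-nameserver 8.8.8.8
$ neutron net-list
$ nova boot --flavor m1.tiny --image <cirros-image-name> --nic net-id=<private-net-uuid> test-vm
$ nova list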
This devstack configuration starts Neutron with the ML2 plugin and the Linux Bridge agent, using VXLAN tunneling. For that, the LB agent needs Linux kernel VXLAN support (which appeared in release 3.7), so I need to load that module from the host:
$ sudo modprobe vxlan
Note: When you use the LB agent with VXLAN tunneling, you need to account for the 50 octets of VXLAN overhead, either by increasing the MTU of the interface that carries the tunneled packets or by lowering the instances’ MTU, as explained in the OpenStack documentation.
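Since the local.conf above already points the DHCP agent at /etc/dnsmasq.conf, one convenient way to handle this is to let dnsmasq advertise a lowered MTU to the instances (1450 here, assuming a 1500-byte MTU on the underlying interface):
$ cat /etc/dnsmasq.conf
# DHCP option 26: interface MTU; 1500 minus 50 bytes of VXLAN overhead
dhcp-option-force=26,1450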
You can also use the OVS agent (the default in devstack), but I recommend the LB agent with VXLAN if you have a recent Linux kernel: the network path is easier to understand (1 virtual network = 1 bridge, no flows…). You just need to install the Open vSwitch packages in the containers, then copy the Open vSwitch kernel module generated in one of the containers to the host and load it there (a kernel module compiled in a container necessarily runs on the host kernel, since the kernel is shared between the host and the containers) (Note-id).
Note: With that environment, I don’t clearly understand how the same kernel modules (vxlan and openvswitch), shared between the host and the containers, can create isolated datapaths. Is it due to network namespaces?
Note: Perhaps this commit can explain how Open vSwitch datapaths are shared between containers.
Now, playing with that development environment is much more efficient and lightweight. You can create compute nodes by cloning the LXC template and running devstack with a compute node configuration.
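A minimal localrc sketch for such a compute node, assuming it gets 192.168.122.201 and 192.168.200.201 on the two networks and points at the controller above (the addresses and enabled services are illustrative):
HOST_IP=192.168.122.201
SERVICE_HOST=192.168.122.200
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292

ADMIN_PASSWORD=admin
MYSQL_PASSWORD=password
RABBIT_PASSWORD=guest
SERVICE_PASSWORD=password
SERVICE_TOKEN=tokentoken

MULTI_HOST=True
ENABLED_SERVICES=n-cpu,neutron,q-agt

# Same ML2/Linux Bridge agent settings as on the controller
Q_AGENT=linuxbridge
ENABLE_TENANT_TUNNELS=True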

TODO

An easy improvement would be to use Btrfs volumes to manage the containers’ RootFS (snapshots, cloning, space savings, speed…).
Note-id: I didn’t use the OVS Debian packages; I rebuilt OVS from the git repository with its Debian package generator.
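A sketch of that rebuild, roughly following the in-tree Debian packaging instructions shipped with Open vSwitch (the repository URL, the build-dependency handling and the options are assumptions and may vary with the OVS version):
$ git clone https://github.com/openvswitch/ovs.git && cd ovs
$ sudo apt-get install build-essential fakeroot
$ dpkg-checkbuilddeps                                  # lists any missing build dependencies
$ DEB_BUILD_OPTIONS='parallel=4 nocheck' fakeroot debian/rules binary
$ ls ../*.deb                                          # the generated openvswitch packages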
