Deploying OpenStack Mitaka with Kolla, Docker, and Ansible

It’s true, Mitaka is not quite released yet. That said, these instructions haven’t changed since Liberty and will stay relevant once Mitaka is officially tagged.

The requirements and steps to build Kolla images are provided at docs.openstack.org. Those steps have already been completed, and the Docker images exist in my private registry.

A bit about my environment before we begin.

3 identical custom servers with the following specs:

These servers are interconnected at 10Gb using a Netgear XS708E switch. I have one 10Gb interface (eth3) dedicated to VM traffic for Neutron. The other is in a bond (bond0) with one of my 1Gb NICs for HA.

I will be deploying ceph, haproxy, keepalived, rabbitmq, mariadb w/ galera, and memcached alongside the other OpenStack services with Kolla. To start, we need to do some prep work on the physical disks so that Kolla picks them up during the ceph bootstrap process. The same procedure applies when adding new disks in the future.

The disks I will be using are /dev/sde, /dev/sdf, and /dev/sdg, with external journals on my PCIe SSD located at /dev/nvme0n1. I will also be setting up an OSD on the SSD to use as a ceph cache tier.

The bootstrap process ties the appropriate devices together using GPT partition names. For /dev/sde I create a fresh partition table with a new partition labeled KOLLA_CEPH_OSD_BOOTSTRAP_1. This explicit naming is so Kolla never, ever messes with a disk it shouldn't.

# parted /dev/sde -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_1 1 -1
root@ubuntu1:~# parted /dev/sde print
Model: ATA ST4000DM000-1F21 (scsi)
Disk /dev/sde: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number Start End Size File system Name Flags
1 1049kB 4001GB 4001GB btrfs KOLLA_CEPH_OSD_BOOTSTRAP_1

The same method is used for /dev/sdf and /dev/sdg, but with the labels KOLLA_CEPH_OSD_BOOTSTRAP_2 and KOLLA_CEPH_OSD_BOOTSTRAP_3 respectively. Now we have to set up the external journals for each of those OSDs (you can co-locate the journals as well by using the label KOLLA_CEPH_OSD_BOOTSTRAP).
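The labeling step can be scripted across all three data disks. This is a dry-run sketch: it echoes the parted commands from this article rather than executing them, so you can review before anything touches a disk (remove the echo to run them for real).

```shell
# Label each data disk for the ceph bootstrap.
# Dry run: the parted commands are echoed, not executed.
i=1
for disk in /dev/sde /dev/sdf /dev/sdg; do
  echo parted "$disk" -s -- mklabel gpt mkpart "KOLLA_CEPH_OSD_BOOTSTRAP_$i" 1 -1
  i=$((i + 1))
done
```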

The external journal labels are simply the bootstrap label with ‘_J’ appended. For example, the journal for /dev/sde would be KOLLA_CEPH_OSD_BOOTSTRAP_1_J. Once those labels are in place, the Kolla bootstrap process will happily set up ceph on those disks. If you get a label wrong, the bootstrap simply won't pick up that disk, and you can rerun the playbooks after correcting the issue.
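As a sketch, the journal partitions could be carved out on the SSD like this. The 340000MB start offset mirrors my layout but is an assumption; check your own free space with `parted print` first. Commands are echoed as a dry run, not executed.

```shell
# Carve one 10GB journal partition per data OSD on the SSD.
# The start offset is an assumption for illustration; adjust for your disk.
start=340000
for i in 1 2 3; do
  end=$((start + 10000))
  echo parted /dev/nvme0n1 -s -- mkpart "KOLLA_CEPH_OSD_BOOTSTRAP_${i}_J" "${start}MB" "${end}MB"
  start=$end
done
```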

The final look of my disks, with the cache-tier OSD and journals, is as follows:

Model: ATA ST4000DM000-1F21 (scsi)
Disk /dev/sde: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number Start End Size File system Name Flags
 1 1049kB 4001GB 4001GB btrfs KOLLA_CEPH_OSD_BOOTSTRAP_1

Model: ATA ST4000DM000-1F21 (scsi)
Disk /dev/sdf: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number Start End Size File system Name Flags
 1 1049kB 4001GB 4001GB btrfs KOLLA_CEPH_OSD_BOOTSTRAP_2

Model: ATA ST4000DM000-1F21 (scsi)
Disk /dev/sdg: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number Start End Size File system Name Flags
 1 1049kB 4001GB 4001GB btrfs KOLLA_CEPH_OSD_BOOTSTRAP_3

Model: Unknown (unknown)
Disk /dev/nvme0n1: 400GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number Start End Size File system Name Flags
 1 1049kB 100GB 100GB btrfs docker
 2 100GB 330GB 230GB KOLLA_CEPH_OSD_CACHE_BOOTSTRAP_1
 3 330GB 340GB 9999MB KOLLA_CEPH_OSD_CACHE_BOOTSTRAP_1_J
 4 340GB 350GB 10.0GB KOLLA_CEPH_OSD_BOOTSTRAP_1_J
 5 350GB 360GB 10.0GB KOLLA_CEPH_OSD_BOOTSTRAP_2_J
 6 360GB 370GB 9999MB KOLLA_CEPH_OSD_BOOTSTRAP_3_J
 7 370GB 380GB 10.0GB KOLLA_CEPH_OSD_BOOTSTRAP_4_J
 8 380GB 390GB 10.0GB KOLLA_CEPH_OSD_BOOTSTRAP_5_J
 9 390GB 400GB 10.1GB KOLLA_CEPH_OSD_BOOTSTRAP_6_J

Next up is configuring my inventory. Normally, you won't need to configure more than the first 4 sections of your inventory if you have copied it from ansible/inventory/multinode, and that is all I have changed in this case as well. My inventory for my three hosts, ubuntu1, ubuntu2, and ubuntu3, is as follows:

# /etc/kolla/inventory
[control]
ubuntu[1:3]

[network]
ubuntu[1:3]

[compute]
ubuntu[1:3]

[storage]
ubuntu[1:3]

...snip...
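Before going further, it's worth confirming that Ansible can actually reach every host in the inventory. A minimal check, shown here as an echoed dry run since it assumes SSH access to the three hosts:

```shell
# Sanity-check connectivity to every inventory host before deploying.
# Dry run: the command is only printed; run it on the deployment host.
cmd="ansible -i /etc/kolla/inventory all -m ping"
echo "$cmd"
```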

Once that was finished, I modified my globals.yml for my environment. The final result, with every option I configured, is below (comment sections removed for brevity).

---
config_strategy: "COPY_ALWAYS"
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
kolla_internal_vip_address: "192.0.2.10"
kolla_internal_fqdn: "openstack-int.example.com"
kolla_external_vip_address: "203.0.113.5"
kolla_external_fqdn: "openstack.example.com"
kolla_external_vip_interface: "bond0.10"
kolla_enable_tls_external: "yes"
kolla_external_fqdn_cert: "/etc/kolla/haproxy.pem"
docker_registry: "registry.example.com:8182"
network_interface: "bond0.10"
tunnel_interface: "bond0.200"
neutron_external_interface: "eth3"
openstack_logging_debug: "True"
enable_ceph: "yes"
enable_cinder: "yes"
ceph_enable_cache: "yes"
enable_ceph_rgw: "yes"
ceph_osd_filesystem: "btrfs"
ceph_osd_mount_options: "defaults,compress=lzo,noatime"
ceph_cinder_pool_name: "cinder"
ceph_cinder_backup_pool_name: "cinder-backup"
ceph_glance_pool_name: "glance"
ceph_nova_pool_name: "nova"
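Kolla also ships a prechecks action that validates the hosts and configuration before anything is deployed; if your checkout includes it, a run would look like the following (echoed here as a dry run).

```shell
# Optional sanity pass before pulling images: validate the inventory
# and globals.yml with Kolla's prechecks. Dry run: command printed only.
cmd="kolla-ansible -i /etc/kolla/inventory prechecks"
echo "$cmd"
```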

And finally, the /etc/kolla/passwords.yml file. This contains, you guessed it, passwords. At the time of this writing it has the very bad default of “password” for every password. By the time of the Mitaka release this patch will have merged, and you will be able to run kolla-genpwd to populate this file with random passwords and UUIDs.
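Until then, a quick grep can flag which entries still carry the weak default. This sketch runs against a throwaway sample file (the two keys in it are made up for illustration); point the grep at /etc/kolla/passwords.yml for real use.

```shell
# Audit which keys in a passwords file still use the weak default "password".
# The sample file below is hypothetical; substitute /etc/kolla/passwords.yml.
cat > /tmp/passwords-sample.yml <<'EOF'
database_password: password
rabbitmq_password: G4lR9x
EOF
grep -n ': password$' /tmp/passwords-sample.yml
# → 1:database_password: password
```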

Once all of that was completed, I ran the pull playbooks to fetch all of the proper images to the proper hosts with the following command:

# time kolla-ansible -i /etc/kolla/inventory pull
Pulling Docker images : ansible-playbook -i /etc/kolla/inventory -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e action=pull /root/kolla/ansible/site.yml 

PLAY [ceph-mon;ceph-osd;ceph-rgw] ********************************************* 

GATHERING FACTS *************************************************************** 
ok: [ubuntu1]
ok: [ubuntu2]
ok: [ubuntu3]

TASK: [common | Pulling kolla-toolbox image] ********************************** 
changed: [ubuntu1]
changed: [ubuntu3]
changed: [ubuntu2]
...snip...

PLAY RECAP ******************************************************************** 
ubuntu1 : ok=55 changed=36 unreachable=0 failed=0 
ubuntu2 : ok=55 changed=36 unreachable=0 failed=0 
ubuntu3 : ok=55 changed=36 unreachable=0 failed=0 

real 5m2.662s
user 0m8.068s
sys 0m2.780s

After the images were pulled, I ran the actual OpenStack deployment, where the magic happens. From this point everything was automated (including the ceph cache tier and galera clustering) and I didn’t have to touch a thing!

# time ~/kolla/tools/kolla-ansible -i /etc/kolla/inventory deploy
Deploying Playbooks : ansible-playbook -i /etc/kolla/inventory -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e action=deploy /root/kolla/ansible/site.yml 

PLAY [ceph-mon;ceph-osd;ceph-rgw] ********************************************* 

GATHERING FACTS *************************************************************** 
ok: [ubuntu1]
ok: [ubuntu3]
ok: [ubuntu2]

TASK: [common | Ensuring config directories exist] **************************** 
changed: [ubuntu1] => (item=heka)
changed: [ubuntu2] => (item=heka)
changed: [ubuntu3] => (item=heka)
changed: [ubuntu1] => (item=cron)
changed: [ubuntu2] => (item=cron)
changed: [ubuntu3] => (item=cron)
changed: [ubuntu1] => (item=cron/logrotate)
changed: [ubuntu2] => (item=cron/logrotate)
changed: [ubuntu3] => (item=cron/logrotate)
...snip...

PLAY RECAP ******************************************************************** 
ubuntu1 : ok=344 changed=146 unreachable=0 failed=0 
ubuntu2 : ok=341 changed=144 unreachable=0 failed=0 
ubuntu3 : ok=341 changed=143 unreachable=0 failed=0 

real 7m32.476s
user 0m48.436s
sys 0m9.584s

And that’s it! OpenStack is deployed and good to go. In my case, I could access horizon at openstack.example.com with full SSL, thanks to the haproxy.pem I supplied. With a seven-and-a-half-minute run time, the speed of this deployment tool is hard to beat.
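A quick way to spot-check the SSL endpoint is a curl against the external FQDN. Shown here as an echoed dry run, since it assumes a machine that can resolve and reach the external VIP:

```shell
# Spot-check the HTTPS endpoint after deploy (hostname from this setup).
# Dry run: the command is only printed; run it where the VIP is reachable.
cmd="curl -s -o /dev/null -w '%{http_code}\n' https://openstack.example.com/"
echo "$cmd"
```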

Bonus: Docker containers running on the ubuntu1 host

# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
b3a0400b2502 registry.example.com:8182/kollaglue/ubuntu-source-horizon:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes horizon
bded510db134 registry.example.com:8182/kollaglue/ubuntu-source-heat-engine:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes heat_engine
388f4c6c1cd3 registry.example.com:8182/kollaglue/ubuntu-source-heat-api-cfn:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes heat_api_cfn
6d73a6aba1e5 registry.example.com:8182/kollaglue/ubuntu-source-heat-api:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes heat_api
8648565cdc50 registry.example.com:8182/kollaglue/ubuntu-source-cinder-backup:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes cinder_backup
73cb05710d46 registry.example.com:8182/kollaglue/ubuntu-source-cinder-volume:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes cinder_volume
92c4b7890bb7 registry.example.com:8182/kollaglue/ubuntu-source-cinder-scheduler:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes cinder_scheduler
fec67e07a216 registry.example.com:8182/kollaglue/ubuntu-source-cinder-api:2.0.0 "kolla_start" 8 minutes ago Up 8 minutes cinder_api
d22abb2f75fb registry.example.com:8182/kollaglue/ubuntu-source-neutron-metadata-agent:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes neutron_metadata_agent
12cd372d0804 registry.example.com:8182/kollaglue/ubuntu-source-neutron-l3-agent:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes neutron_l3_agent
6ada0dd5eff6 registry.example.com:8182/kollaglue/ubuntu-source-neutron-dhcp-agent:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes neutron_dhcp_agent
cd89ac90384a registry.example.com:8182/kollaglue/ubuntu-source-neutron-openvswitch-agent:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes neutron_openvswitch_agent
4eac98222be5 registry.example.com:8182/kollaglue/ubuntu-source-neutron-server:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes neutron_server
1f44c676f39d registry.example.com:8182/kollaglue/ubuntu-source-openvswitch-vswitchd:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes openvswitch_vswitchd
609adb430b0f registry.example.com:8182/kollaglue/ubuntu-source-openvswitch-db-server:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes openvswitch_db
96881dbecf8a registry.example.com:8182/kollaglue/ubuntu-source-nova-compute:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes nova_compute
9c3d58d59f3d registry.example.com:8182/kollaglue/ubuntu-source-nova-libvirt:2.0.0 "kolla_start" 9 minutes ago Up 9 minutes nova_libvirt
ab09c12c0d4d registry.example.com:8182/kollaglue/ubuntu-source-nova-conductor:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes nova_conductor
0d381b7f3757 registry.example.com:8182/kollaglue/ubuntu-source-nova-scheduler:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes nova_scheduler
58bc728e30ef registry.example.com:8182/kollaglue/ubuntu-source-nova-novncproxy:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes nova_novncproxy
c49c7703bbf0 registry.example.com:8182/kollaglue/ubuntu-source-nova-consoleauth:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes nova_consoleauth
799b7da9fac3 registry.example.com:8182/kollaglue/ubuntu-source-nova-api:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes nova_api
fd367be42634 registry.example.com:8182/kollaglue/ubuntu-source-glance-api:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes glance_api
34c69911d5bc registry.example.com:8182/kollaglue/ubuntu-source-glance-registry:2.0.0 "kolla_start" 10 minutes ago Up 10 minutes glance_registry
6adc4580aab3 registry.example.com:8182/kollaglue/ubuntu-source-keystone:2.0.0 "kolla_start" 11 minutes ago Up 11 minutes keystone
38e57a6b8405 registry.example.com:8182/kollaglue/ubuntu-source-rabbitmq:2.0.0 "kolla_start" 11 minutes ago Up 11 minutes rabbitmq
4e5662f74414 registry.example.com:8182/kollaglue/ubuntu-source-mariadb:2.0.0 "kolla_start" 12 minutes ago Up 12 minutes mariadb
52d766774cab registry.example.com:8182/kollaglue/ubuntu-source-memcached:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes memcached
02c793ecff9f registry.example.com:8182/kollaglue/ubuntu-source-keepalived:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes keepalived
feaeb72eaca5 registry.example.com:8182/kollaglue/ubuntu-source-haproxy:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes haproxy
806c4d9f9db8 registry.example.com:8182/kollaglue/ubuntu-source-ceph-rgw:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes ceph_rgw
fe1ddb781fef registry.example.com:8182/kollaglue/ubuntu-source-ceph-osd:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes ceph_osd_9
02d64b83b197 registry.example.com:8182/kollaglue/ubuntu-source-ceph-osd:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes ceph_osd_7
82d705e92421 registry.example.com:8182/kollaglue/ubuntu-source-ceph-osd:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes ceph_osd_5
dea36b30c249 registry.example.com:8182/kollaglue/ubuntu-source-ceph-osd:2.0.0 "kolla_start" 13 minutes ago Up 13 minutes ceph_osd_1
c7c65ad2f377 registry.example.com:8182/kollaglue/ubuntu-source-ceph-mon:2.0.0 "kolla_start" 15 minutes ago Up 15 minutes ceph_mon
407bcb0a393f registry.example.com:8182/kollaglue/ubuntu-source-cron:2.0.0 "kolla_start" 15 minutes ago Up 15 minutes cron
b696b905ac23 registry.example.com:8182/kollaglue/ubuntu-source-kolla-toolbox:2.0.0 "/bin/sleep infinity" 15 minutes ago Up 15 minutes kolla_toolbox
ceca142fb3be registry.example.com:8182/kollaglue/ubuntu-source-heka:2.0.0 "kolla_start" 15 minutes ago Up 15 minutes heka

13 thoughts on “Deploying OpenStack Mitaka with Kolla, Docker, and Ansible”

    • I do not. I plan on a follow-up article showing bare-metal setup (though I realize that is a bit backwards). I can tell you my build time was only 7 minutes; however, I have an apt-caching service in place that dramatically speeds up those builds.

  1. How does your internal and external vip map to actual networks in your setup? A word about that is helpful, even if you give examples of how someone else should approach that.

    • The internal and external VIPs are unused IPs on their respective networks. In this case the network interface was shared.

      So on bond0.10 I could reach network 203.0.113.0/24 and had a VIP of 203.0.113.5, which was an unused IP.

      Also on bond0.10 I could reach network 192.0.2.0/24 and had a VIP of 192.0.2.10, which was an unused IP.

      In a non-test setup you would likely have your internal and external VIPs on separate interfaces, but the same logic applies.

  2. Once all prerequisites are met, start the installation of OpenStack with Kolla. The first run usually takes a long time, because Docker images need to be pulled onto the target hosts, and even longer if the pull comes from the DockerHub registry instead of a local one.

    • In this case the images were on a local repo, but not on the machines. Pulling the images from the local repo was included in the time above.

      Pulling images from DockerHub can take longer, but this is also dependent on your internet connection.

  3. Thanks for this blog post. It’s helping me set up Ceph. I have a question about the path you use for the external journals on your SSD. You suggest using the path /dev/sde with the label KOLLA_CEPH_OSD_BOOTSTRAP_1_J for an external journal. This is setting the journal on the /dev/sde disk (along with the data), and not on the SSD at /dev/nvme0n1. Should you instead be setting the journals for each OSD on /dev/nvme0n1? This is where I’m getting confused. How is the path to the SSD set for the journals?

    • The wording isn’t the best, I agree, but what I said was “the journal for /dev/sde would be KOLLA_CEPH_OSD_BOOTSTRAP_1_J”. What I mean by this is that the partition you create named KOLLA_CEPH_OSD_BOOTSTRAP_1_J would be used as the journal for /dev/sde.

      So you would need to do something like:
      parted /dev/sde -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_1 1 -1
      parted /dev/nvme0n1 -s -- mkpart KOLLA_CEPH_OSD_BOOTSTRAP_1_J start_MB end_MB

      • Sam, thank you. That’s perfect. That’s more as I expected. I will give it a go.

  4. Thanks for the blog. It was very helpful.
    For a larger multi-node deployment, do all hosts need to have the external interface (eth3 in this example), or only the network hosts?

    Which of the network deployment scenarios does Kolla use? http://docs.openstack.org/mitaka/networking-guide/deploy.html

    In my multi-node deployment I see only one router namespace which means that only one of the network hosts uses the external interface. Right?

    • So the external interface (provider interface) is not a requirement on all nodes, only the nodes with access to the external-to-openstack networks (typically the internet).

      However, you do need it if you plan on booting instances directly on the external network. Additionally, a lot of guides set up VLAN interfaces on the same interface as the external network (with the external network simply being a VLAN segment), so you would also need that eth3 interface on the computes if you have that setup.

      To reduce complication, I typically have this interface available on the compute nodes in guides, but this is not a strict requirement in every environment.

      Kolla does not set up the networking for you, so you can use any type of networking you wish. DVR, L3-HA, and “legacy” routers were all tested by me when I still worked with the project.

      Your router namespace will only exist on the node that has the L3 router. This may be multiple nodes if using DVR or L3-HA.
