Deploying a Quickstart Template to OpenShift (CLI)

Repeating what we did in my previous post, we’ll deploy the Django example project to OpenShift, this time using the CLI.  This approach is probably more attractive to many sysadmins.

You can install the client by downloading it from the OpenShift Origin GitHub repository.  Clients are available for Windows, Linux, and so on.  I used the Windows client, and I’m running it under Cygwin.

First, log into your OpenShift setup;

oc login --insecure-skip-tls-verify=true https://os-master1.localdomain:8443

We disable TLS verification since our test OpenShift setup doesn’t have proper SSL certificates yet.  Enter the credentials you use to get into OpenShift.

Next up, we’ll create a new project, change into that project, then deploy the test Django example application into it.  Finally, we’ll tail the build logs so we can see how it goes.

oc new-project test-project --display-name="Test Project" --description="Deployed from Command Line"
oc project test-project
oc new-app --template=django-example
oc logs -f bc/django-example

After that finishes, we can review the status of the deployment with oc status;

$ oc status
In project Test Project (test-project) on server https://os-master1.localdomain:8443

http://django-example-test-project.openshift.localdomain (svc/django-example)
 dc/django-example deploys istag/django-example:latest <-
 bc/django-example builds with openshift/python:3.4
 deployment #1 deployed about a minute ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Ok, looks great.  You can now connect to the URL above and you should see the Django application splash page.

Now that that worked, we’ll change back into the default project, display all the projects we have, and then delete the test project;

$ oc project default
Now using project "default" on server "https://os-master1.localdomain:8443".

$ oc get projects
default Active
test-project Test Project Active

$ oc delete project test-project
project "test-project" deleted


Fantastic.  Setup works!

Deploying a Quickstart Template to OpenShift (GUI)

In my last post, I talked about how to set up a quick-and-dirty OpenShift environment on Atomic.  Here, we’ll talk about firing up a test application, just to verify that everything works.

First, log into your OpenShift console, which you can find at (replace hostname);


Once in, click the New Project button.  You’ll see something like this;


Enter quickstart-project for the name and display name, and click Create.  You’ll now be at the template selection screen, and will be presented with an enormous list of possible templates.


Enter “quickstart django” to filter the list, then click ‘django-example’.  Here is where you would normally customize your template.  Don’t worry about that for now.  Scroll down to the bottom.


You don’t need to change anything, just hit Create.  You now get the following window;


Click Continue to overview.  While you can run the oc tool directly from the masters, it’s better practice to not do that, and instead do it from your dev box, wherever that is.

If you’ve been stuffing around like I did, by the time you get to the overview, your build will be done!


Click the link directly under SERVICE, named django-example-quickstart-project.YOURDOMAINHERE.  You should now see the Django application splash screen pop up.

If so, congratulations!  You’ve just deployed your first application in OpenShift.

Have a look at the build logs, click the up and down arrows next to the deployment circle and watch what they do.

Deploying OpenShift Origin on CentOS Atomic

For my work, we’re looking at OpenShift, and I decided I’d set up an OpenShift Origin setup at home on my KVM box using CentOS Atomic.  This is a really basic setup, involving one master, two nodes, and no NFS persistent volumes (yet!).  We also don’t permit pushing to DockerHub, since this will be a completely private setup.  I won’t go into how to actually set up Atomic instances here.

Refer to the OpenShift Advanced Install manual for more.

You will need;
  • One Atomic master (named os-master1 here)
  • Two Atomic nodes (named os-node1 and os-node2 here)
  • A wildcard domain in your DNS (more on this later, it’s named *.os.localdomain here)
  • A hashed password for your admin account (named admin here), you can generate this with htpasswd.
  • A box elsewhere that you can SSH into your Atomic nodes from, without using a password (read about ssh-copy-id if you need to).  We’ll be putting Ansible on this to do the OpenShift installation.

Setting up a Wildcard Domain

Assuming you’re using BIND, you will need the following stanza in your zone;

; Wildcard domain for OpenShift
$ORIGIN os.localdomain.
* IN CNAME os-master1.localdomain.
$ORIGIN localdomain.

Change to suit your domain, of course.  This causes any attempt to resolve anything under os.localdomain to be answered with a CNAME pointing at your master.  This is required so you don’t have to keep messing with your DNS setup whenever you deploy a new pod.

Preparing the Installation Host

As discussed, you’ll need a box you can do your installation from.  Let’s install the pre-reqs onto it (namely, Ansible and Git).  I’m assuming you are using CentOS here.

yum install -y epel-release
yum install ansible python-cryptography python-crypto pyOpenSSL git
git clone

As the last step, we pull down the OpenShift Origin installer, which we’ll be using shortly to install OpenShift.

You will now require an inventory file for the installer to use.  The following example should be placed in ~/openshift-hosts.

Substitute out the hashed password you generated for your admin account in there.
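The original inventory file didn’t survive in this post, so as a rough sketch only, a BYO inventory for this layout would look something like the following.  The variable names are from the openshift-ansible installer of that era; verify every one of them against the Advanced Install manual before using this.

```ini
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
openshift_master_default_subdomain=os.localdomain
; htpasswd-backed logins; paste your hashed admin password here
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': 'YOUR-HASHED-PASSWORD'}

[masters]
os-master1.localdomain

[nodes]
os-master1.localdomain openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
os-node1.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'left'}"
os-node2.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'right'}"
```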

About the infrastructure

That inventory file will deploy a fully working OpenShift Origin install in one go, with a number of assumptions made.

  • You have one (non-redundant!) master, which runs the router and registry instances.
  • You have two nodes, which are used to deploy other pods.  Each node is in its own availability zone (named left and right here).

More dedicated setups will have multiple masters, which do not run pods, and will likely set up specific nodes to run the infrastructure pods (registries, routers and such).  Since I’m constrained for resources, I haven’t done this, and the master runs the infrastructure pods too.

It’s also very likely that you’ll want a registry that uses NFS.  More on this later.

Installing OpenShift

Once this is done, installation is very simple;

cd openshift-ansible
ansible-playbook -i ../openshift-hosts playbooks/byo/config.yml

Sit back and wait.  This’ll take quite a while.  When it’s done, you can then (hopefully!) go to;


to log into your OpenShift web interface.  You can also SSH to one of your masters and run oc commands from there.

I’ll run through a simple quickstart to test the infrastructure shortly.


Vagrant Quickstart on Ubuntu Xenial 16.04 with Libvirt

There are a few issues with running Vagrant with libvirt on Ubuntu 16.04.  Namely, the bundled version of Vagrant is broken.  Whoops!

Here’s how you can get it running using the upstream Vagrant (currently 1.8.4), get a basic libvirt running, and bring up a VM just to prove that it works (we’ll use openSUSE because they provide a box that works with libvirt).

Install the libvirt essentials

sudo apt-get update
sudo apt-get install ubuntu-virt-server ubuntu-virt-mgmt virt-manager libvirt-dev
sudo adduser YOURUSERNAME libvirtd

Fetch and install upstream Vagrant

DANGER:  Don’t run any vagrant plugins with sudo, it will probably trash permissions on your ~/.vagrant.d/ directory and go badly for you.

sudo apt purge vagrant
sudo apt autoremove
wget https://releases.hashicorp.com/vagrant/1.8.4/vagrant_1.8.4_x86_64.deb
sudo dpkg -i vagrant_1.8.4_x86_64.deb
sudo apt-get install -f
vagrant plugin install vagrant-libvirt

Bring up a test VM

Showtime!  Bring up a test VM and connect to it with ssh…

mkdir testvm
cd testvm
vagrant init opensuse/openSUSE-42.1-x86_64
vagrant up --provider libvirt
vagrant ssh

Get rid of the test machine

vagrant destroy

Phew.  Next up, making your own box.

Customizing Unit Files in Systemd

On a KVM virtual machine I have, I want to have the MySQL (well, MariaDB) database running on NFS, which means that I need MariaDB to only start up after NFS becomes available.  This would normally require editing the default systemd unit file for MariaDB.  This is a bad idea, since your changes will be reverted every package upgrade.  Here’s how to fix that.

Create a new file in /etc/systemd/system/mariadb.service, containing;

.include /usr/lib/systemd/system/mariadb.service

[Unit]
RequiresMountsFor=/var/lib/mysql

In this case, I want to import all the settings from the original unit file, but then add an additional requirement: that /var/lib/mysql be mounted before MariaDB starts.

Once this is done, you have to disable and then re-enable that unit.  This causes systemd to redo all of its internal symbolic links to suit your new override file.  If you fail to do this, your override will be ignored.

systemctl disable mariadb.service
systemctl enable mariadb.service

If you now do a status on that unit, you should see something like this;

[root@yourhost ~]# systemctl status mariadb.service
● mariadb.service - MariaDB database server
 Loaded: loaded (/etc/systemd/system/mariadb.service; enabled; vendor preset: disabled)

Note how the unit file now originates from /etc/systemd/system/mariadb.service?  That shows the override has taken effect.  Also;

[root@yourhost ~]# systemctl list-dependencies mariadb.service
● ├─-.mount
● ├─system.slice
● ├─var-lib-mysql.mount
● └─

In the list of dependencies, you can see there’s a new dependency: the mount you specified.  Note that in systemd land, mounts specified in /etc/fstab become mount units like everything else.  You can even run status or list-dependencies on them.

Obviously you can apply this to any changes you want to make to the unit files for any service.  Have fun!

Static MAC Generator for KVM

The following line will generate (pseudo-randomly) a static MAC address, suitable for use with a KVM virtual machine;

date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/'
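The same idea, wrapped up as a reusable function (the function name is mine) so you can seed it with whatever you like;

```shell
# Hash a seed, take the first six hex digits, pair them up with colons,
# and prepend the 52:54:00 prefix that KVM/QEMU uses for its MACs.
gen_kvm_mac() {
    seed="${1:-$(date +%s)}"
    hex=$(printf '%s' "$seed" | md5sum | head -c 6)
    printf '52:54:00:%s:%s:%s\n' \
        "$(printf '%s' "$hex" | cut -c1-2)" \
        "$(printf '%s' "$hex" | cut -c3-4)" \
        "$(printf '%s' "$hex" | cut -c5-6)"
}
```

Seeding it with the VM’s name (e.g. gen_kvm_mac myvm1) always produces the same address, which is handy if you want stable DHCP reservations.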

Similar nonsense can be done with Hyper-V and VMware.

If you’re using MythTV 0.28 on Ubuntu 16.04 …

… you’ll want to know about this bug.  Put the following at the end of your /etc/mysql/conf.d/mythtv.cnf ;


You may also want to try;



Easy headless KVM deployment with virt-install

I’d like for the VMs I’m deploying to be entirely headless (that is, no virtual graphics card at all, serial console only).  Turns out that you can do ISO-based installations of CentOS 7, headless, without having to unpack the ISO and mess with stuff.  Enter virt-install, the Swiss Army knife of KVM O/S installs;

virt-install \
 --name testvm \
 --ram 1024 \
 --disk size=10,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --controller type=scsi,model=virtio-scsi \
 --vcpus 1 \
 --cpu host \
 --os-type linux \
 --os-variant centos7.0 \
 --network bridge=br0 \
 --graphics none \
 --console pty,target_type=serial \
 --location '/tmp/CentOS-7-x86_64-Minimal-1511.iso' \
 --extra-args 'console=ttyS0,115200n8 serial ks='

The above will deploy a brand-new KVM virtual machine, using the CentOS media it finds in /tmp, using the serial console.  It attaches it to the bridge br0, sets up five disks (all of which support TRIM), and kickstarts it from the Kickstart answer file listed.  After install, you can get at the console with virsh console testvm .  And that’s it.
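For reference, the ks= argument points at a Kickstart answer file, usually served over HTTP.  A minimal CentOS 7 kickstart for a hands-off, serial-console install might look something like this (all values here are illustrative, not the file I actually used);

```text
install
text
lang en_US.UTF-8
keyboard us
timezone UTC --utc
rootpw --plaintext changeme
bootloader --location=mbr --append="console=ttyS0,115200n8"
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```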

You can use virt-install to install Windows 10 in the same way, but you’ll need to attach the ISO images in a different way, like this;

virt-install \
 --name=windows10 \
 --ram=4096 \
 --cpu=host \
 --vcpus=2 \
 --os-type=windows \
 --os-variant=win8.1 \
 --disk size=40,bus=scsi,discard='unmap',format='qcow2' \
 --disk /tmp/Windows10Pro-x64.iso,device=cdrom,bus=ide \
 --disk /usr/share/virtio-win/virtio-win.iso,device=cdrom,bus=ide \
 --controller type=scsi,model=virtio-scsi \
 --network bridge=br0 \
 --graphics spice,listen=

This will attach the VirtIO driver disk on a second virtual CDROM device.  You’ll need to use the GUI (note how Spice is configured to listen on all interfaces) to load the driver.  This configuration also supports TRIM.

You can also use the --location option to point it directly at a repository, for installing stuff like Ubuntu straight from the ‘net, e.g.;

--location ''

Pretty cool stuff.

Raspbian with Ralink 7601 Wifi Adapter

Recently picked up a Ralink 7601 wifi adapter (a no-name clone wifi stick from eBay), for the princely sum of about $2 delivered.  It’s easily identifiable because in lsusb it shows up as;

Bus 001 Device 005: ID 148f:7601 Ralink Technology, Corp.

Unfortunately, it turns out these things aren’t natively supported by Raspbian without an extra firmware file.  But there’s hope!

This guide shows how to get it running, which essentially just boils down to this command;

wget -O /lib/firmware/mt7601u.bin

And then configuring it like you normally would in wpa_supplicant.  Pretty easy stuff in the end.

Raspberry Pi Temperature Monitoring with CheckMK

The Raspberry Pi running Raspbian has a built-in temperature sensor on the CPU die, and you can find it at;


CheckMK supports the idea of local checks.  A local check is a simple script that runs in the agent on a host and performs whatever check processing and verification is required on the client end.  This means you cannot customize the warn/crit thresholds from the CheckMK server side.  But they’re easy to write.

The above simplistic script reads in the CPU temperature of the RPi, and sets a warn threshold of 90% of the throttling temperature with a critical threshold of 100% of the throttling temperature.
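The original script didn’t survive in this post, so here’s a minimal sketch matching that description.  The sensor path and the 85°C throttle point are my assumptions; check them against your board before trusting this.

```shell
#!/bin/sh
# CheckMK local check sketch for the RPi CPU temperature.
# Assumed: sensor at /sys/class/thermal/thermal_zone0/temp (millidegrees C),
# soft-throttle point of 85 C.  Warn at 90% of throttle, crit at throttle.
check_cpu_temp() {
    temp_file="${1:-/sys/class/thermal/thermal_zone0/temp}"
    throttle=85000                        # millidegrees C
    warn=$((throttle * 90 / 100))         # 90% of the throttle temperature
    temp=$(cat "$temp_file" 2>/dev/null || echo 0)
    if   [ "$temp" -ge "$throttle" ]; then status=2; text=CRIT
    elif [ "$temp" -ge "$warn" ];     then status=1; text=WARN
    else                                   status=0; text=OK
    fi
    # CheckMK local check format: <status> <service name> <perfdata> <detail>
    echo "$status CPU_Temperature temp=$((temp / 1000)) $text - CPU at $((temp / 1000)) C"
}

check_cpu_temp "$@"
```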

If you add this into;


on your Raspbian install, then manually run check_mk_agent, you’ll see the output from the sensor in the <<<local>>> section.  You can then edit the host in CheckMK and add the new service that is automatically inventoried.  I assume here that your CPU die never gets below 0 degrees (should be fairly sensible in most circumstances, I imagine).