Backing up KVM Virtual Machines with Duplicity + Backblaze

As part of my home DR strategy, I’ve started pushing images of all my virtual machines (as well as my other data) across to Backblaze using Duplicity.  If you want to do the same, here’s how you can do it.

First up, you will need a GnuPG keypair; we’re going to be writing encrypted backups.  Store copies of the keys somewhere offsite and safe, since you will absolutely need them to do a restore.
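If you don’t already have a keypair, generating and exporting one looks something like this (the key ID and filenames here are examples);

# Generate a new keypair interactively (use --full-generate-key on GnuPG 2.1+)
gpg --gen-key

# Export both halves for offsite storage; the private key is needed to restore
gpg --export --armor YOURGPGKEYIDHERE > duplicity-public.asc
gpg --export-secret-keys --armor YOURGPGKEYIDHERE > duplicity-private.asc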

Secondly, you’ll need a Backblaze account.  Get one, then generate an API key.  This will be comprised of an account ID and an application key.  You will then need to create a bucket to store your backups in.  Make the bucket private.
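If you’d rather script that than click through the web UI, the B2 command-line tool can do it (the bucket name here is an example);

b2 authorize-account ACCOUNTID APPLICATIONKEY
b2 create-bucket yourbackupbucket allPrivate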

With that done: I’m assuming here that /var/lib/libvirt (where your VM images are stored) lives on its own LV.  If this isn’t the case, make it so.  This is so you can take an LV snapshot of the volume (for consistency) and then replicate that snapshot to Backblaze.
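You can check with something like;

# /var/lib/libvirt should show up as its own mount, backed by an LV
df -h /var/lib/libvirt
lvs

With that in place, here’s the backup script;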

#!/bin/bash

# Parameters used by the below, customize these
BUCKET="b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME"
TARGET="$BUCKET/YOURFOLDERNAME"
GPGKEYID="YOURGPGKEYIDHERE"
LVNAME=YOURLV
VGPATH=/dev/YOURVG

# Derived parameters
SUFFIX=$(date +%s)
SNAPNAME="libvirtbackup-$SUFFIX"
SOURCE="/mnt/$SNAPNAME"

# Clean up any leftovers from a failed previous run, then create the LV snap
umount "$SOURCE" > /dev/null 2>&1
lvremove -f "$VGPATH/$SNAPNAME" > /dev/null 2>&1
lvcreate --size 10G --snapshot --name "$SNAPNAME" "$VGPATH/$LVNAME" || exit 1

# Prep and mount the snap read-only (nouuid applies to XFS, where the snap
# would otherwise clash with the origin filesystem's UUID)
mkdir -p "$SOURCE" || exit 1
mount -o ro,nouuid "$VGPATH/$SNAPNAME" "$SOURCE" || exit 1

# Replicate via Duplicity
duplicity \
 --full-if-older-than 3M \
 --encrypt-key "$GPGKEYID" \
 --allow-source-mismatch \
 "$SOURCE" "$TARGET"

# Unmount and remove the LV snap
umount "$SOURCE"
lvremove -f "$VGPATH/$SNAPNAME"
rmdir "$SOURCE"

# Enforce the retention policy (--force makes duplicity actually delete,
# rather than just report what it would remove)
duplicity remove-all-but-n-full 4 --force "$TARGET"
duplicity remove-all-inc-of-but-n-full 1 --force "$TARGET"

Configure the parameters above to suit your environment.  You can use gpg --list-keys to get the hexadecimal key ID of the key you’re going to encrypt with.  The folder name you use in your bucket is arbitrary, but you should only use one folder per Duplicity target.  The 10G LV snap size can be adjusted to suit your environment, but it must be large enough to hold all changes made to the origin volume while the backup is running; 10GB seems OK in my environment.

Obviously this means I need to have 10GB free in the VG that the libvirt LV lives in.

Retention here will run an incremental each time the script runs, do a full every 3 months, ditch the incrementals for all fulls except the latest, and keep up to 4 fulls.  With a weekly backup, this amounts to a 12 month recovery window, with weekly resolution inside the last 3 months and 3-monthly resolution beyond that.  Tune to suit.  Drop the script in /etc/cron.daily or /etc/cron.weekly to run as required.
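For example, to run it weekly (the script name is arbitrary);

install -m 0755 libvirt-backup.sh /etc/cron.weekly/libvirt-backup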

Important: make sure you can actually do a restore before you need one.  Look at the documentation for duplicity restore for help.
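For instance, to list a backup’s contents and pull a single file back out of it (the file path here is an example);

duplicity list-current-files b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME/YOURFOLDERNAME

# Requires the GPG private key to be present in your keyring
duplicity restore --file-to-restore images/example.qcow2 \
 b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME/YOURFOLDERNAME /tmp/example.qcow2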

Deploying OpenShift Origin on CentOS Atomic

For my work, we’re looking at OpenShift, and I decided I’d set up an OpenShift Origin installation at home on my KVM box using CentOS Atomic.  This is a really basic setup, involving one master, two nodes, and no NFS persistent volumes (yet!).  We also don’t permit pushing to DockerHub, since this will be a completely private setup.  I won’t go into how to actually set up Atomic instances here.

Refer to the OpenShift Advanced Install manual for more.

Prerequisites

  • One Atomic master (named os-master1 here)
  • Two Atomic nodes (named os-node1 and os-node2 here)
  • A wildcard domain in your DNS (more on this later, it’s named *.os.localdomain here)
  • A hashed password for your admin account (named admin here); you can generate this with htpasswd, as shown after this list.
  • A box elsewhere that you can SSH into your Atomic nodes from, without using a password (read about ssh-copy-id if you need to; again, see below).  We’ll be putting Ansible on this to do the OpenShift installation.
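For example, the last two items can be handled like this (the hostnames are the ones used in this post, and the password is your own);

# Print the admin user and its hash; keep the hash for the inventory file later
htpasswd -nb admin 'YOURPASSWORD'

# Push your SSH key to each Atomic host so Ansible can log in without a password
for host in os-master1 os-node1 os-node2; do
 ssh-copy-id root@$host
done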

Setting up a Wildcard Domain

Assuming you’re using BIND, you will need the following stanza in your zone;

; Wildcard domain for OpenShift
$ORIGIN os.localdomain.
* IN CNAME os-master1.localdomain.
$ORIGIN localdomain.

Change to suit your domain, of course.  This causes any attempts to resolve anything in .os.localdomain to be pointed as a CNAME to your master.  This is required so you don’t have to keep messing with your DNS setup whenever you deploy a new pod.
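You can verify the wildcard with dig; any name under the subdomain should come back pointing at the master;

dig +short anything.os.localdomain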

Preparing the Installation Host

As discussed, you’ll need a box you can run your installation from.  Let’s install the prerequisites onto it (Ansible, Git, and a few Python crypto libraries).  I’m assuming you are using CentOS here.

yum install -y epel-release
yum install ansible python-cryptography python-crypto pyOpenSSL git
git clone https://github.com/openshift/openshift-ansible

As the last step, we pull down the OpenShift Origin installer, which we’ll be using shortly to install OpenShift.

You will now require an inventory file for the installer to use, placed in ~/openshift-hosts.  Substitute in the hashed password you generated for your admin account.
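A minimal sketch of what that inventory might look like, assuming the openshift-ansible BYO layout of the time (the hostnames, zones and identity provider settings here match the assumptions described in the next section);

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
containerized=true
openshift_master_default_subdomain=os.localdomain
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': 'YOURHASHEDPASSWORDHERE'}

[masters]
os-master1

[nodes]
os-master1 openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
os-node1 openshift_node_labels="{'region': 'primary', 'zone': 'left'}"
os-node2 openshift_node_labels="{'region': 'primary', 'zone': 'right'}"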

About the infrastructure

That inventory file will deploy a fully working OpenShift Origin install in one go, with a number of assumptions made.

  • You have one (non-redundant!) master, which runs the router and registry instances.
  • You have two nodes, which are used to deploy other pods.  Each node is in its own availability zone (named left and right here).

More dedicated setups will have multiple masters, which do not run pods, and will likely set up specific nodes to run the infrastructure pods (registries, routers and such).  Since I’m constrained for resources, I haven’t done this, and the master runs the infrastructure pods too.

It’s also very likely that you’ll want a registry that uses NFS.  More on this later.

Installing OpenShift

Once this is done, installation is very simple;

cd openshift-ansible
ansible-playbook -i ../openshift-hosts playbooks/byo/config.yml

Sit back and wait.  This’ll take quite a while.  When it’s done, you can then (hopefully!) go to;

https://os-master1:8443/console/

to log into your OpenShift web interface.  You can also SSH to the master and run oc commands from there.
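As a quick smoke test from the master (admin being the user defined in the inventory);

oc login https://os-master1:8443 -u admin
oc get nodes

# The router and registry pods run in the default namespace
oc get pods -n default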

I’ll run through a simple quickstart to test the infrastructure shortly.


Customizing Unit Files in Systemd

On a KVM virtual machine I have, I want the MySQL (well, MariaDB) database to live on NFS, which means I need MariaDB to only start up after NFS becomes available.  This would normally require editing the default systemd unit file for MariaDB, which is a bad idea, since your changes will be reverted on every package upgrade.  Here’s how to fix that.

Create a new file at /etc/systemd/system/mariadb.service, containing;

.include /usr/lib/systemd/system/mariadb.service

[Unit]
RequiresMountsFor=/var/lib/mysql

In this case, I want to import all the settings from the original unit file, but then add an additional requirement – requiring /var/lib/mysql to be mounted.

Once this is done, you have to disable and then re-enable that unit.  This causes systemd to redo all of its internal symbolic links to suit your new override file.  If you fail to do this, your override will be ignored.

systemctl disable mariadb.service
systemctl enable mariadb.service

If you now do a status on that unit, you should see something like this;

[root@yourhost ~]# systemctl status mariadb.service
● mariadb.service - MariaDB database server
 Loaded: loaded (/etc/systemd/system/mariadb.service; enabled; vendor preset: disabled)

Note how the unit file now originates from /etc/systemd/system/mariadb.service?  That shows the override has taken.  Also;

[root@yourhost ~]# systemctl list-dependencies mariadb.service
mariadb.service
● ├─-.mount
● ├─system.slice
● ├─var-lib-mysql.mount
● └─basic.target

In the list of dependencies, you can see there’s a new dependency – the mount unit you specified.  Note that in systemd land, mounts specified in /etc/fstab become units like everything else.  You can even do a status or list-dependencies on them.
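For example;

systemctl status var-lib-mysql.mount
systemctl list-dependencies var-lib-mysql.mount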

Obviously you can apply this to any changes you want to make to the unit files for any service.  Have fun!

Static MAC Generator for KVM

The following line will generate (pseudo-randomly) a static MAC address, suitable for use with a KVM virtual machine (52:54:00 is the locally-administered prefix conventionally used by QEMU/KVM);

date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/'

Similar nonsense can be done with Hyper-V and VMware.
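If you want to feed the result straight into a new VM, something like this works (the bridge name is an example, and the rest of the virt-install options are elided);

MAC=$(date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' -e 's/^/52:54:00:/')
virt-install --network bridge=br0,mac=$MAC ...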

Easy headless KVM deployment with virt-install

I’d like the VMs I’m deploying to be entirely headless (that is, no virtual graphics card at all, serial console only).  It turns out that you can do ISO-based installations of CentOS 7, headless, without having to unpack the ISO and mess with stuff.  Enter virt-install, the swiss army knife of KVM O/S installs;

virt-install \
 --name testvm \
 --ram 1024 \
 --disk size=10,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --disk size=2,bus=scsi,discard='unmap',format='qcow2' \
 --controller type=scsi,model=virtio-scsi \
 --vcpus 1 \
 --cpu host \
 --os-type linux \
 --os-variant centos7.0 \
 --network bridge=br0 \
 --graphics none \
 --console pty,target_type=serial \
 --location '/tmp/CentOS-7-x86_64-Minimal-1511.iso' \
 --extra-args 'console=ttyS0,115200n8 serial ks=http://192.168.1.10/centos.ks'

The above will deploy a brand-new KVM virtual machine from the CentOS media it finds in /tmp, using the serial console.  It attaches the VM to the bridge br0, sets up five disks (all of which support TRIM), and kickstarts it from the Kickstart answer file listed.  After install, you can get at the console with virsh console testvm .  And that’s it.

You can use virt-install to install Windows 10 in the same way, but you’ll need to attach the ISO images in a different way, like this;

virt-install \
 --name=windows10 \
 --ram=4096 \
 --cpu=host \
 --vcpus=2 \
 --os-type=windows \
 --os-variant=win8.1 \
 --disk size=40,bus=scsi,discard='unmap',format='qcow2' \
 --disk /tmp/Windows10Pro-x64.iso,device=cdrom,bus=ide \
 --disk /usr/share/virtio-win/virtio-win.iso,device=cdrom,bus=ide \
 --controller type=scsi,model=virtio-scsi \
 --network bridge=br0 \
 --graphics spice,listen=0.0.0.0 \
 --noautoconsole

This will attach the VirtIO driver disk on a second virtual CDROM device.  You’ll need to use the GUI (note how Spice is configured to listen on all interfaces) to load the VirtIO SCSI driver during installation.  This configuration also supports TRIM.
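To find and connect to that GUI from another machine, something like this should do it (the host name is an example, and you’ll need virt-viewer installed);

# Ask libvirt where the Spice display is listening
virsh domdisplay windows10

# Then connect to it
remote-viewer spice://yourkvmhost:5900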

You can also use the --location option to point directly at a repository for installing stuff like Ubuntu straight from the ‘net, e.g.;

--location 'http://mirror.aarnet.edu.au/ubuntu/dists/xenial/main/installer-amd64/'

Pretty cool stuff.

TRIM Support on KVM Virtual Machines

Messing with KVM as a replacement for my Microserver setup at home.  With KVM, you can define a thin-provisioned VM image file (a qcow2 file), which is a sparse file on the filesystem.  You can then configure the guest O/S so that TRIM works end-to-end: blocks unmapped by the guest FS are released all the way down the stack (freed from the sparse file, and ultimately trimmed from the underlying SSD if there is one).

The best bit?  This works for Windows and Linux guests, in the same way.

First up, adjust your machine so that it uses SCSI for the QCOW files you want to enable TRIM support on.  This may require some fudging with Windows (more on this one later).

Then, edit the XML for your VM definition with “virsh edit DOMAINNAME”.  Find the disk definition, and make the changes shown here (the discard='unmap' attribute on the driver element, and the scsi bus on the target);

<disk type='file' device='disk'>
 <driver name='qemu' type='qcow2' discard='unmap'/>
 <source file='/var/lib/libvirt/images/example.qcow2'/>
 <backingStore/>
 <target dev='sda' bus='scsi'/>
 <alias name='scsi0-0-0-0'/>
 <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

Also make sure that your SCSI controller is of type ‘virtio-scsi’;

<controller type='scsi' index='0' model='virtio-scsi'>
 <alias name='scsi0'/>
 <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</controller>

Notably, only the SCSI bus is able to pass the commands necessary to support TRIM properly.  Boot your VM, and you should now be able to run fstrim; on Windows, defrag should show the drive as a thin-provisioned volume that can be trimmed (for Windows versions that support TRIM).
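As a quick end-to-end check (the image path matches the XML above);

# Inside a Linux guest: trim unused blocks and report how much was released
fstrim -v /

# On the KVM host: the allocated size reported should be below the virtual size
qemu-img info /var/lib/libvirt/images/example.qcow2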

Now, for Windows.  Assuming you have already installed all the drivers, the easiest way to ensure the kernel loads the SCSI driver when you change the bus type (which otherwise results in a blue screen) is to temporarily add a second, small disk using the type you’re changing the C: drive to.  Then change the C: drive and boot, and it should boot fine with that type.  You can then remove the small disk.

Good luck!