Adding an RTC to your Raspberry Pi

I use an RPi 3 as a secondary DNS and DHCP server, and time synchronization is important for that.  Due to some technicalities with how my network is set up, this means I need a real-time clock on the RPi so that it has at least some idea of the correct time when it powers up, instead of being absolutely dependent on NTP for that.

Enter the DS3231 RTC (available on eBay for a few bucks).  The Pi Hut has an excellent tutorial on setting this up for an RPi, which I’m going to summarize here.

Configure I2C on the RPi

From a root shell (I’m assuming you’re using Raspbian like me);

apt-get install python-smbus i2c-tools

Then, edit your /boot/config.txt and add the following down the bottom;

dtparam=i2c_arm=on
dtoverlay=i2c-rtc,ds3231

Edit your /etc/modules and add the following line;

i2c-dev

Now reboot.  If you run i2cdetect -y 1 you should see the DS3231 at address 0x68 (it may show as UU if the kernel driver has already claimed it).  If you do, great.

Configure Raspbian to use the RTC

After rebooting, the new device should be up, but you won’t be using it yet.  Remove the fake hardware clock with;

apt-get --purge remove fake-hwclock

Now you should be able to do hwclock -r to read the clock, and then hwclock -w to write the current system time to it.

And lastly, to make it pull time from the RTC on boot, put the following into /etc/rc.local before the exit 0;

hwclock -s

And you can then add a cronjob in /etc/cron.weekly to run hwclock -w once a week.
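For example, something along these lines as /etc/cron.weekly/sync-rtc (the filename is my own choice; remember to chmod +x it);

#!/bin/sh
# Write the NTP-disciplined system time back to the RTC once a week
hwclock -w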

Done!

Backing up KVM Virtual Machines with Duplicity + Backblaze

As part of my home DR strategy, I’ve started pushing images of all my virtual machines (as well as my other data) across to Backblaze using Duplicity.  If you want to do the same, here’s how you can do it.

First up, you will need a GnuPG keypair.  We’re going to be writing encrypted backups.  Store copies of those keys somewhere offsite and safe, since you will absolutely need those to do a restore.
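If you don’t have a keypair yet, a quick sketch (the key ID placeholder matches the script further down);

# Generate a keypair interactively
gpg --gen-key

# Export both halves for offsite storage - you can't restore without
# the secret key
gpg --armor --export YOURGPGKEYIDHERE > backups-public.asc
gpg --armor --export-secret-keys YOURGPGKEYIDHERE > backups-secret.asc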

Secondly, you’ll need a Backblaze account.  Get one, then generate an API key.  This will be comprised of an account ID and an application key.  You will then need to create a bucket to store your backups in.  Make the bucket private.

Now that’s done.  I’m assuming here that /var/lib/libvirt, where your VM images are stored, is on its own LV.  If this isn’t the case, make it so (there’s a rough sketch below).  This is so you can take an LV snapshot of the volume (for consistency) and then replicate that snapshot to Backblaze.
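If you do need to migrate it, the general shape is below.  This is a sketch only: it assumes a VG named vg0 with enough free space, that all your VMs are shut down first, and it uses XFS since the backup script mounts the snapshot with the XFS-specific nouuid option.

# Create and format a dedicated LV (names and size are examples)
lvcreate --size 100G --name libvirt vg0
mkfs.xfs /dev/vg0/libvirt

# Move the existing data across, then mount the new LV in its place
systemctl stop libvirtd
mv /var/lib/libvirt /var/lib/libvirt.old
mkdir /var/lib/libvirt
mount /dev/vg0/libvirt /var/lib/libvirt
cp -a /var/lib/libvirt.old/. /var/lib/libvirt/
systemctl start libvirtd

# And an fstab entry so it survives a reboot
echo '/dev/vg0/libvirt /var/lib/libvirt xfs defaults 0 0' >> /etc/fstab

With that in place, the backup script itself;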

#!/bin/bash

# Parameters used by the below, customize this
BUCKET="b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME"
TARGET="$BUCKET/YOURFOLDERNAME"
GPGKEYID="YOURGPGKEYIDHERE"
LVNAME=YOURLV
VGPATH=/dev/YOURVG

# Some other parameters
SUFFIX=$(date +%s)
SNAPNAME=libvirtbackup-$SUFFIX
SOURCE=/mnt/$SNAPNAME

# Make sure nothing is left over at our mount/snapshot name, then
# create the LV snap
umount "$SOURCE" > /dev/null 2>&1
lvremove -f "$VGPATH/$SNAPNAME" > /dev/null 2>&1
lvcreate --size 10G --snapshot --name "$SNAPNAME" "$VGPATH/$LVNAME" || exit 1

# Mount the snap read-only (nouuid is needed for XFS, since the
# snapshot carries the same filesystem UUID as the original)
mkdir "$SOURCE" || exit 1
mount -o ro,nouuid "$VGPATH/$SNAPNAME" "$SOURCE" || exit 1

# Replicate via Duplicity; --allow-source-mismatch is needed because
# the mountpoint name changes on every run
duplicity \
 --full-if-older-than 3M \
 --encrypt-key "$GPGKEYID" \
 --allow-source-mismatch \
 "$SOURCE" "$TARGET"

# Unmount and remove the LV snap
umount "$SOURCE"
lvremove -f "$VGPATH/$SNAPNAME"
rmdir "$SOURCE"

# Enforce retention (--force is required to actually delete old sets)
duplicity remove-all-but-n-full 4 --force "$TARGET"
duplicity remove-all-inc-of-but-n-full 1 --force "$TARGET"

Configure the parameters above to suit your environment.  You can use gpg --list-keys to get the 8-digit hexadecimal key ID of the key you’re going to encrypt with.  The folder name you use in your bucket is arbitrary, but you should only use one folder per Duplicity target.  The 10G LV snapshot size can be adjusted to suit your environment, but it must be large enough to hold all changes made to the source volume while the backup is running.  I picked 10GB because that seems OK in my environment.

Obviously this means I need to have 10GB free in the VG that the libvirt LV lives in.

Retention here will run an incremental each time the script runs, do a full every 3 months, ditch the incrementals for all fulls except the latest, and keep up to 4 fulls.  With a weekly backup, this amounts to a 12-month recovery window, with weekly resolution for the most recent 3 months and 3-monthly resolution beyond that.  Tune to suit.  Drop the script in /etc/cron.daily or /etc/cron.weekly to run as required.

Important.  Make sure you can do a restore.  Look at the documentation for duplicity restore for help.
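As a sketch, using the same target and key as the script above (the single-file path here is hypothetical);

# Restore the most recent backup set into a scratch directory
# (the secret key must be in your keyring)
duplicity restore --encrypt-key YOURGPGKEYIDHERE \
 "b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME/YOURFOLDERNAME" /tmp/restore-test

# Or pull back a single file (path is relative to the backup root)
duplicity restore --file-to-restore images/somevm.qcow2 \
 "b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME/YOURFOLDERNAME" /tmp/somevm.qcow2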

Splunkd High CPU after leap second addition?

Had my alerting system yell at me about high CPU load on my Splunk Free VM.

A bit of examination revealed that it was indeed at an abnormally high load average (around 10), although there didn’t appear to be anything actually wrong.  Then a quick look at dmesg made the penny drop;

Jan 1 10:29:59 splunk kernel: Clock: inserting leap second 23:59:60 UTC

Err.  The high CPU load average started at 10:30am, right when the leap second was added.

A restart of all the services resolved the issue.  Load average is back down to its normal levels.

Bash on Windows – X Server!

It turns out that you can use Bash on Windows 10 to run X applications, including through ssh tunnels.  Here’s how.

First, go and install XMing.  I’d strongly suggest not allowing it to get access to your network, so it stays on localhost.  This is so that an attacker can’t draw stuff on your screen through your X server.

Run XMing, put it in your startup if you want.  You now have an X server.  Next up, you’ll need to fire up Bash on Windows, and run sudo apt-get install xauth.  Then edit your ~/.bashrc .  Right down the bottom, add the following;

export DISPLAY=localhost:0
xauth generate $DISPLAY

This causes your session to be configured so that you can use X applications and they’ll be pointed to your X server.  It also provides the correct X authentication tokens to make things like ssh work.

Now, log out and back in again.  Start up Bash for Windows, then you can run stuff.
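To test it out, something like;

# xeyes is a handy test client (in the x11-apps package)
sudo apt-get install x11-apps
xeyes

# X forwarding over ssh should now work too (hostname is an example)
ssh -X user@somehost xclock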

Bash on Ubuntu on Windows 10 – Teething Issues

Just set up the new Bash on Windows 10 feature that comes with the Anniversary Update.  It’s not bad.  But there are a few annoying things it does that grind my gears.

The default umask is 0000

Yeah.  That’s what I said.  This means everything you create from the Bash shell is readable and writable by EVERYBODY.  Not smart.  SSH hates that.

echo "umask 0022" >> /etc/profile

To fix that one.

Sudo doesn’t inherit root’s HOME

This causes many commands (pip for example) to dump files into your user directory as root, resulting in an inability to modify files in your own homedir.  Not great.

Add the following to your /etc/sudoers somewhere (use visudo to edit it);

Defaults always_set_home

More as I come across it.

ELK Stack in Docker with NGINX

I’ve done a bit of work in the past few days modifying a Docker ELK Github repository I came across, to make it more suited to my needs.

You can find my efforts at my Github repository.  This setup, when brought up with docker-compose up, will put together a full ELK stack composed of the latest versions of ElasticSearch, Logstash, Kibana, all fronted by NGINX with login required.

The setup persistently stores all Elasticsearch data into the ./esdata directory, and accepts syslog input on port 42185 along with JSON input on port 5000.

In order to access Elasticsearch, use the Sense plugin in Kibana.  You can get at Kibana on port 5601, with a default login of admin/admin.  You can change that by using htpasswd to create a new user file at ./nginx/htpasswd.users .
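For example (htpasswd comes from the apache2-utils package on Debian/Ubuntu);

# Create a fresh user file with a new admin password (-c overwrites)
htpasswd -c ./nginx/htpasswd.users admin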

A couple of things about Docker in this setup.  When you link containers, it’s not necessary to expose ports between the containers.  Exposing is only required to make a port accessible from outside Docker.  When containers are linked, they get access to all ports on the linked container.

This means that it’s not required to specifically expose all the internal ports of the stack – you only have to expose the entry/exit points you want on the stack as a unit.  In this case, that’s the entry ports to Logstash and the entry point in nginx.

Also, if you use a version 2 docker-compose specification, Docker Compose will create an isolated network bridge just for your application, which is great here.  It will also manage dependencies appropriately to make sure the stack comes up in the right order.
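To illustrate the shape of all that, a stripped-down sketch (not the actual file from the repository);

version: '2'
services:
  elasticsearch:
    image: elasticsearch:latest
    volumes:
      - ./esdata:/usr/share/elasticsearch/data  # persisted, never published
  logstash:
    image: logstash:latest
    links:
      - elasticsearch      # linked, so all ES ports are reachable internally
    ports:
      - "42185:42185"      # syslog entry point, published to the host
      - "5000:5000"        # JSON entry point, published to the host
  kibana:
    image: kibana:latest
    links:
      - elasticsearch      # internal only, nothing published
  nginx:
    image: nginx:latest
    links:
      - kibana             # proxies to Kibana internally
    ports:
      - "5601:5601"        # the only web entry point into the stack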

Oh yeah.  If you bring up the stack with docker-compose up, press Ctrl+\ to break out of it without taking the stack down.

Magic!

NFS Persistent Volumes with OpenShift

Official documentation here.  Following is a (very!) brief summary of how to get your Registry in OpenShift working with an NFS backend.  I haven’t yet been able to get it to deploy cleanly straight from the Ansible installer with NFS, but it’s pretty straightforward to change it after initial deployment.

NOTE – A lot of this can probably be done in much, much better ways.  This is just how I managed to do it by bumbling around until I got it working.

Creating the NFS Export

First up, you’ll need to provision an NFS export on your NFS server, using the following options;

/srv/registry -rw,async,root_squash,no_wdelay,mp @openshift

Where ‘@openshift’ is the name of a netgroup in /etc/netgroup covering all your OpenShift hosts.  I’m also assuming that the export is a mountpoint, hence ‘mp’.

We then go to that directory and set it to be owned by root, with GID 5555 (as an example), and 0770 access.
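On the NFS server, that amounts to;

# Owned by root, group 5555, no access for others
chown root:5555 /srv/registry
chmod 0770 /srv/registry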

Creating the Persistent Volume

Now first, we need to add that as a persistent volume to OpenShift.  I assume it’ll be 50Gi in size, and that you want the data retained if the claim is released.  Create the following file and save it as nfs-pv.yml somewhere you can get at it with the oc command.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /srv/registry
    server: nfs.localdomain
  persistentVolumeReclaimPolicy: Retain
...

Right.  Now we change into the default project (where the Registry is located), and add that as a PV;

oc project default
oc create -f nfs-pv.yml
oc get pv

The last command should now show the new PV that you created.  Great.

Creating the Persistent Volume Claim

Now you have the PV, but it’s unclaimed by a project.  Let’s fix that.  Create a new file, nfs-claim.yml where you can get at it.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
...

Now we can add that claim;

oc project default
oc create -f nfs-claim.yml
oc get pvc

The last command should now show the new PVC that you created.

Changing the Supplemental Group of the Deployment

Right.  Remember we assigned a GID of 5555 to the NFS export?  Now we need to assign that to the Registry deployment.

Unfortunately, I don’t know how to do this with the CLI yet.  So hit the GUI, find the docker-registry deployment, and click Edit YAML under Actions.

In there, scroll down and look for the securityContext tag.  You’ll want to change this as follows;

securityContext:
  supplementalGroups:
  - 5555

This sets the pods deployed with that deployment to have a supplemental group ID of 5555 attached to them.  Now they should get access to the NFS export when we attach it.
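If you’d rather stay on the CLI, I suspect an oc patch along these lines would do the same thing, though I haven’t verified it;

oc patch dc/docker-registry -p \
 '{"spec":{"template":{"spec":{"securityContext":{"supplementalGroups":[5555]}}}}}'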

Attaching the NFS Storage to the Deployment

Again, I don’t know how to do this in the CLI, sorry.  Click Actions, then Attach Storage, and attach the claim you made.

Once that has finished deploying, you’ll find you have the claim bound to the deployment, but it’s not being used anywhere.  Click Actions, Edit YAML again, and then find the volumes section.  Edit that to;

volumes:
  -
    name: registry-storage
    persistentVolumeClaim:
      claimName: registry-storage

Phew.  Save it, wait for the deployment to be done.  Nearly there!
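For what it’s worth, the upstream docs suggest oc volume can do the attach-and-use step in one go; something like this (untested here);

oc volume dc/docker-registry --add --overwrite \
 --name=registry-storage --type=persistentVolumeClaim \
 --claim-name=registry-storage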

Testing it out

Now, if you go into Pods and select the pod that’s currently deployed for the Registry, you should be able to click Terminal and view the mounts.  You should see your NFS export there, and you should be able to touch files in it and see them appear on the NFS server.
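From that terminal, something like the following should confirm it (I believe /registry is the registry’s usual mount path; adjust if yours differs);

# Confirm the NFS export is mounted, then write a test file
df -h /registry
touch /registry/nfs-test && ls -l /registry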

Good luck!

Deploying a Quickstart Template to Openshift (CLI)

Repeating what we did in my previous post, we’ll deploy the Django example project to OpenShift, this time using the CLI.  This is probably more attractive to many sysadmins.

You can install the client by downloading it from the OpenShift Origin Github repository.  Clients are available for Windows, Linux, and so on.  I used the Windows client, and I’m running it under Cygwin.

First, log into your OpenShift setup;

oc login --insecure-skip-tls-verify=true https://os-master1.localdomain:8443

We disable TLS verification since our test OpenShift setup doesn’t have proper SSL certificates yet.  Enter the credentials you use to get into OpenShift.

Next up, we’ll create a new project, change into that project, then deploy the test Django example application into it.  Finally, we’ll tail the build logs so we can see how it goes.

oc new-project test-project --display-name="Test Project" --description="Deployed from Command Line"
oc project test-project
oc new-app --template=django-example
oc logs -f bc/django-example

After that finishes, we can review the status of the deployment with oc status;

$ oc status
In project Test Project (test-project) on server https://os-master1.localdomain:8443

http://django-example-test-project.openshift.localdomain (svc/django-example)
 dc/django-example deploys istag/django-example:latest <-
 bc/django-example builds https://github.com/openshift/django-ex.git with openshift/python:3.4
 deployment #1 deployed about a minute ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Ok, looks great.  You can now connect to the URL above and you should see the Django application splash page.

Now that worked, we’ll change back into the default project, display all the projects we have, and then delete that test project;

$ oc project default
Now using project "default" on server "https://os-master1.localdomain:8443".

$ oc get projects
NAME           DISPLAY NAME   STATUS
default                       Active
test-project   Test Project   Active

$ oc delete project test-project
project "test-project" deleted

$

Fantastic.  Setup works!

Deploying a Quickstart Template to Openshift (GUI)

In my last post, I talked about how to set up a quick-and-dirty OpenShift environment on Atomic.  Here, we’ll talk about firing up a test application, just to verify that everything works.

First, log into your OpenShift console, which you can find at (replace hostname);

https://os-master1.localdomain:8443/console

Once in, click the New Project button.  You’ll see something like this;

[screenshot: os-quickstart1]

Enter quickstart-project for the name and display name, and click Create.  You’ll now be at the template selection screen, and will be presented with an enormous list of possible templates.

[screenshot: os-quickstart2]

Enter “quickstart django” into the filter box, then click ‘django-example’.  Here is where you would normally customize your template.  Don’t worry about that for now.  Scroll down to the bottom.

[screenshot: os-quickstart3]

You don’t need to change anything, just hit Create.  You now get the following window;

[screenshot: os-quickstart4]

Click Continue to overview.  While you can run the oc tool directly from the masters, it’s better practice to not do that, and instead do it from your dev box, wherever that is.

If you’ve been stuffing around like I did, by the time you get to the overview, your build will be done!

[screenshot: os-quickstart5]

Click the link directly under SERVICE, named django-example-quickstart-project.YOURDOMAINHERE.  You should now see the Django application splash screen pop up.

If so, congratulations!  You’ve just deployed your first application in OpenShift.

Have a look at the build logs, click the up and down arrows next to the deployment circle and watch what they do.

Deploying OpenShift Origin on CentOS Atomic

For my work, we’re looking at OpenShift, and I decided I’d set up an OpenShift Origin environment at home on my KVM box using CentOS Atomic.  This is a really basic setup, involving one master, two nodes, and no NFS persistent volumes (yet!).  We also don’t permit pushing to Docker Hub, since this will be a completely private setup.  I won’t go into how to actually set up Atomic instances here.

Refer to the OpenShift Advanced Install manual for more.

Prerequisites

  • One Atomic master (named os-master1 here)
  • Two Atomic nodes (named os-node1 and os-node2 here)
  • A wildcard domain in your DNS (more on this later, it’s named *.os.localdomain here)
  • A hashed password for your admin account (named admin here); you can generate this with htpasswd (there’s an example after this list).
  • A box elsewhere that you can SSH into your Atomic nodes from, without using a password (read about ssh-copy-id if you need to).  We’ll be putting Ansible on this to do the OpenShift installation.
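For the hashed password, something like;

# Prints an admin:<hash> pair to stdout without touching any file
htpasswd -nb admin 'YOURPASSWORDHERE'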

Setting up a Wildcard Domain

Assuming you’re using BIND, you will need the following stanza in your zone;

; Wildcard domain for OpenShift
$ORIGIN os.localdomain.
* IN CNAME os-master1.localdomain.
$ORIGIN localdomain.

Change to suit your domain, of course.  This makes anything under .os.localdomain resolve as a CNAME to your master, which is required so you don’t have to keep messing with your DNS setup whenever you deploy a new pod.
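You can sanity-check the wildcard with dig, using any made-up name;

dig +short anything.os.localdomain
# Should print os-master1.localdomain. followed by its address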

Preparing the Installation Host

As discussed, you’ll need a box you can do your installation from.  Let’s install the pre-reqs onto it (namely, Ansible and Git).  I’m assuming you are using CentOS here.

yum install -y epel-release
yum install ansible python-cryptography python-crypto pyOpenSSL git
git clone https://github.com/openshift/openshift-ansible

As the last step, we pull down the OpenShift Origin installer, which we’ll be using shortly to install OpenShift.

You will now require an inventory file for the installer to use.  The following example should be placed in ~/openshift-hosts .
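Something along these lines (a sketch only - the hostnames, zone names, and htpasswd details match the assumptions in this post, so adjust to suit);

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
# Atomic hosts run OpenShift containerized
containerized=true
# htpasswd auth for the admin user; paste your generated hash in below
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': 'YOURHASHEDPASSWORDHERE'}
# The wildcard domain we configured earlier
openshift_master_default_subdomain=os.localdomain

[masters]
os-master1.localdomain

[nodes]
# The master is schedulable so it can run the router/registry pods
os-master1.localdomain openshift_schedulable=true
os-node1.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'left'}"
os-node2.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'right'}"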

Substitute out the hashed password you generated for your admin account in there.

About the infrastructure

That inventory file will deploy a fully working OpenShift Origin install in one go, with a number of assumptions made.

  • You have one (non-redundant!) master, which runs the router and registry instances.
  • You have two nodes, which are used to deploy other pods.  Each node is in its own availability zone (named left and right here).

More dedicated setups will have multiple masters, which do not run pods, and will likely set up specific nodes to run the infrastructure pods (registries, routers and such).  Since I’m constrained for resources, I haven’t done this, and the master runs the infrastructure pods too.

It’s also very likely that you’ll want a registry that uses NFS.  More on this later.

Installing OpenShift

Once this is done, installation is very simple;

cd openshift-ansible
ansible-playbook -i ../openshift-hosts playbooks/byo/config.yml

Sit back and wait.  This’ll take quite a while.  When it’s done, you can then (hopefully!) go to;

https://os-master1:8443/console/

To log into your OpenShift web interface.  You can ssh to one of your masters and run oc commands from there too.

I’ll run through a simple quickstart to test the infrastructure shortly.