NFS Persistent Volumes with OpenShift

The official documentation covers this in full. Following is a (very!) brief summary of how to get your Registry in OpenShift working with an NFS backend. I haven’t yet been able to get it to deploy cleanly with NFS straight from the Ansible installer, but it is pretty straightforward to change after the initial deployment.

NOTE – A lot of this can probably be done in much, much better ways.  This is just how I managed to do it by bumbling around until I got it working.

Creating the NFS Export

First up, you’ll need to provision an NFS export on your NFS server, using the following options;

/srv/registry -rw,async,root_squash,no_wdelay,mp @openshift

Where ‘@openshift’ is the name of a netgroup in /etc/netgroup covering all your OpenShift hosts. I’m also assuming that the export is a mountpoint, hence the ‘mp’ option, which tells the server to only export the directory if something is actually mounted there.

We then go to that directory and set it to be owned by root, with a GID of 5555 (as an example), and 0770 permissions.
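On the NFS server, that works out to something like this (the group name here is made up; any group carrying GID 5555 will do):

groupadd -g 5555 registry-access   # example group carrying GID 5555
chown root:5555 /srv/registry
chmod 0770 /srv/registry
exportfs -ra                       # re-read /etc/exports so the export goes live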

Creating the Persistent Volume

First, we need to add that export as a persistent volume in OpenShift. I’ll assume it’s 50GiB in size, and that you want the data retained if the claim is released. Create the following file and save it as nfs-pv.yml somewhere you can get at it with the oc command.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /srv/registry
    server: nfs.localdomain
  persistentVolumeReclaimPolicy: Retain
...

Right.  Now we change into the default project (where the Registry is located), and add that as a PV;

oc project default
oc create -f nfs-pv.yml
oc get pv

The last command should now show the new PV that you created.  Great.
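The output should look something like this (column widths and ages will vary with your oc version):

NAME              CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
registry-volume   50Gi       RWX           Available                       10s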

Creating the Persistent Volume Claim

Now you have the PV, but it’s unclaimed by any project. Let’s fix that. Create a new file, nfs-claim.yml, somewhere you can get at it.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
...

Now we can add that claim;

oc project default
oc create -f nfs-claim.yml
oc get pvc

The last command should now show the new PVC that you created.
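Something like this, with the claim already bound to the PV from earlier:

NAME               STATUS    VOLUME            CAPACITY   ACCESSMODES   AGE
registry-storage   Bound     registry-volume   50Gi       RWX           8s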

Changing the Supplemental Group of the Deployment

Right.  Remember we assigned a GID of 5555 to the NFS export?  Now we need to assign that to the Registry deployment.

Unfortunately, I don’t know how to do this with the CLI yet.  So hit the GUI, find the docker-registry deployment, and click Edit YAML under Actions.

In there, scroll down and look for the securityContext key (under spec.template.spec). You’ll want to change it as follows;

securityContext:
  supplementalGroups:
  - 5555

This gives the pods deployed by that deployment a supplemental group ID of 5555, so they should get access to the NFS export when we attach it.
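If you’d rather stay on the command line, oc patch should be able to make the same change. I haven’t verified this on my setup, so treat it as a sketch:

oc patch dc/docker-registry -p '{"spec":{"template":{"spec":{"securityContext":{"supplementalGroups":[5555]}}}}}'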

Attaching the NFS Storage to the Deployment

Again, I don’t know how to do this in the CLI, sorry.  Click Actions, then Attach Storage, and attach the claim you made.

Once that has finished deploying, you’ll find you have the claim bound to the deployment, but it’s not being used anywhere.  Click Actions, Edit YAML again, and then find the volumes section.  Edit that to;

volumes:
  - name: registry-storage
    persistentVolumeClaim:
      claimName: registry-storage

Phew.  Save it, wait for the deployment to be done.  Nearly there!
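For what it’s worth, the oc volume subcommand should be able to do the attach and mount in one go. Again, untested here, so consider it a sketch:

oc volume dc/docker-registry --add --overwrite --name=registry-storage \
    --type=persistentVolumeClaim --claim-name=registry-storage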

Testing it out

Now, if you go into Pods, select the pod that’s currently deployed for the Registry, you should be able to click Terminal, and then view the mounts.  You should see your NFS export there, and you should be able to touch files in there and see them on the NFS server.
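From that terminal, something like the following should confirm it (the registry keeps its storage at /registry by default; adjust if yours differs):

df -h /registry             # should show nfs.localdomain:/srv/registry
touch /registry/test-file   # then check the file shows up on the NFS server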

Good luck!

Deploying a Quickstart Template to OpenShift (CLI)

Repeating what we did in my previous post, we’ll deploy the Django example project to OpenShift, this time using the CLI. This approach is probably more attractive to many sysadmins.

You can install the client by downloading it from the OpenShift Origin GitHub repository. Clients are available for Windows, Linux, and so on. I used the Windows client, running it under Cygwin.

First, log into your OpenShift setup;

oc login --insecure-skip-tls-verify=true https://os-master1.localdomain:8443

We disable TLS verification since our test OpenShift setup doesn’t have proper SSL certificates yet. Enter the credentials you use to get into OpenShift.

Next up, we’ll create a new project, change into that project, then deploy the test Django example application into it.  Finally, we’ll tail the build logs so we can see how it goes.

oc new-project test-project --display-name="Test Project" --description="Deployed from Command Line"
oc project test-project
oc new-app --template=django-example
oc logs -f bc/django-example

After that finishes, we can review the status of the deployment with oc status;

$ oc status
In project Test Project (test-project) on server https://os-master1.localdomain:8443

http://django-example-test-project.openshift.localdomain (svc/django-example)
 dc/django-example deploys istag/django-example:latest <-
 bc/django-example builds https://github.com/openshift/django-ex.git with openshift/python:3.4
 deployment #1 deployed about a minute ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Ok, looks great.  You can now connect to the URL above and you should see the Django application splash page.

Now that worked, we’ll change back into the default project, display all the projects we have, and then delete that test project;

$ oc project default
Now using project "default" on server "https://os-master1.localdomain:8443".

$ oc get projects
NAME           DISPLAY NAME   STATUS
default                       Active
test-project   Test Project   Active

$ oc delete project test-project
project "test-project" deleted

$

Fantastic.  Setup works!

Deploying a Quickstart Template to OpenShift (GUI)

In my last post, I talked about how to set up a quick-and-dirty OpenShift environment on Atomic.  Here, we’ll talk about firing up a test application, just to verify that everything works.

First, log into your OpenShift console, which you can find at (replace hostname);

https://os-master1.localdomain:8443/console

Once in, click the New Project button.  You’ll see something like this;

[Screenshot: os-quickstart1, the New Project dialog]

Enter quickstart-project for the name and display name, and click Create.  You’ll now be at the template selection screen, and will be presented with an enormous list of possible templates.

[Screenshot: os-quickstart2, the template selection screen]

Enter “quickstart django” into the filter box, then click ‘django-example’. Here is where you would normally customize your template. Don’t worry about that for now. Scroll down to the bottom.

[Screenshot: os-quickstart3, the template parameters]

You don’t need to change anything, just hit Create.  You now get the following window;

[Screenshot: os-quickstart4, the creation confirmation]

Click Continue to overview. While you can run the oc tool directly from the masters, it’s better practice not to do that, and instead run it from your dev box, wherever that is.

If you’ve been stuffing around like I did, by the time you get to the overview, your build will be done!

[Screenshot: os-quickstart5, the project overview]

Click the link directly under SERVICE, named django-example-quickstart-project.YOURDOMAINHERE.  You should now see the Django application splash screen pop up.

If so, congratulations!  You’ve just deployed your first application in OpenShift.

Have a look at the build logs, and click the up and down arrows next to the deployment circle to watch what they do.

Deploying OpenShift Origin on CentOS Atomic

For my work, we’re looking at OpenShift, and I decided I’d stand up an OpenShift Origin environment at home on my KVM box using CentOS Atomic. This is a really basic setup, involving one master, two nodes, and no NFS persistent volumes (yet!). We also don’t permit pushing to DockerHub, since this will be a completely private setup. I won’t go into how to actually set up Atomic instances here.

Refer to the OpenShift Advanced Install manual for more.

Prerequisites

  • One Atomic master (named os-master1 here)
  • Two Atomic nodes (named os-node1 and os-node2 here)
  • A wildcard domain in your DNS (more on this later, it’s named *.os.localdomain here)
  • A hashed password for your admin account (named admin here); you can generate this with htpasswd (see the example after this list).
  • A box elsewhere that you can SSH into your Atomic nodes from, without using a password (read about ssh-copy-id if you need to).  We’ll be putting Ansible on this to do the OpenShift installation.
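Generating the hash is a one-liner with htpasswd; -n prints the result to stdout rather than writing a file, and -b takes the password on the command line:

htpasswd -nb admin 'YourPasswordHere'

Copy everything after the ‘admin:’ prefix of the output; you’ll need it for the inventory file later on.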

Setting up a Wildcard Domain

Assuming you’re using BIND, you will need the following stanza in your zone;

; Wildcard domain for OpenShift
$ORIGIN os.localdomain.
* IN CNAME os-master1.localdomain.
$ORIGIN localdomain.

Change to suit your domain, of course. This causes any attempt to resolve anything under os.localdomain to be answered with a CNAME pointing at your master. This is required so you don’t have to keep messing with your DNS setup whenever you expose a new application route.
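You can sanity-check the wildcard with dig; any name under os.localdomain should come back as a CNAME to the master (the address below is a placeholder):

$ dig +short anything.os.localdomain
os-master1.localdomain.
192.0.2.10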

Preparing the Installation Host

As discussed, you’ll need a box you can do your installation from.  Let’s install the pre-reqs onto it (namely, Ansible and Git).  I’m assuming you are using CentOS here.

yum install -y epel-release
yum install ansible python-cryptography python-crypto pyOpenSSL git
git clone https://github.com/openshift/openshift-ansible

As the last step, we pull down the OpenShift Origin installer, which we’ll be using shortly to install OpenShift.

You will now require an inventory file for the installer to use. The following example should be placed in ~/openshift-hosts.
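This is a sketch rather than a battle-tested file; variable names move around between openshift-ansible versions, and the hostnames, zones, and hash below are placeholders. Note containerized=true, since our hosts are Atomic:

[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=origin
containerized=true
openshift_master_default_subdomain=os.localdomain
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '<your htpasswd hash here>'}

[masters]
os-master1.localdomain

[nodes]
os-master1.localdomain openshift_schedulable=true openshift_node_labels="{'region': 'infra', 'zone': 'default'}"
os-node1.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'left'}"
os-node2.localdomain openshift_node_labels="{'region': 'primary', 'zone': 'right'}"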

Substitute out the hashed password you generated for your admin account in there.

About the infrastructure

That inventory file will deploy a fully working OpenShift Origin install in one go, with a number of assumptions made.

  • You have one (non-redundant!) master, which runs the router and registry instances.
  • You have two nodes, which are used to deploy other pods.  Each node is in its own availability zone (named left and right here).

More dedicated setups will have multiple masters, which do not run pods, and will likely set up specific nodes to run the infrastructure pods (registries, routers and such).  Since I’m constrained for resources, I haven’t done this, and the master runs the infrastructure pods too.

It’s also very likely that you’ll want a registry that uses NFS.  More on this later.

Installing OpenShift

Once this is done, installation is very simple;

cd openshift-ansible
ansible-playbook -i ../openshift-hosts playbooks/byo/config.yml

Sit back and wait.  This’ll take quite a while.  When it’s done, you can then (hopefully!) go to;

https://os-master1.localdomain:8443/console/

To log into your OpenShift web interface.  You can ssh to one of your masters and run oc commands from there too.
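A quick sanity check from a master; all three hosts should report Ready (output approximate):

$ oc get nodes
NAME                     STATUS    AGE
os-master1.localdomain   Ready     5m
os-node1.localdomain     Ready     5m
os-node2.localdomain     Ready     5m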

I’ll run through a simple quickstart to test the infrastructure shortly.