Netflow with ELK Stack and OpenWRT

Now we’re getting into some pretty serious magic.  This post will outline how to put together OpenWRT and the ELK Stack to collect network utilization statistics with Netflow.  From there, we can use Kibana to visualize traffic data and flows, and leverage whatever else Elasticsearch can do for you.

I’m using a virtualized router instance running OpenWRT 15.05.1 (Chaos Calmer) on KVM with the Generic x86 build.  Using a hardware router is still doable, but you’ll need to be careful about CPU utilization of the Netflow exporter.  Setting this up will require a number of components, which we’ll go through now.

You will need an OpenWRT box of some description, and an ELK Stack already configured and running.

OpenWRT Setup

You’ll need to install softflowd, which is as easy as;

opkg update
opkg install softflowd

Then edit /etc/config/softflowd and set the flow destination to something like;

option host_port 'netflow.localdomain:9995'

Start up the softflowd exporter with /etc/init.d/softflowd start and it should be working.

Note that the default config uses Netflow version 5.  Let that stand for now.  Also, leave the default interface as br-lan – that way it’ll catch flows for all traffic passing through the router.
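
If you’d rather drive the change through UCI than edit the file by hand, something like the following should do it – a minimal sketch, assuming the default config has a single softflowd section (double-check with uci show softflowd);

# Point the exporter at the collector and restart it (section index assumed to be 0)
uci set softflowd.@softflowd[0].host_port='netflow.localdomain:9995'
uci set softflowd.@softflowd[0].interface='br-lan'
uci commit softflowd
/etc/init.d/softflowd restart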

Logstash Configuration

If you’re using the ELK Stack Docker project like me, you’ll need to set up the Logstash container to also listen on port 9995/UDP.  At any rate, you need to edit your logstash.conf so that you have the following input receiver;

# Netflow receiver
input {
  udp {
    port => 9995
    type => netflow
    codec => netflow
  }
}

This is an extremely simple receiver which takes in Netflow data on port 9995, sets the type to netflow and then processes it with the built-in Netflow codec.

In your output transmitter, you’ll then want something like this example;

output {
  if ( [type] == "netflow" ) {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-netflow-%{host}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}

What this does is pretty straightforward.  Everything gets sent to the Elasticsearch engine at elasticsearch:9200, but messages with the type netflow get pushed into an index whose name includes the address the flow was collected from (which will probably be your router).

Restart Logstash and you should start getting flows in within a few minutes.
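
If you want to confirm that flow documents are actually landing in Elasticsearch, a quick query against the indices does the trick – hostname and port as used in the output block above, and the exact index names will depend on your %{host} value;

curl -s 'http://elasticsearch:9200/_cat/indices/logstash-netflow-*?v'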

Kibana Setup

From there, just go into Kibana and add a new index pattern for logstash-netflow-*.  You can then visualize / search all your Netflow data to your heart’s content.

Nice!

Customizing OwnCloud using Docker

I’m messing around with OwnCloud at the moment, a solution to provide cloud-like access to files and folders through a webapp using your own local storage.  As is my wont, I’m doing it in Docker.

There’s a minor catch though – the official OwnCloud Docker image does not include smbclient, which is required to provide access to Samba shares.

Here’s how to take care of that.

FROM owncloud:latest
RUN set -x; \
 apt-get update \
 && apt-get install -y smbclient \
 && rm -rf /var/lib/apt/lists/* \
 && rm -rf /var/cache/apt/archives/*

The above Dockerfile will use the current owncloud:latest image from Docker Hub, and then install smbclient into it.  You want to do the update, install, and cleanup in a single RUN step so it gets saved as only one layer in the Docker filesystem, saving space.
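
As a quick sanity check, you can build the image on its own and confirm smbclient made it in.  The owncloud/ directory and image tag here are just examples that match the compose file below;

docker build -t owncloud-smbclient owncloud/
docker run --rm --entrypoint smbclient owncloud-smbclient --version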

You can then put that together with the official MySQL Docker Image and a few volumes to have a fully working OwnCloud setup with docker-compose.

version: '2'

services:
  mysql:
    image: mysql:latest
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=passwordgoeshere
    volumes:
      - ./data/mysql:/var/lib/mysql:rw,Z

  owncloud:
    hostname: owncloud.localdomain
    build: owncloud/
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=passwordgoeshere
    ports:
      - 8300:80
    volumes:
      - ./data/data:/var/www/html/data:rw,Z
      - ./data/config:/var/www/html/config:rw,Z
      - ./data/apps:/var/www/html/apps:rw,Z
    depends_on:
      - mysql

Create the directories that are mounted there, set the password to something sensible, and docker-compose up!
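
For reference, the setup steps look roughly like this – directory names are just the ones used in the compose file above, with the Dockerfile from earlier sitting in owncloud/;

mkdir -p owncloud data/mysql data/data data/config data/apps
# put the Dockerfile from above into owncloud/Dockerfile, then build and start it all
docker-compose up -d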

One thing though.  OwnCloud doesn’t have any built-in account lockout policy, so I wouldn’t go putting this as it is on the ‘Net just yet.  You’ll want something in front of it for security, like nginx.  You’ll also want HTTPS if you’re doing that.

More on that later.

How to convert an MP4 to a DVD and burn it on Linux

If you’re using Vagrant with VirtualBox on Windows, create a new directory, throw the source mp4 in it, then create a Vagrantfile like this;

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-16.04"

  config.vm.provider "virtualbox" do |vb|
  vb.customize ["storageattach", :id, "--storagectl", "IDE Controller", "--port", 0, "--device", 0, "--type", "dvddrive", "--passthrough", "on", "--medium", "host:X:"]
  end
end

Edit the host:X: to be the drive letter of your physical DVD drive.

Then bring up the VM with;

vagrant up
vagrant ssh
sudo -s -H

Now that that’s done, do this.  You can start from here if you’re already on Linux or have some other means of getting a VM ready.  I assume you’re going to want to make a PAL DVD, and that your DVD burner is /dev/sg0 (check with wodim --devices);

apt-get install dvdauthor mkisofs ffmpeg wodim
ffmpeg -i input.mp4 -target pal-dvd video.mpg
export VIDEO_FORMAT=PAL
dvdauthor -o dvd/ -t video.mpg
dvdauthor -o dvd/ -T
mkisofs -dvd-video -o dvd.iso dvd/
wodim -v dev=/dev/sg0 speed=8 -eject dvd.iso

All done.  Assuming everything went well, you have a freshly burned DVD, all using open source Linux software, with none of the horrible adware that tends to come with Windows DVD burning software.
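
If you want to double-check the disc contents, a quick read-only loop mount of the ISO works – this assumes the dvd.iso built above;

mkdir -p /mnt/iso
mount -o loop,ro dvd.iso /mnt/iso
ls /mnt/iso/VIDEO_TS
umount /mnt/iso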

You can then get rid of the VM with vagrant destroy.

Ansible with Vagrant on Windows

Since I’m converting all my builds and other things to use Ansible, the idea of using Ansible to customize a Vagrant box is very attractive.

I’ve chosen to use the ansible-local provisioner in this case, so that Ansible runs inside the Vagrant box.  I’ll do an example later where this isn’t the case.

Have a look at this gist for some info about how to do this.  Or read on.

Step 1 – the Vagrantfile

In a blank directory, edit a new Vagrantfile.  Make it look something like this;

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  # The default ubuntu/xenial64 image has issues with vbguest additions
  config.vm.box = "bento/ubuntu-16.04"

  # Set memory for the default VM
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end

  # Configure vbguest auto update options
  config.vbguest.auto_update = false
  config.vbguest.no_install = false
  config.vbguest.no_remote = true

  # Configure the hostname for the default machine
  config.vm.hostname = "ansible-example"

  # Mount this folder as RO in the guest, since it contains secure stuff
  config.vm.synced_folder "vagrant", "/vagrant", :mount_options => ["ro"]

  # And finally run the Ansible local provisioner
  config.vm.provision "ansible_local" do |ansible|
    ansible.provisioning_path = "/vagrant/provisioning"
    ansible.inventory_path = "inventory"
    ansible.playbook = "playbook.yml"
    ansible.limit = "all"
  end

end

There are a few things going on here.  First up, we define the box we’re going to use, the memory allocated to it, the vbguest auto-update options, and the hostname.

Next up, we define a synced folder that will appear in the Vagrant box.  There is a default, which is for the folder the Vagrantfile is in to appear as /vagrant.  However, on VirtualBox this is shared with R/W access, which means the box can modify your original files (including its own Vagrantfile).  Not necessarily bad, but I don’t like the idea of that very much.

Lastly, we define the Ansible provisioner.  This will simply run the playbook that’s in the vagrant/provisioning subfolder of the Vagrantfile against all hosts.

Step 2 – Create Playbook

Do the following to create the rest of the structure (from within the directory your Vagrantfile is in);

mkdir -p vagrant/provisioning

Now, you’ll need to create an ansible.cfg in that directory, like this;

[defaults]
host_key_checking = no

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes

The parameters are necessary to avoid Ansible having a freak-out about SSH keys and whatnot when deploying.  Of course, if you have just one host, you don’t need to worry about it.

Next, you need an inventory spec – a file called inventory alongside the ansible.cfg;

ansible-example ansible_connection=local

This forces deployments against the machine we’re deploying to (the Vagrant box itself) to use the local connection type.

And lastly, a really basic playbook to test it out;

---

- hosts: ansible-example
  tasks:
    - copy: content="IT WORKS!\n" dest=/home/vagrant/ansible_runs

...

Step 3 – Run it!

Now we’ve set up the most basic structure, bring it up!

$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'bento/ubuntu-16.04'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'bento/ubuntu-16.04' is up to date...
==> default: Setting the name of the VM: ansible-example_default_1472187535117_41803
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
 default: Adapter 1: nat
==> default: Forwarding ports...
 default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
 default: SSH address: 127.0.0.1:2222
 default: SSH username: vagrant
 default: SSH auth method: private key
 default:
 default: Vagrant insecure key detected. Vagrant will automatically replace
 default: this with a newly generated keypair for better security.
 default:
 default: Inserting generated public key within guest...
 default: Removing insecure key from the guest if it's present...
 default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
 default: The guest additions on this VM do not match the installed version of
 default: VirtualBox! In most cases this is fine, but in rare cases it can
 default: prevent things such as shared folders from working properly. If you see
 default: shared folder errors, please make sure the guest additions within the
 default: virtual machine match the version of VirtualBox you have installed on
 default: your host and reload your VM.
 default:
 default: Guest Additions Version: 5.0.26
 default: VirtualBox Version: 5.1
==> default: Setting hostname...
==> default: Mounting shared folders...
 default: /vagrant => C:/cygwin64/home/username/ansible-example/vagrant
==> default: Running provisioner: ansible_local...
 default: Installing Ansible...
 default: Running ansible-playbook...

PLAY [ansible-example] ****************************************************************

TASK [setup] *******************************************************************
ok: [ansible-example]

TASK [copy] ********************************************************************
changed: [ansible-example]

PLAY RECAP *********************************************************************
ansible-example : ok=2 changed=1 unreachable=0 failed=0


$

And to prove that the playbook actually, really, did run;

$ vagrant ssh
Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-31-generic x86_64)

 * Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Fri Aug 26 05:01:58 2016 from 10.0.2.2
vagrant@ansible-example:~$ cat ansible_runs
IT WORKS!
vagrant@ansible-example:~$

You can then re-run the playbook any time you like with vagrant provision.
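
Because the ansible_local provisioner installs Ansible inside the guest, you can also run quick ad-hoc commands from within the box using the same inventory and config – paths here assume the layout above, with the synced folder mounted at /vagrant;

vagrant ssh
cd /vagrant/provisioning
ansible -i inventory -m ping all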

The main catch with running Ansible like this is that it actually installs Ansible on the Vagrant box.  You can get around this by running Ansible on your Vagrant host.  More on this later.

Vagrant on Cygwin/Virtualbox Quickstart

So, you want to try out Vagrant, and you’re using Windows with Cygwin?  Have I got something for you!

Preparing the Environment

Firstly, get Oracle VirtualBox installed.  I personally prefer VMware Workstation, but VirtualBox works better for this.  Also grab the VirtualBox Extension Pack while you’re at it.

Next, go and install Vagrant, and use the default settings.  Now we’re going to have to manually patch a file in the Vagrant source.  Go to /cygdrive/c/HashiCorp/Vagrant/embedded/gems/gems/vagrant-1.8.5/plugins/guests/linux/cap in Cygwin, and edit public_key.rb.  At line 57, make the code look like this;

if test -f ~/.ssh/authorized_keys; then
  grep -v -x -f '#{remote_path}' ~/.ssh/authorized_keys > ~/.ssh/authorized_keys.tmp
  mv ~/.ssh/authorized_keys.tmp ~/.ssh/authorized_keys
  chmod 0600 ~/.ssh/authorized_keys
fi

This won’t be necessary in a newer version of Vagrant, but it is required in 1.8.5 for some boxes to work.

Next up, bring up your Cygwin prompt and do this.  This will remove the default VMware provider plugin (if it’s installed), and install a plugin that automatically updates the VirtualBox Guest Additions (optional, but very useful);

vagrant plugin uninstall vagrant-vmware-workstation
vagrant plugin install vagrant-vbguest
vagrant version

It should spit out that you’re running an up-to-date Vagrant.  Great.

Bringing up your first Vagrant box

Now, I’m a CentOS fan, so we’ll be bringing up a CentOS box first.  From your Cygwin prompt, do this;

vagrant box add centos/7 --provider virtualbox
mkdir vagrant-test && cd vagrant-test
vagrant init centos/7
vagrant up
vagrant ssh

If everything’s been done correctly, you’ll find yourself in a shell on your new Vagrant box.  By default, the VM will be using NAT.  Poke around, and when done, exit and do;

vagrant destroy -f
cd ..
rm -rf vagrant-test

To clean everything up.  After cleanup you’ll still be left with the centos/7 box cached; you can ditch that with vagrant box remove centos/7.

All done!  You’ve got a working Vagrant environment on Windows, running under Cygwin against a VirtualBox provider.  Magic!

Bash on Windows – X Server!

It turns out that you can use Bash on Windows 10 to run X applications, including through ssh tunnels.  Here’s how.

First, go and install XMing.  I’d strongly suggest not allowing it to get access to your network, so it stays on localhost.  This is so that an attacker can’t draw stuff on your screen through your X server.

Run XMing, and put it in your startup if you want.  You now have an X server.  Next up, you’ll need to fire up Bash on Windows and run sudo apt-get install xauth.  Then edit your ~/.bashrc.  Right down at the bottom, add the following;

export DISPLAY=localhost:0
xauth generate $DISPLAY

This configures your session so that X applications are pointed at your X server.  It also generates the X authentication tokens needed to make things like ssh X forwarding work.

Now, log out and back in again.  Start up Bash for Windows, then you can run stuff.
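
To test it, install a trivial X client and run it – if XMing is up, you should get a window on your Windows desktop.  The x11-apps package is just an example that happens to contain xeyes;

sudo apt-get install -y x11-apps
xeyes &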

Bash on Ubuntu on Windows 10 – Teething Issues

Just set up the new Bash on Windows 10 feature that comes with the Anniversary Update.  It’s not bad.  But there are a few annoying things it does that grind my gears.

The default umask is 0000

Yeah.  That’s what I said.  This means all files you create from the Bash shell are read/write/execute to EVERYBODY.  Not smart.  SSH hates that.

echo "umask 0022" >> /etc/profile

To fix that one.

Sudo doesn’t inherit root’s HOME

This causes many commands (pip for example) to dump files into your user directory as root, resulting in an inability to modify files in your own homedir.  Not great.

Add the following somewhere in your /etc/sudoers (use visudo);

Defaults always_set_home

More as I come across it.

ELK Stack in Docker with NGINX

I’ve done a bit of work in the past few days modifying a Docker ELK Github repository I came across, to make it more suited to my needs.

You can find my efforts at my Github repository.  This setup, when brought up with docker-compose up, will put together a full ELK stack composed of the latest versions of Elasticsearch, Logstash, and Kibana, all fronted by NGINX with a login required.

The setup persistently stores all Elasticsearch data into the ./esdata directory, and accepts syslog input on port 42185 along with JSON input on port 5000.
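
As a quick smoke test you can throw a message at the JSON input with netcat – a sketch only, assuming you’re on the Docker host and the JSON input is a TCP listener (check the logstash.conf in the repository);

echo '{"message": "hello from netcat", "type": "json-test"}' | nc localhost 5000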

In order to access Elasticsearch, use the Sense plugin in Kibana.  You can get at Kibana on port 5601, with a default login of admin/admin.  You can change that by using htpasswd to create a new user file at ./nginx/htpasswd.users.
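
For example, to replace the default credentials with a user of your own (htpasswd comes with apache2-utils on Debian/Ubuntu, and -c creates the file from scratch);

htpasswd -c ./nginx/htpasswd.users admin
# then restart the nginx container so it picks up the new file (service name may differ)
docker-compose restart nginx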

A couple of things about Docker in this setup.  When you link containers, it’s not necessary to expose ports between the containers.  Exposing is only required to make a port accessible from outside Docker.  When containers are linked, they get access to all ports on the linked container.

This means that it’s not required to specifically expose all the internal ports of the stack – you only have to expose the entry/exit points you want on the stack as a unit.  In this case, that’s the entry ports to Logstash and the entry point in nginx.

Also, if you use a version 2 docker-compose specification, Docker Compose will also create an isolated network bridge just for your application, which is great here.  It will also manage dependencies appropriately to make sure the stack comes up in the right order.

Oh yeah.  If you bring up the stack with docker-compose up, press Ctrl+\ to break out of it without taking the stack down.

Magic!

NFS Persistent Volumes with OpenShift

Official documentation here.  Following is a (very!) brief summary of how to get your Registry in OpenShift working with an NFS backend.  I haven’t yet been able to get it to deploy cleanly straight from the Ansible installer with NFS, but it is pretty straightforward to change it after initial deployment.

NOTE – A lot of this can probably be done in much, much better ways.  This is just how I managed to do it by bumbling around until I got it working.

Creating the NFS Export

First up, you’ll need to provision an NFS export on your NFS server, using the following options;

/srv/registry -rw,async,root_squash,no_wdelay,mp @openshift

Where ‘@openshift’ is the name of a group in /etc/netgroup covering all your OpenShift hosts.  I’m also assuming that it is a separate mountpoint, hence ‘mp’.
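
If you don’t have a netgroup yet, it’s just a list of (host,user,domain) triples – the hostnames below are placeholders for your own OpenShift nodes;

cat >> /etc/netgroup <<'EOF'
openshift (os-master1.localdomain,,) (os-node1.localdomain,,) (os-node2.localdomain,,)
EOF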

We then go to that directory and set it to be owned by root, with a GID of 5555 (as an example) and 0770 permissions.
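
In shell terms, something like this on the NFS server – 5555 being our example GID;

mkdir -p /srv/registry
chown root:5555 /srv/registry
chmod 0770 /srv/registry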

Creating the Persistent Volume

First, we need to add that as a persistent volume to OpenShift.  I’ll assume it’s 50GB in size, and that you want the data retained if the claim is released.  Create the following file and save it as nfs-pv.yml somewhere you can get at it with the oc command.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-volume
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /srv/registry
    server: nfs.localdomain
  persistentVolumeReclaimPolicy: Retain
...

Right.  Now we change into the default project (where the Registry is located), and add that as a PV;

oc project default
oc create -f nfs-pv.yml
oc get pv

The last command should now show the new PV that you created.  Great.

Creating the Persistent Volume Claim

Now you have the PV, but it’s unclaimed by a project.  Let’s fix that.  Create a new file, nfs-claim.yml where you can get at it.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
...

Now we can add that claim;

oc project default
oc create -f nfs-claim.yml
oc get pvc

The last command should now show the new PVC that you created.

Changing the Supplemental Group of the Deployment

Right.  Remember we assigned a GID of 5555 to the NFS export?  Now we need to assign that to the Registry deployment.

Unfortunately, I don’t know how to do this with the CLI yet.  So hit the GUI, find the docker-registry deployment, and click Edit YAML under Actions.

In there, scroll down and look for the securityContext tag.  You’ll want to change this as follows;

securityContext:
  supplementalGroups:
  - 5555

This sets the pods deployed with that deployment to have a supplemental group ID of 5555 attached to them.  Now they should get access to the NFS export when we attach it.

Attaching the NFS Storage to the Deployment

Again, I don’t know how to do this in the CLI, sorry.  Click Actions, then Attach Storage, and attach the claim you made.

Once that has finished deploying, you’ll find you have the claim bound to the deployment, but it’s not being used anywhere.  Click Actions, Edit YAML again, and then find the volumes section.  Edit that to;

volumes:
  -
    name: registry-storage
    persistentVolumeClaim:
      claimName: registry-storage

Phew.  Save it, wait for the deployment to be done.  Nearly there!

Testing it out

Now, if you go into Pods and select the pod that’s currently deployed for the Registry, you should be able to click Terminal and then view the mounts.  You should see your NFS export there, and you should be able to touch files in there and see them on the NFS server.
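
From the pod’s Terminal, something like this should show it – I’m assuming the registry’s storage is mounted at /registry, which is the usual mount path for the docker-registry deployment;

mount | grep registry
touch /registry/test-file
ls -l /registry
# now check that test-file shows up under /srv/registry on the NFS server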

Good luck!

Deploying a Quickstart Template to Openshift (CLI)

Repeating what we did in my previous post, we’ll deploy the Django example project to OpenShift, this time using the CLI to do it.  This is probably more attractive to many sysadmins.

You can install the client by downloading it from the OpenShift Origin Github repository.  Clients are available for Windows, Linux, and so on.  I used the Windows client, and I’m running it under Cygwin.

First, log into your OpenShift setup;

oc login --insecure-skip-tls-verify=true https://os-master1.localdomain:8443

We disable TLS verification since our test OpenShift setup doesn’t have proper SSL certificates yet.  Enter the credentials you use to get into OpenShift.

Next up, we’ll create a new project, change into that project, then deploy the test Django example application into it.  Finally, we’ll tail the build logs so we can see how it goes.

oc new-project test-project --display-name="Test Project" --description="Deployed from Command Line"
oc project test-project
oc new-app --template=django-example
oc logs -f bc/django-example

After that finishes, we can review the status of the deployment with oc status;

$ oc status
In project Test Project (test-project) on server https://os-master1.localdomain:8443

http://django-example-test-project.openshift.localdomain (svc/django-example)
 dc/django-example deploys istag/django-example:latest <-
 bc/django-example builds https://github.com/openshift/django-ex.git with openshift/python:3.4
 deployment #1 deployed about a minute ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Ok, looks great.  You can now connect to the URL above and you should see the Django application splash page.
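
You can also check it from the command line rather than a browser – the hostname comes from the oc status output above, so yours will differ;

curl -I http://django-example-test-project.openshift.localdomain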

Now that that worked, we’ll change back into the default project, display all the projects we have, and then delete the test project;

$ oc project default
Now using project "default" on server "https://os-master1.localdomain:8443".

$ oc get projects
NAME           DISPLAY NAME   STATUS
default                       Active
test-project   Test Project   Active

$ oc delete project test-project
project "test-project" deleted

$

Fantastic.  Setup works!