Netflow with ELK Stack and OpenWRT

Now we’re getting into some pretty serious magic.  This post will outline how to put together OpenWRT and ELK Stack to collect network utilization statistics with Netflow.  From there, we can use Kibana to generate visualizations of traffic data and flows, and leverage the power of Elasticsearch for whatever else you want.

I’m using a virtualized router instance running OpenWRT 15.05.1 (Chaos Calmer) on KVM with the Generic x86 build.  Using a hardware router is still doable, but you’ll need to be careful about CPU utilization of the Netflow exporter.  Setting this up will require a number of components, which we’ll go through now.

You will need an OpenWRT box of some description, and an ELK Stack already configured and running.

OpenWRT Setup

You’ll need to install softflowd, which is as easy as:

opkg update
opkg install softflowd

Then edit /etc/config/softflowd and set the destination for exported flows to something like:

option host_port 'netflow.localdomain:9995'

Start up the softflowd exporter with /etc/init.d/softflowd start and it should be working.
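
If you want it to come back after a reboot as well, enable the init script:

/etc/init.d/softflowd enable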

Note, the default config will be using Netflow version 5.  Let that stand for now.  Also, leave the default interface on br-lan, so that it catches flows for all traffic reaching the router.
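
For reference, the relevant part of /etc/config/softflowd then ends up looking something like this (a sketch based on the options the Chaos Calmer package ships with; check your own file for the full set):

config softflowd
    option enabled         '1'
    option interface       'br-lan'
    option host_port       'netflow.localdomain:9995'
    option export_version  '5'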

Logstash Configuration

If you’re using the ELK Stack Docker project like me, you’ll need to set up the Logstash container to also listen on UDP port 9995 (more on that below).  At any rate, you need to edit your logstash.conf so that you have the following input receiver:

# Netflow receiver
input {
  udp {
    port => 9995
    type => "netflow"
    codec => netflow
  }
}

This is an extremely simple receiver which takes in Netflow data on UDP port 9995, sets the type to netflow, and then decodes it with the built-in Netflow codec.
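
If Logstash is running in Docker, remember the container also needs that UDP port published.  In a compose file, that’s something like the following fragment (assuming your service is named logstash, as in the ELK Stack Docker project):

logstash:
  ports:
    - "9995:9995/udp"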

In your output transmitter, you’ll then want something like this example:

output {
  if [type] == "netflow" {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-netflow-%{host}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}

What this does is pretty straightforward.  Everything gets sent to the Elasticsearch engine at elasticsearch:9200.  But messages with a type of netflow get pushed into an index whose name includes the IP address the flows came from (this will probably be your router).

Restart Logstash and you should start getting flows within a few minutes.
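
To confirm flows are actually arriving, you can ask Elasticsearch to list its indices and look for a logstash-netflow-* entry (assuming Elasticsearch is reachable on localhost:9200 from wherever you run this):

curl 'http://localhost:9200/_cat/indices?v'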

Kibana Setup

From there, just go into Kibana and add a new index pattern for logstash-netflow-*.  You can then visualize / search all your Netflow data to your heart’s content.

Nice!

ELK Stack in Docker with NGINX

I’ve done a bit of work in the past few days modifying a Docker ELK GitHub repository I came across, to make it better suited to my needs.

You can find my efforts at my GitHub repository.  This setup, when brought up with docker-compose up, will put together a full ELK stack composed of the latest versions of Elasticsearch, Logstash, and Kibana, all fronted by NGINX with a login required.

The setup persistently stores all Elasticsearch data in the ./esdata directory, and accepts syslog input on port 42185 along with JSON input on port 5000.
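
The input side of the Logstash config for those two ports looks roughly like this (a sketch; the repository has the exact configuration):

input {
  syslog {
    port => 42185
    type => "syslog"
  }
  tcp {
    port => 5000
    type => "json"
    codec => json
  }
}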

In order to access Elasticsearch, use the Sense plugin in Kibana.  You can get at Kibana on port 5601, with a default login of admin/admin.  You can change that by using htpasswd to create a new user file at ./nginx/htpasswd.users.
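
For example, to replace the user file with a new admin user (htpasswd is part of the Apache utilities package on most distributions):

htpasswd -c ./nginx/htpasswd.users admin

The -c flag creates the file from scratch, and you’ll be prompted for the new password.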

A couple of things about Docker in this setup.  When you link containers, it’s not necessary to publish ports between them.  Publishing a port is only required to make it accessible from outside Docker.  When containers are linked, they get access to all ports on the linked container.

This means that you don’t have to publish all the internal ports of the stack; you only have to publish the entry/exit points you want on the stack as a unit.  In this case, that’s the input ports on Logstash and the entry point on NGINX.

Also, if you use a version 2 docker-compose specification, Docker Compose will create an isolated network bridge just for your application, which is great here.  It will also manage dependencies appropriately to make sure the stack comes up in the right order.  The overall shape is sketched below.
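
Here’s a trimmed sketch of that shape (service names and build paths are illustrative; see the repository for the real file):

version: '2'
services:
  elasticsearch:
    build: elasticsearch/
    # no ports published; only reachable by the other services

  logstash:
    build: logstash/
    ports:
      - "42185:42185"
      - "5000:5000"
    depends_on:
      - elasticsearch

  kibana:
    build: kibana/
    depends_on:
      - elasticsearch

  nginx:
    build: nginx/
    ports:
      - "5601:5601"
    depends_on:
      - kibana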

Oh yeah.  If you bring up the stack with docker-compose up, press Ctrl+\ to break out of it without taking the stack down (or just start it detached with docker-compose up -d in the first place).

Magic!