Netflow with ELK Stack and OpenWRT

James Young · September 27, 2016

Now we’re getting into some pretty serious magic.  This post will outline how to put together OpenWRT and the ELK Stack to collect network utilization statistics with Netflow.  From there, we can use Kibana to build visualizations of traffic data and flows, and leverage the power of Elasticsearch for whatever else you want.

I’m using a virtualized router instance running OpenWRT 15.05.1 (Chaos Calmer) on KVM with the Generic x86 build.  Using a hardware router is still doable, but you’ll need to be careful about CPU utilization of the Netflow exporter.  Setting this up will require a number of components, which we’ll go through now.

You will need an OpenWRT box of some description, and an ELK Stack already configured and running.

OpenWRT Setup

You’ll need to install softflowd, which is as easy as:

opkg update
opkg install softflowd

Then edit /etc/config/softflowd and point the flow destination at your collector, with something like:

option host_port 'netflow.localdomain:9995'
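
For context, here’s roughly what the relevant section of my /etc/config/softflowd looks like.  The host_port line is the one that matters; the other options shown are the package defaults on my build (double-check yours), and netflow.localdomain is just a stand-in for your own collector:

config softflowd
  option enabled          '1'
  option interface        'br-lan'
  option host_port        'netflow.localdomain:9995'
  option export_version   '5'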

Start up the softflowd exporter with /etc/init.d/softflowd start and it should be working.
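
To have it come back after a reboot, enable the init script as well:

/etc/init.d/softflowd enable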

Note that the default config exports Netflow version 5 (which, incidentally, only carries IPv4 flows).  Let that stand for now.  Also, leave the default interface on br-lan - that way it’ll catch flows for all traffic reaching the router.
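
If you want to sanity-check the exporter before touching the ELK side, softflowd ships with a small control utility; assuming the control socket is at its default path, this dumps the exporter’s flow counters:

softflowctl statistics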

Logstash Configuration

If you’re using the ELK Stack Docker project like me, you’ll need to set up the Docker container to also listen on port 9995 UDP (there’s a sketch of the port mapping just after the input block below).  At any rate, you need to edit your logstash.conf so that you have the following input receiver:

# Netflow receiver
input {
  udp {
    port => 9995
    type => netflow
    codec => netflow
  }
}

This is an extremely simple receiver which takes in Netflow data on port 9995, sets the type to netflow and then processes it with the built-in Netflow codec.
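
On the Docker side, getting that port open is just a matter of publishing UDP 9995 through to the Logstash container.  Here’s a sketch of the mapping, assuming a docker-compose based stack with a service named logstash - adjust the names and any existing port list to suit your setup:

# docker-compose.yml (excerpt) - publish UDP 9995 through to Logstash
logstash:
  ports:
    - "9995:9995/udp"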

In your output transmitter, you’ll then want something like this example:

output {
  if ( [type] == "netflow" ) {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-netflow-%{host}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => "elasticsearch:9200"
      index => "logstash-%{type}-%{+YYYY.MM.dd}"
    }
  }
}

What this does is pretty straightforward.  Everything gets sent to the Elasticsearch engine at elasticsearch:9200.  But messages with the type of netflow get pushed into an index named for the host the flows were collected from (which will probably be your router) - so you’ll end up with daily indexes like logstash-netflow-192.168.1.1-2016.09.27, assuming your router is at 192.168.1.1.

Restart Logstash and flows should start coming in within a few minutes.
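
You can confirm the indexes are being created with Elasticsearch’s cat API - adjust the host to wherever Elasticsearch is reachable from your machine:

curl 'http://elasticsearch:9200/_cat/indices/logstash-netflow-*?v'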

Kibana Setup

From there, just go into Kibana and add a new index pattern for logstash-netflow-*.  You can then visualize / search all your Netflow data to your heart’s content.
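
To give you an idea of what’s searchable, the codec nests the v5 fields under netflow, so a decoded flow event looks roughly like this (field names are the codec’s; the addresses and values are invented for illustration):

{
  "host": "192.168.1.1",
  "type": "netflow",
  "netflow": {
    "version": 5,
    "ipv4_src_addr": "192.168.1.50",
    "ipv4_dst_addr": "203.0.113.10",
    "l4_src_port": 51234,
    "l4_dst_port": 443,
    "protocol": 6,
    "in_pkts": 12,
    "in_bytes": 4096
  }
}

That makes filtering straightforward - for example, a Kibana query like type:netflow AND netflow.l4_dst_port:443 pulls out just the HTTPS flows.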

Nice!
