NGINX Rate Limiting for Unsecured Apps

Some applications don’t properly support IP blackholing in the case of repeated failed login attempts.  There are a few ways to handle that, but one nice way is to put nginx in front of the application and apply rate limiting there.

Setting up nginx as a reverse proxy for your application is out of scope for this article.  It’s a good idea to get used to using it to front your applications and control access to them.

Rate Limiting in NGINX

We’ll be making use of the ngx_http_limit_req module.  Simply put, you create a zone using limit_req_zone, then define allowed locations that will use the zone using limit_req.

The mental abstraction you can use for the zone is a bucket.  The zone definition describes a data table which will hold IP addresses (in this case), and how many requests they’ve made.  The requests (which are water in the bucket in this analogy) flow out a ‘hole’ in the bucket at a fixed rate.  Therefore, if requests come in faster than the rate, they will ‘fill’ the bucket.

The ‘size’ of the bucket is determined by the burst parameter you’ve set on limit_req.  A large burst size enables a lot of requests to be made in a short period, exceeding the recharge rate, but it’ll fill the bucket up eventually.  The bucket then slowly drains at the configured rate.

IMPORTANT – If you do not use the nodelay option in limit_req, what happens is that nginx delays incoming requests to force them to match the rate – irrespective of bursts.  In this article, we’ll use nodelay, because we want to flat out return errors when the burst size is exceeded.
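As a toy illustration (this is not nginx code – it’s just the bucket arithmetic with nodelay semantics, assuming a burst of 10), here’s what happens when 15 requests arrive at the same instant;

```shell
#!/bin/sh
# Toy model of the rate limit bucket with nodelay: requests are accepted
# until the bucket is full, then rejected outright. burst=10 is assumed.
burst=10
level=0       # 'water' currently in the bucket
accepted=0
rejected=0

# 15 requests arrive at once - no time passes, so nothing drains out
for i in $(seq 1 15); do
    if [ "$level" -lt "$burst" ]; then
        level=$((level + 1))
        accepted=$((accepted + 1))
    else
        rejected=$((rejected + 1))
    fi
done

echo "accepted=$accepted rejected=$rejected"
```

The first 10 requests are accepted and the remaining 5 are rejected on the spot – in real nginx, the rejected ones get an error response rather than being queued.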

Configuring Rate Limiting

In the http context of your nginx.conf, insert a zone definition like this;

limit_req_zone $binary_remote_addr zone=myzone:10m rate=1r/m;

This defines a new 10MB zone named myzone, keyed on the binary form of each client’s remote address.  A zone that size will hold a large number of addresses, so it should be fine.  Limits recharge at a rate of one request per minute (which is very slow, but this is intentional, as you’ll see).

Then, let’s assume your app has a login page that you know is at /app/login, and the rest of the app is under /.  You could write some locations like this;

location = /app/login {
    limit_req zone=myzone burst=10 nodelay;

    # whatever you do to get nginx to forward to your app here
}

location / {
    # whatever you do to get nginx to forward to your app here
}
That way, calls to /app/login will be rate limited, but the rest of your app will not.

In the above example, calls to /app/login from a single IP will be rate limited such that they can make a burst of 10 calls without limits, but then are limited to an average rate of one per minute.

For something that’s a login page, this should be sufficient to allow legitimate logins (and likely with a mistyped password or two), but it’ll put a big tarpit on dictionary attacks and the like.
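Putting the pieces together, a minimal sketch of the whole thing might look like this (the backend address localhost:8080 is an assumption for illustration – substitute however you reach your app);

```nginx
# Sketch only - assumes your app listens on localhost:8080.
http {
    limit_req_zone $binary_remote_addr zone=myzone:10m rate=1r/m;

    server {
        listen 80;

        location = /app/login {
            limit_req zone=myzone burst=10 nodelay;
            proxy_pass http://localhost:8080;
        }

        location / {
            proxy_pass http://localhost:8080;
        }
    }
}
```

Rejected requests get an HTTP 503 by default; on newer nginx versions the limit_req_status directive lets you change that (to 429, say).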

Blog now uses HTTPS!

With the release of Let’s Encrypt to the public, I’ve reconfigured my blog server to use HTTPS.  Setup was pretty straightforward; I just followed the nginx setup guide.  Notably though, my highly restrictive nginx setup didn’t work with the rules they described, and I needed an extra fragment to get the Let’s Encrypt authentication challenge to pass.
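For the webroot method, the exception generally looks something like this (the webroot path /var/www/letsencrypt is an assumption – adjust it to wherever your challenge files land);

```nginx
# Sketch: allow the ACME HTTP-01 challenge through an otherwise
# restrictive config. /var/www/letsencrypt is an assumed webroot.
location ^~ /.well-known/acme-challenge/ {
    allow all;
    default_type text/plain;
    root /var/www/letsencrypt;
}
```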

The certs issued only last for 90 days, so you will need some way to renew them automatically; the guide above covers that.

Let’s see how it goes.

Dynamic DNS filtering for NGINX

Nginx is something that I’ve really come to appreciate since I moved my blog across to my own server.  However, it’s lacking a really great feature that I would love to have – the ability to dynamically update rules through DNS resolution.  I don’t have a static IP address for my home Internet connection, but I do use dynamic DNS.

In its default configuration, Nginx can’t do this (largely for performance reasons).  There are third-party modules available for Nginx that can, but I didn’t want to add one for such a small requirement.  So I made my own.

Nginx configurations revolve around include files.  What if we had a scripted process that generates an include file based on a DNS resolve and then reloads Nginx?  That’s exactly what I did.

Firstly, let’s assume the dynamic DNS record of your home connection is myhome.local.  Make a script in /etc/cron.daily or /etc/cron.hourly (depending on how often you want nginx to reload; don’t do it too often);

host myhome.local | grep "has address" | sed 's/.*has address //' | awk '{print "allow\t\t" $1 ";\t\t# Home IP" }' > /etc/nginx/conf.d/
service nginx reload > /dev/null 2>&1

Now, when that script runs, a file will be created at /etc/nginx/conf.d/ that looks like this;

allow ;                # Home IP
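To see the text transformation on its own, here’s the same pipeline wrapped in a function and fed a fake host(1) output line (to_allow_line is just a name for this demo, and 192.0.2.10 is an RFC 5737 documentation address, not my real IP);

```shell
#!/bin/sh
# The grep/sed/awk pipeline from the cron script, in isolation.
# to_allow_line is a demo-only helper name.
to_allow_line() {
    grep "has address" | sed 's/.*has address //' \
        | awk '{print "allow\t\t" $1 ";\t\t# Home IP"}'
}

# Successful resolution produces an allow directive:
echo "myhome.local has address 192.0.2.10" | to_allow_line

# Failed resolution produces nothing, so the generated file is blank
# and nginx denies all access - which is the safe default:
echo "Host myhome.local not found: 3(NXDOMAIN)" | to_allow_line
```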

From there, it’s a simple matter to make an Nginx rule to let things coming from that IP through, for example;

location /zomgitworks {
    include /etc/nginx/conf.d/;
    deny all;

    alias /var/www/html/zomgitworks;
}

And now when you call http://yournginxbox/zomgitworks, you will get a 200 OK and content when you’re on your home IP, or a 403 Forbidden if you’re not.  Notably, if the DNS name doesn’t resolve for some reason, the generated file is blank so it does the right thing anyway (it just denies all access).

Of course, if your home IP changes, the rule will break until the next time the cron job runs.  You can run it yourself, of course.  So this won’t be suitable for things that change IP a lot (you should use the module for that), but it should be fine for things that change IP infrequently.

Auto-Restarting a Service with Nagios

I haven’t worked out why yet, but this seems to be a common theme – the PHP/FastCGI service dies periodically, which causes outages with my blog (Nginx does not like it if the back end goes away).  So, I need a solution to fix this.  Enter Nagios!

Nagios is able to have customized event handlers.  Those event handlers can be set up to perform any action you want – such as restarting a service.  So, we’ll use Nagios to restart the service every time it dies.

First, create a script at /usr/local/lib64/nagios/plugins/eventhandlers/restart-fastcgi;

#!/bin/sh
# Restarts the php-fpm FastCGI service if it dies ($1=state, $2=type, $3=attempt)
case "$1" in
CRITICAL)
    case "$2" in
    SOFT)
        case "$3" in
        3)
            echo -n "Starting Fast-CGI service (3rd soft critical state)..."
            sudo /sbin/service php-fpm start | /bin/mail -s "[] FastCGI Restarted" root
            ;;
        esac
        ;;
    HARD)
        echo -n "Starting Fast-CGI service ..."
        sudo /sbin/service php-fpm start | /bin/mail -s "[] FastCGI Restarted" root
        ;;
    esac
    ;;
esac
exit 0

Ok, now we’ll need to configure sudoers to allow the nagios user to run ‘service php-fpm start‘ without credentials.  Add this to your sudoers with visudo;

Defaults:nagios         !requiretty,visiblepw
Cmnd_Alias      NAGIOS_START_PHPFPM = /sbin/service php-fpm start
nagios          ALL=(root)      NOPASSWD: NAGIOS_START_PHPFPM

Now, we’ll test that we can actually do it.  As root, do this;

su - nagios
/usr/local/lib64/nagios/plugins/eventhandlers/restart-fastcgi CRITICAL SOFT 3

You should then get an email sent to root saying it’s starting the service.  Obviously it won’t actually DO it (it’s already running).  Check in your /var/log/secure that the sudo command worked.  If so, great!  Now we need to set up Nagios itself to do the restart.

First, we’ll define a command to do the restart (note, I use $USER8$ to point to the local event handlers folder);

define command{
        command_name    restart-fastcgi
        command_line    $USER8$/restart-fastcgi $SERVICESTATE$ $SERVICESTATETYPE$ $SERVICEATTEMPT$
        }

Then we’ll add that event handler to the service check we already have in place for checking our FastCGI service;

define service{
        use                     generic-service
        host_name               yourhostnamehere
        service_description     PHP-FPM Service
        max_check_attempts      4
        event_handler           restart-fastcgi
        flap_detection_enabled  0
        check_command           check_local_procs!0:!1:!RSDT -C php-fpm
        }

After that, everything should work.  Don’t forget to restart Nagios.  Specifically, you want max_check_attempts to be at least one more than the limit you set in the script, since on the third SOFT failure it will try a restart – you probably don’t want Nagios yelling at you about a critical error (and going to a HARD state) before it’s tried a restart.  Then again, you might.  Change it as you want.

Now, we can be brave and manually stop the php-fpm service and watch Nagios to see if it restarts.  It should, after a few minutes.  You can tune the script above to do the restart sooner (on the first soft fail, if you like).

Good luck!

Protecting Apache with an nginx Reverse Proxy

Nginx is a multi-purpose web server / reverse proxy which is commonly used to front busy websites.  It can also be used in reverse proxy mode to help secure websites from unexpected vulnerabilities.  It also allows you to do some pretty cool stuff with redirection and can serve up content all on its own.  In this example, we’ll just be using nginx to protect specific content.

As is usual on this blog, I’m assuming you’re running CentOS 6.  Installation methods vary for other distributions.


The planned architecture for this project is such that nginx triages requests from the Internet, and then translates and passes those requests to the local Apache web server as required.  Therefore, we have the following setup;

  • nginx listens on ports 80 and 443, on the external-facing interface (ie, the one that Internet users will connect to).  Nginx terminates any SSL connectivity at its interface.
  • Apache only listens on localhost, port 80.  Specifically, this means that Apache will never directly serve content to anything outside of this machine.
  • Apache can be configured with virtual hosts if this is desirable, but this is unnecessary because we can handle most of the URL translation and virtual hosting tasks internally in nginx.
  • nginx will reverse proxy connections from the Internet into the local Apache web server.  This means that nginx needs some way to tell the Apache web server what the real IP address of incoming connections is.  It does this with the X-Forwarded-For header.
  • Your monitoring solution needs some way to cleanly identify whether the thing listening on port 80 is nginx or if it’s Apache, in case there’s a misconfiguration.

Installing nginx

Installation of nginx is very easy.  You’ll need two main components – nginx itself, and mod_rpaf for Apache.  You will need the EPEL repository set up in order to install nginx.

yum install nginx

Once you’ve got nginx installed, don’t start it.  We need to configure a lot of things first.

Installing mod_rpaf for Apache

In order for your Apache access logs to look normal (ie, not have everything coming from localhost), you’ll need to set up mod_rpaf.  mod_rpaf converts the X-Forwarded-For header that was passed in by nginx into what looks like a normal source address for Apache.

In order to build mod_rpaf, you’ll need to do a few things.  Unfortunately there’s no tidy RPM package that I’ve found for CentOS 6, so you’ll have to download the mod_rpaf-0.6 source tarball and build it yourself.

yum install -y httpd-devel 
tar xvfz mod_rpaf-0.6.tar.gz
cd mod_rpaf-0.6
sed -ie 's/apxs2/apxs/' Makefile
make rpaf-2.0
make install-2.0

After that’s done, you can create a /etc/httpd/conf.d/mod_rpaf.conf with the following content to enable mod_rpaf and configure it.

LoadModule rpaf_module modules/

<IfModule mod_rpaf-2.0.c>
RPAFenable On
RPAFsethostname On
# Trust forwarded headers only from the local nginx proxy
RPAFproxy_ips 127.0.0.1
</IfModule>

Configuring Apache

Configuration changes to Apache are pretty straightforward.  Besides the mod_rpaf config change as above, we need to edit /etc/httpd/conf/httpd.conf and change the listen address like this;

Listen 127.0.0.1:80
This will make sure that Apache only listens on port 80 on the loopback interface, and does not listen on the external interfaces.  Restart Apache with service httpd restart and then try connecting to your external interface (substitute your real external IP);

telnet <your-external-ip> 80

You should get a connection refused, which tells you that nothing is listening on port 80 on that interface.  If you wanted to, you could put Apache on another port (81, say), but I prefer to make it only listen to localhost.

Configuring nginx

First, some assumptions about the configuration.

  • You want nginx to listen on port 80 only (we’ll talk about SSL termination later).
  • You want nginx to only serve requests which have a valid Host header (this is a good idea, since it’ll block most exploit bots, which hit you by IP address with no Host header).
  • You want the URL to redirect through to your local Apache instance
  • You want the URL to redirect through to your Apache instance
  • You want the URL to redirect to http://localhost/blog on your Apache instance
  • You want the URL to redirect through to Apache, but only for specific IP addresses
  • You want a URL to return a 200 if nginx is working
  • nginx will listen on the IP address of the external-facing interface.

In /etc/nginx/conf.d, move all the files there somewhere else.  We don’t want them.  Now, create a new config.conf, and let’s get working.

Defining the listeners

First, the listener.  We define two listeners on port 80 and include a set of locations.  We’ll also define a default listener which just rejects everything.

server {
  include /etc/nginx/conf.d/;
}

server {
  include /etc/nginx/conf.d/;
}

server {
  return 444;
}

It’s perfectly OK to have two listeners defined on the one port, as long as they have different server_names.  In this case, each Host header serves whatever locations are in its corresponding include file.  Lacking a valid Host header gets an HTTP 444 return – nginx’s special code for closing the connection without sending a response.
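For reference, with placeholder names filled in (example.com and blog.example.com are stand-ins for illustration, as are the include filenames), the three server blocks would look something like this;

```nginx
# Placeholder sketch - substitute your real hostnames and include files.
server {
    listen 80;
    server_name example.com;
    include /etc/nginx/conf.d/example.com.conf;
}

server {
    listen 80;
    server_name blog.example.com;
    include /etc/nginx/conf.d/blog.example.com.conf;
}

# Anything that doesn't match a server_name above lands here
server {
    listen 80 default_server;
    return 444;
}
```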

Defining locations for the application site

Now, we’ll create a new file and define some locations in it.  We’ll have a catch-all rule down the bottom to reject anything else not specifically allowed.

# Return a HTTP 200 if /nginx-works is called
location /nginx-works {
  return 200;
}

# Pass through /image.gif (exactly!) to Apache
location = /image.gif {
  proxy_pass http://localhost;
  include /etc/nginx/conf.d/;
}

# Pass through /application (and sub-URIs) to Apache
location /application {
  proxy_pass http://localhost;
  include /etc/nginx/conf.d/;
}

# Pass through /monitoring to Apache for specific IPs only
location /monitoring {
  # allow <monitoring-ip>; directives go here, before the deny
  deny all;
  proxy_pass http://localhost;
  include /etc/nginx/conf.d/;
}

# Deny everything else
location / {
  return 444;
}

With that defined, we have all the locations we want to pass through, and everything else is blocked.  In the proxy_pass sections, the request is reverse proxied to localhost (ie, Apache), with a configuration to be specified next.

Defining reverse proxy settings

Now, we need to define the include file referenced by the locations above, which holds the default reverse proxy configuration we want to use.  Use something like this;

proxy_redirect          off;
proxy_set_header        Host            $host;
proxy_set_header        X-Real-IP       $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
proxy_connect_timeout   90;
proxy_send_timeout      90;
proxy_read_timeout      90;
proxy_buffers           32 4k;

What this will do is set up various configuration settings for nginx.  In particular, note the three headers we’re configuring.  First, we ensure that the Host header is set to the same host that the user originally requested.  Secondly, we make sure that X-Real-IP is set to the IP address that the user came from, and then we set X-Forwarded-For correctly.  The latter two allow mod_rpaf to correctly interpret the IP address that the user originally came from.

If you don’t do this, all your Apache access logs will appear to have all accesses coming from localhost, since nginx is a reverse proxy.  That isn’t ideal.

Defining locations for the blog site

We make a new include file for the blog’s server block, and then make it look like this;

location / {
  # Doing something strange like trying to fetch /blog/blog/* results in just /blog/*
  rewrite ^/blog$ / redirect;
  rewrite ^/blog(.*)$ $1 redirect;

  # Otherwise just add /blog to the front and pass to the backend
  rewrite ^(.*)$ /blog$1 break;

  proxy_pass http://localhost;
  include /etc/nginx/conf.d/;
}

This will allow your blog, which may be at http://localhost/blog on the backend, to work properly when people access it at the root of the blog site.

Testing it out

Start nginx like this;

chkconfig nginx on
service nginx start

Now, assuming it started OK (fix it if it didn’t), you should be able to test various URL fetches with curl to see what happens.  Try from a different machine, as follows;

# These should work
curl -v
curl -v
curl -v

# These should blow up
curl -v
curl -v

# This should redirect a few times and then wind up at your blog
curl -v -L

Try out various things.  Remember to test things that should NOT work as well as things that should.

Good luck!