New VPS Provider

Well, cutting off from the old VPS provider and onto a new one was remarkably painless.  The new provider is Iniz; I picked up a plan for GBP 4.50 a month, which includes 2Gb of RAM, 4 vCPU, 1Gb of swap, 100Gb of disk, a 1Gb/s (max) pipe, and 1Tb of monthly bandwidth.

Considering that a VPS runs a shared kernel (and therefore a shared system cache!), the 2Gb memory allocation is actually quite significant, since it’s only used by the applications you run.  The deal seems pretty good; we’ll have to see how they go for uptime, networking and so on.

There’s a (minor!) catch though.  Seems that Iniz doesn’t supply reverse DNS functionality, whereas BitAccel did.  Not a huge deal, but having reverse DNS would have been nice.

Maintenance in progress (sigh)

Turns out my VPS provider has had a major outage, which bodes pretty badly considering it’s my first month with them.  Service has been out for 24 hours now, with no real updates and no end in sight.  This comes off the back of a week’s worth of random small outages and severe packet loss.

Moving to a new provider.  Damn glad I set up backups, since I can’t even get at the old VPS’s console right now.

Capturing MySQL Backups with BackupPC

BackupPC with its default configuration will catch the MySQL databases, but I don’t consider that kind of backup to be particularly reliable, since copying the raw data files while the server is running can easily catch them mid-write and leave you with an inconsistent database.  Much safer is to do a mysqldump of any relevant databases and then capture the dump with BackupPC.

Firstly, you’ll need to create a special user on your MySQL install who will be used to do the backups.  Do this in mysql as root.  We’ll assume the database you want to back up is your_database, and the backup user will be backup.

GRANT USAGE ON *.* TO backup@localhost IDENTIFIED BY 'passwordgoeshere';
GRANT SELECT,LOCK TABLES ON your_database.* to backup@localhost;
FLUSH PRIVILEGES;
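
If you want to sanity-check the new account before moving on, you can list its grants while you’re still in mysql as root;

SHOW GRANTS FOR 'backup'@'localhost';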

At the end of that, you’ll have a new user.  Now, in the home directory of the account which will run mysqldump (we’ll assume this is backup), create a file .my.cnf by running the following commands;

sudo -u backup bash
cd
cat > ~/.my.cnf << EOF
[mysqldump]
user = backup
password = passwordgoeshere
EOF
chmod og= .my.cnf

This will let your backup user run mysqldump.  Now, we’ll test it;

sudo -u backup bash
cd
mysqldump -u backup -h localhost your_database | gzip > test.sql.gz

You should wind up with a test.sql.gz containing a dump of all the commands necessary to rebuild that database.  Now we create a new script, as the backup user, to manage our backups;

sudo -u backup bash
cd
mkdir your_database
[Create the mysql_backup script below]
chmod u+x ~/mysql_backup

The script you want to create is;

#!/bin/bash
DBNAME=your_database
HOME=/home/backup
FILENAME=$HOME/$DBNAME/`date "+%Y%m%d.%H%M%S"`.sql.gz
mysqldump -u backup -h localhost $DBNAME | gzip > $FILENAME
chmod og= $FILENAME
find $HOME/$DBNAME -name "*.sql.gz" | sort -r | tail -n +8 | xargs rm -f

That will make a backup of your database and then throw it into the folder we created.  It will keep the 7 latest backups, and delete any older ones.  You can then back that up with BackupPC and get a consistent backup.
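
For completeness, restoring one of these dumps is just a matter of feeding it back through mysql.  Something like this (the filename is only an example of what the script generates);

gunzip -c /home/backup/your_database/20130101.000000.sql.gz | mysql -u root -p your_database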

The last step is to add it to your crontab;

crontab -u backup -e
0 0 * * * /home/backup/mysql_backup
[Save and exit the editor]

And that will make the backup run at midnight each night.  Enjoy!

Remote backing up a Linux host over the ‘net with BackupPC

I’ve recently moved my blog across to a VPS service.  A VPS is akin to a chroot jail, where you have your own server, but you’re running a common Linux kernel.  This also means you don’t really have your own filesystem – so for the sake of paranoia I don’t want the VPS to have any privileged data on it, and I don’t want the VPS to be able to remote back into my main infrastructure to back itself up.

Enter SSH+rsync+BackupPC.  Putting these together allows you to back up a remote host that has SSH access in a secure way.  We’ll discuss how to get backups running through BackupPC to back up a remote host through SSH.

Step 1 – Connectivity.

We’ll assume that you have a working SSH connection, and you have BackupPC installed on the backup host (we’ll call this SERVER).  On the client machine (the one you want to back up), you’ll need to create a new user account;

adduser backup -c 'Backup SSH Account'

Don’t set a password on this account; you will never log in using it, ever.  Now, on the server machine, you will need to switch over to the backuppc account and create an appropriate SSH key;

sudo -u backuppc bash
ssh-keygen

This will create a new file /var/lib/BackupPC/.ssh/id_rsa.pub, which you’ll want to grab the contents of.  On the client, SUDO to the backup account and do this;

sudo -u backup bash
cd
mkdir .ssh
cat >> .ssh/authorized_keys
PASTE IN YOUR ID_RSA.PUB FILE HERE AND THEN PRESS CTRL-D
chmod -R og= .ssh

Now you’ve got the key in place on both sides of the connection.  Next, on the server machine, switch to the backuppc account and try to ssh to your client;

sudo -u backuppc bash
ssh backup@client.domain.com

You should be prompted to accept the host key (do it!), and then you should see a prompt as your backup user on the client machine.  Exit and do it again, and there should be no prompting.  It’s critical you see no prompts, since any prompting will cause weird breakages in BackupPC.  Make sure that the hostname you ssh in as is the exact host name you will use when configuring BackupPC.
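
A quick way to prove there’ll be no prompting is to force ssh into batch mode, which makes it fail rather than ask questions; if this prints OK, BackupPC will be happy;

sudo -u backuppc bash
ssh -o BatchMode=yes backup@client.domain.com true && echo OK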

Connectivity set up.  Now we’ll configure SUDO on the client to work properly for the backup user.

SUDO Access

On the client server, add the following lines to your /etc/sudoers file (use visudo for this);

# Allows backup to take backups only
#backup       ALL=NOPASSWD: /usr/bin/rsync --server --sender *

# Allows backup to do backups and restores
backup        ALL=NOPASSWD: /usr/bin/rsync --server *

Note that in this fragment, the backup user will be able to do both backups and restores.  This may be dangerous, since a crafty attacker who gets access to the backup account can push arbitrary files up onto the client.  Use your discretion.

Lastly, you’ll have to edit your /etc/sudoers file and change the line that says

Defaults requiretty

To say

Defaults !requiretty

If you don’t do that, sudo will refuse to run without a tty (ie, the non-interactive sessions backuppc uses can’t work).  Once that’s in place, you’ll need to run it once to get past any first-time sudo prompting, like this (from the server);

sudo -u backuppc bash
ssh backup@client.domain.com
sudo /usr/bin/rsync --server --help

If you get prompted, run it again and verify that all you see is the rsync help page.  If that’s all you see, you’re ready to go.  Next we configure BackupPC.

BackupPC Configuration

Define a new host in /etc/BackupPC/hosts like you usually would.  For the config file for that client, you’ll want something like this;

$Conf{BackupFilesExclude} = {
 '*' => [
    '/cgroup',
    '/data',
    '/dev',
    '/lost+found',
    '/misc',
    '/mnt',
    '/net',
    '/proc',
    '/selinux',
    '/sys',
    '/tmp',
    '/var/tmp',
    '/var/cache/yum'
  ]
};
$Conf{ClientNameAlias} = 'client.domain.com';
$Conf{PingMaxMsec} = '1000';
$Conf{XferMethod} = 'rsync';
$Conf{RsyncClientCmd} = '$sshPath -q -x -l backup $host /usr/bin/sudo $rsyncPath $argList+ $shareName';
$Conf{RsyncClientRestoreCmd} = '$sshPath -q -x -l backup $host /usr/bin/sudo $rsyncPath $argList+ $shareName';

The customized rsync commands get BackupPC to run rsync through an SSH tunnel.  Now, run your backup, and hope!

Gah!  It broke!

The most likely thing you’ll see will be something about Pipe broken or invalid data or some such.  That usually means that the ssh tunnel got prompted somewhere, so no sensible rsync data came through.  Re-run the SSH connectivity process, and be sure you are specifying exactly the same name in the ssh command as you have set for $Conf{ClientNameAlias} above.
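
A useful sanity check is to run the whole chain by hand from the server, using the same options BackupPC will use, and confirm that the only output is the rsync help text;

sudo -u backuppc bash
ssh -o BatchMode=yes -q -x -l backup client.domain.com /usr/bin/sudo /usr/bin/rsync --server --help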

Good luck.

Offsite Backups with Amazon S3

Recently, I’ve been worrying about how I have a lot of documentation in my home, and some pieces of it (receipts and other such proof of purchase) would be vital in the event of a disaster such as a house fire, God forbid.  I have a microserver backing up all my electronic documentation, but there’s nothing that’s offsite in case of such a thing.

What I really need is some way to get my critical documentation and other files out of the house and offsite.  It can just be a simple mirror, since the only purpose of it is for disaster recovery, in the event of the Microserver being unusable (or a lump of slag).  And since it’s very, very likely that such a thing will never be used, it needs to be really cheap.

Enter Amazon Glacier.  Glacier is Amazon’s off-site backup and archival solution.  Dirt cheap storage, but it can cost a fair bit to restore.  I’m not expecting to ever need a restore, so that doesn’t fuss me.  But there are no good APIs for it yet, and I’m impatient.  Amazon plans on integrating Glacier into Amazon S3 as part of the lifecycle management.  This means that for me, S3 is probably the right thing to look at, since I don’t have all that much data to store and I can then use existing commands.

Signing up with Amazon Web Services

Go to aws.amazon.com and sign up.  You’ll want to pay very careful attention to the costings.  Very careful.  As of now, storage for the first Tb in the US Standard zone costs US$0.093 per Gb per month at the Reduced Redundancy rate, which works out to US$22.32 a year for 20Gb.  Glacier pricing will be dramatically cheaper again.

In addition to this, PUT, COPY, POST and LIST requests cost $0.01 per 1,000.  This means that backing up enormous numbers of tiny little files may cost you more than you think because of the PUT requests; an initial upload of 10,000 small files, for instance, means 10,000 PUTs, or about $0.10 on top of the storage itself.  GET requests are $0.01 per 10,000, but for this purpose they’re not really relevant.

Data transfer IN to S3 has no charge.  Data transfer OUT of S3 is $0.12 per Gb after the first Gb per month up to the first 10Tb.  For this purpose, free IN transfers are great, and the cost of the OUT transfers doesn’t matter much since you won’t be pulling data out from it all the time.

Once you’re in, you’ll want to configure notifications using CloudWatch.

Configuring an S3 Backup Upload User

In the AWS Console, click Services, then IAM.  Click Users, then Create New Users.  Follow the prompts and then make sure you copy down the Access Key Id and Secret Access Key for your new user.   What we’re doing here is creating a user who only has access to S3.

Next, click Groups, Create New Group, and follow the prompts.  You want to create a group using the Amazon S3 Full Access template.  Then, go back to the Users tab, right click on your new user, and assign them to the new group you created.

This user now has full access to the S3 component of your AWS account, but nothing else.

Installing and configuring S3CMD

I run CentOS and already have the EPEL repositories set up, so I just did;

sudo su -
yum install s3cmd
s3cmd --configure

Follow the prompts when configuring s3cmd.  You will want to use encryption, and you’ll want to use HTTPS.  Note that while s3cmd encryption is not supported with the sync command, it is supported with put, so it’s handy for other things.  This will create a config for s3 for the current user, so you should do it as root.

Now, you need to create a bucket to store your data in.  Do something like this;

s3cmd mb s3://OffsiteMirror-your.domainname.here
s3cmd ls

You should see the name of the bucket you just created printed to the console.  You’ve now got a target to store your offsite mirror in.
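
As an aside, since encryption does work with put, you can push individually encrypted files into the same bucket when you need to; something along these lines (the filename and key are just examples);

s3cmd put --encrypt --rr secrets.tar.gz s3://OffsiteMirror-your.domainname.here/secure/secrets.tar.gz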

Plan your Sync

By default, S3 storage is private.  But it can be made public.  For this reason, create multiple buckets so your private and public data is separate, and don’t make your offsite mirror public!

Keep in mind that the data you’re about to mirror will be sent to S3 and will sit on Amazon’s servers unencrypted.  Plan your behaviour accordingly.

Lastly, plan out what you want to sync.  We’ll assume you’re syncing the folder /data/ImportantStuff to S3.

Syncing a Directory

Run this (or put it in a CRON job or whatever);

s3cmd sync --rr --delete /data/ImportantStuff/ s3://OffsiteMirror-your.domainname.here/data/ImportantStuff/

Now, the contents of that folder will be synced up to your S3 bucket, and the folder structure will be preserved inside that bucket.  The --delete option means that anything under that prefix in the bucket which no longer exists in the local /data/ImportantStuff/ will be deleted, so be aware that deletions propagate!  The --rr option enforces the use of Reduced Redundancy storage on S3 for reduced costs.
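
If you’d rather not run that by hand, a nightly cron entry will do the job.  Something like this (the schedule is arbitrary, and I’m assuming s3cmd lives in /usr/bin);

crontab -e
30 2 * * * /usr/bin/s3cmd sync --rr --delete /data/ImportantStuff/ s3://OffsiteMirror-your.domainname.here/data/ImportantStuff/
[Save and exit the editor]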

Conclusions

So far, for what I need, S3 seems to be a workable solution.  I’ve also got some vital security data being pushed up to S3, but being encrypted first.  Once Glacier integration comes around, I’ll be sorted, and can have most of this stuff pushed over to Glacier.

Once I’m more aware of how it’s going, I’ll probably push all the family photos and stuff like that across to it to make sure we have an offsite clone in case of a big disaster.

BackupPC – Enabling on Windows 7

By default, Windows 7 will not allow access to the C$ administrative share, so you’ll have trouble getting BackupPC to work.  To solve this, run up Registry Editor and set the following registry key;

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\LocalAccountTokenFilterPolicy

To the REG_DWORD (32-bit) value of 1.
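
If you’d rather do it from an elevated command prompt instead of Registry Editor, something like this should set the same value;

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f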

Source:  http://www.howtogeek.com/howto/windows-vista/enable-mapping-to-hostnamec-share-on-windows-vista/

BackupPC for MythTV

I was having some trouble getting BackupPC to back up my MythTV’s operating system to my Microserver.  I kept getting aborts all the time from broken pipes and such.

Got fatal error during xfer (aborted by signal=PIPE)

Anyway, the solution turns out to have been here.  Disable TCP segmentation offloading on the client end, and now it all works!
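
For reference, turning TSO off with ethtool looks like the below (I’m assuming the client’s interface is eth0, and note it won’t persist across reboots unless you put it in your network scripts);

sudo ethtool -K eth0 tso off
ethtool -k eth0 | grep tcp-segmentation-offload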

Backup via rsync

Since my Microserver keeled over and died and I didn’t have a good backup in place, I was in search of a quick and easy way to back up the USB key on a regular basis so I could recover easily.

The first thing I did was to run a daily cron job to tar up the whole USB key and shove it onto rust somewhere.  This went OK, but it wasn’t very elegant.

So, enter the below script.  I got this off a work colleague of mine, and adapted it slightly to suit my purposes…

#!/bin/sh
#
# Script adapted from backup script written by David Monro
#
SCRIPTDIR=/data/backups
BACKUPDIR=/data/backups
SOURCE="/"
COMPLETE=yes
clonemode=no
while getopts "d:c" opt
do
        case $opt in
                d) date=$OPTARG
                ;;
                c) clonemode=yes
                ;;
        esac
done

echo $date
echo $clonemode
# In clone mode we copy the latest backup rather than the live filesystem,
# and never roll the "current" link forward.
if [ "$clonemode" = "yes" ]
then
        SOURCE="$BACKUPDIR/current/"
        COMPLETE=no
fi

# Build the new backup in "incomplete", hardlinking unchanged files
# against the previous backup via --link-dest.
mkdir -p $BACKUPDIR/incomplete \
  && cd $BACKUPDIR \
  && rsync -av --numeric-ids --delete \
    --exclude-from=$SCRIPTDIR/excludelist \
    --link-dest=$BACKUPDIR/current/ \
    $SOURCE $BACKUPDIR/incomplete/ \
    || COMPLETE=no
if [ "$COMPLETE" = "yes" ]
then
    # Rename the finished backup to a timestamp and point "current" at it.
    date=`date "+%Y%m%d.%H%M%S"`
    echo "completing - moving current link"
    mv $BACKUPDIR/incomplete $BACKUPDIR/$date \
      && rm -f $BACKUPDIR/current \
      && ln -s $date $BACKUPDIR/current
else
    echo "not renaming or linking to \"current\""
fi

What this will do is generate a backup named for the current date and time in the specified location.  The script finds the latest current backup, and then generates a new backup hardlinked against the previous one, saving a LOT of disk space.  Each backup then only takes up space for the changes made since the last one.

After running this for a while (about a month), I’ve now got a 4.6Gb backups folder with a 1.7Gb base backup, so a month’s worth of daily backups has only taken up double the space of a single backup.  Note however that there have been some inefficiencies around updatedb that have blown more disk space than otherwise should be the case.

In order to check the size of each backup, just do a "du -sch *" in the folder your backups are in.

A very important safety tip.  Since each file is generated through hardlinks, do not under any circumstances try to edit files in the backups.  Let’s say you haven’t changed /etc/passwd in a long time.  While it looks like you have 30 copies of it, you actually only have one (ie, the hardlink).

If you go and edit /etc/passwd, you will effectively change it on every backup at once.  So don’t do that.  It’s safe to just flat-out delete a backup; you won’t trash anything.  Just don’t edit things.
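
For reference, the excludelist the script reads from $SCRIPTDIR is just a plain rsync exclude file, one pattern per line.  A minimal one might look something like the below (the paths are only examples, but make sure the backup destination itself is excluded, or the backup will recurse into itself);

/proc/*
/sys/*
/dev/*
/tmp/*
/var/tmp/*
/mnt/*
/media/*
/data/backups/*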

HP Microserver – UpdateDB Bloat

A brief discovery…  If you use a script to back up your Microserver to a mount point somewhere in your filesystem, then your updatedb database will keep growing and growing without bound.  This is bad if you’re using a USB key for your root filesystem (I went from writing a 1.5Mb updatedb database once a day to writing a 60Mb one before I caught it).

The solution to this is to edit /etc/updatedb.conf and add the path to where your backups are stored to the PRUNEPATHS option.
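
As an illustration, assuming your backups live under /data/backups, the line in /etc/updatedb.conf ends up looking something like this (the existing entries will vary by distro);

PRUNEPATHS = "/afs /media /mnt /net /tmp /udev /var/spool/cups /var/tmp /data/backups"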