ENVI-R – Putting the last pieces together

In my last post about the ENVI-R setup, I discussed setting up publish-envir.pl to parse raw serial data from the ENVI-R and post it out to MQTT channels.  In this post, we’ll discuss how to get THTTPD and MRTG going, in order to actually get some useful graphs out there.

Glue Scripts

MRTG can’t use data directly from MQTT, so we need a fairly simple Perl script to pull the MQTT data out and hand it to MRTG in a usable format.  MRTG expects any external script it calls to reply in a fixed format.

A response must have four and exactly four lines.  So, our glue script has to haul in data from MQTT and then output four lines for MRTG to use.
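To make that concrete, here’s a minimal stand-in for such a script – the values and target name below are invented for illustration, not from my actual setup:

```shell
# Minimal stand-in for an MRTG external target command (illustrative values).
mrtg_reply() {
  echo "2450"        # line 1: current value of the first variable
  echo "180"         # line 2: current value of the second variable
  echo "0"           # line 3: uptime of the monitored device (a dummy is fine)
  echo "envir-main"  # line 4: name of the target
}
mrtg_reply
```

MRTG runs whatever command is named in the Target[] directive and reads exactly those four lines back.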

Have a read of mrtg-envir.pl at my GoogleCode repository.  The code there should be fairly self-explanatory.  The only complex bit is CalculateReading, which is a horrible bit of code that tokenizes and parses your input so you can do things like “return the first sensor subtract the second sensor” with “0.1-0.2” and stuff like that.
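To illustrate what an expression like “0.1-0.2” ends up computing (the actual tokenizing lives in Perl inside mrtg-envir.pl – the readings below are invented):

```shell
# Invented sample readings: clamp 1 sees the whole house, clamp 2 just the lights.
main=2450    # watts from sensor 0, clamp 1 ("0.1")
lights=180   # watts from sensor 0, clamp 2 ("0.2")
# "0.1-0.2" returns everything except the lighting circuit:
echo $(( main - lights ))
```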

It’s important to note that the script will wait until MQTT publishes a message.  This means that if your publish-envir.pl script isn’t running, or the ENVI-R isn’t working or something, then MRTG will also be held up.
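If that worries you, one hedged workaround (assuming GNU coreutils’ timeout is available on your box) is to wrap the glue script in timeout(1), so a stalled MQTT feed can’t hang the cron run forever.  timeout exits with status 124 when it has to kill the command:

```shell
# Stand-in demo: "sleep 10" plays the part of a glue script stuck waiting on MQTT.
rc=0
timeout 1 sleep 10 || rc=$?
echo "exit status: $rc"   # 124 means timeout(1) had to kill the command
```

You’d then point the relevant Target[] at the wrapped command rather than the script directly.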


THTTPD Setup

Setting up THTTPD is very, very easy.  On Ubuntu, just do this;

sudo aptitude install thttpd
sudo service thttpd start

And that’s about it.  By default, it’ll share out stuff in /var/www, and has CGI turned off.  You don’t need CGI in the short term.  By default it’ll also have chroot enabled and will be reasonably secure.  All it can really do is serve pages, but it has a tiny memory footprint and is very fast.  And for the basic cron-driven MRTG setup, that’s all you need.

In order to test, just create something like /var/www/index.html and put the text “Hello world” into it, and make sure you can fetch it.

MRTG Setup

MRTG is also very straightforward to set up.  Just sudo aptitude install mrtg to install it.  By default, MRTG won’t run as a daemon; it’ll run as a cron job executed every 5 minutes.

A warning about MRTG: it isn’t very scalable – you’ll want to use rrdtool or Cacti if you want something big.  But for something simple, easy to set up, and only handling a few samples, MRTG does the job quite nicely.

Have a look at my example mrtg.cfg at my GoogleCode repository.  Drop that into /etc/mrtg.cfg and run the following commands and you’re sorted;

sudo su -
mkdir /var/www/mrtg
indexmaker --output=/var/www/mrtg/index.html /etc/mrtg.cfg
env LANG=C /usr/bin/mrtg /etc/mrtg.cfg


If you now look at the contents of /var/www/mrtg, you should see a number of files, images and the like.  Pop open your web browser, browse to http://<your server>/mrtg and you should see some graphs!

Final Notes

One thing you’ll notice is that the publish script accumulates a 5-minute moving average, which happily lines up nicely with the sample interval for MRTG.  But the averages for the weekly and monthly graphs are calculated by MRTG, and will often miss out spikes in usage which you may want to see.  Additionally, the linear vertical scale can make low levels of base load hard to see.  Consider using a logarithmic vertical scale.
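If I remember the MRTG reference correctly, that’s the logscale entry in the per-target Options keyword – something like the line below, though verify against your MRTG version’s documentation (the target name here is illustrative):

```
Options[envir-main]: growright, logscale
```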

You’ll also notice in my GoogleCode repository a very simple CGI script for dumping out the instantaneous data from envir-last.  You’ll need to enable CGI on THTTPD for this to work.

All in all, the setup was a good bit of fun and some problem solving, and now I’m collecting some useful electricity data.  Some observations are that careless use of lighting burns a surprising amount of power, and that the aquarium heater for our freshwater turtle was turned up too high and was blowing a lot of power as well.

I’ll need to do some baseload analysis of devices on standby, because I still feel that the base load power use is way too high.

ENVI-R – Data Broker Setup

Continuing my previous post on this topic, I realized I needed a message broker in order to permit me to have a daemon polling the ENVI-R for data, processing it, and then making it available in a palatable form for various scripts and for MRTG.  It would have been possible for me to simply have a cron job dump a summary table to disk every few seconds, but since my Linux box uses a CompactFlash card for disk storage, that would quickly kill the flash chips.  I needed something that held the messages in memory only.  That’s where a MQTT message broker steps in.

One of my colleagues at work put me on to MQTT, and in particular he put me onto IBM’s Really Small Message Broker (RSMB).

RSMB is a really tiny implementation of an MQTT-compliant message broker.  When they say tiny, they aren’t kidding – the broker is a 78k executable which requires an included 73k library (a config file is optional, but a good idea).

Setup Notes

Given that RSMB is such a tiny thing, setting it up is very easy.  However, I took a few extra steps to make it a little more secure and integrated;

  • Dump broker (the executable) into /usr/local/bin.
  • Do the same with stdoutsub and stdinpub.
  • Copy libmqttv3c.so to /lib as libmqttv3c-1.2.0.so and then softlink libmqttv3c.so to it.  This conserves the normal sanity with libraries that ld.so expects, and it should be a bit more maintainable.  Probably not the best way to do things, but it works.
  • Create a new user with no special rights named broker.  After creation, edit /etc/shadow (there’s probably a better way to do this) and change the second field on the broker line to *.  That ensures nobody can log in using the account.
  • Create a config file in /usr/local/etc/broker.cfg (config text follows).
  • Create a new directory in /var/local/broker .  chown that directory to broker:broker.
  • Create a new /etc/init.d/broker script and add it to startup with update-rc.d .

The config file contents are;

# config file for IBM’s RSMB (Really Small Message Broker)

port 1883
max_inflight_messages 500
max_queued_messages 3600
persistence_location /var/local/broker/

The init.d script is;

#!/bin/sh
### BEGIN INIT INFO
# Provides:          broker
# Required-Start:    $network $local_fs
# Required-Stop:     $network $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: start really small message broker (broker)
### END INIT INFO

cd /usr/local/bin
su -c "nohup /usr/local/bin/broker /usr/local/etc/broker.cfg >> /dev/null &" broker

Testing the Install

Broker comes with a couple of other binaries, which can be used to publish messages to a channel, and also to view messages on a channel.  Testing broker is quite easy.

  • Start up broker either manually (good idea the first time) or with the init.d script.
  • In one window, run stdoutsub example to start listening on channel “example”.
  • In another window, run stdinpub example and then start typing in lines.  You should see each “message” you enter appear on the stdoutsub window.
  • If you do, great.  If you don’t, verify that the ports are all OK and opened on your firewall (if any).

Next Steps

Right, getting the broker up provides the underpinning that gets used by the rest of the software I’m discussing.

From CPAN, you’ll need to go and install WebSphere::MQTT::Client – that’s the CPAN module for interfacing with IBM WebSphere’s MQTT implementation.  But because it speaks standard MQTT, it’ll work just fine with RSMB.  That module will be used fairly heavily in the other scripts.  You’ll also need Device::SerialPort and Clone for other parts of the scripts.

Followup posts will be outlining how to set up THTTPD and MRTG, and then tying it all together with Perl and Bash glue.

ENVI-R & MRTG – Overview

I recently had an ENVI-R wireless power monitor installed, and I set it up to record data, using MRTG, to an always-on Ubuntu Linux box I have sitting around.  The setup required a fair bit of scripting in Perl and Bash, plus a couple of extra bits of software.

This post is the first in a series outlining just how I set up the monitor and what was required.

The ENVI-R and accessories

I bought the ENVI-R unit in a pack which included the transmitter, a receiver LCD display, power supply and one current clamp.  I also bought an additional two current clamps and the USB connector cable for the receiver.

The USB connector is actually a specially wired Prolific PL2303 USB to RS232 serial port adapter, with an RJ45 connector on the end.  I don’t know the exact pinout, but that’s not required.  Anyway, the adapter “just works” with Ubuntu 10.10.

Each transmitter unit can handle up to three current clamps, and one receiver can handle up to ~10 transmitters.  I didn’t want to buy additional transmitters, and there isn’t a huge amount of space in my switchboard, so I just got an additional two current clamps bringing me to three.  That allows for the monitoring of up to three loads.

Clamp Installation

WARNING – Mains voltage can be lethal.  In addition, tampering with your switchboard without an electrician’s license may be illegal, as well as dangerous.  Get an electrician to install the clamps for you.

The clamps themselves are no-contact types, and go around the active wire of whatever feed you want to monitor.  They should be clamped entirely around the wire so that the ferrite core of the clamp encircles the active wire.  The clamp operates by picking up the EM field around the active wire as current passes.

Be aware that the clamp cannot identify the direction that current is flowing.  This isn’t a problem in the case of a house like mine that has no solar power, but if you have solar power you’ll want to install a clamp onto the feed coming from the solar cells, and then put your main power clamp after the location where the solar power feed connects to your main feed.

In my case, my electric hot water is on a different circuit from the main power, so the three clamps were connected as follows;

  • Main power:  Clamp connected after the main breaker, between breaker and switchboard.  Registers all power going into the house (except hot water).
  • Lights:  Clamp connected after breaker for the lights (I only have one).
  • Hot water:  Clamp connected after breaker for the hot water.

Be aware as well that if you have multiple clamps connected to one transmitter, the ENVI-R assumes you’re measuring 3-phase power, and it therefore just adds together all the currents.  In the case of mains + hot water that’s OK, but since my “main” clamp is actually registering the sum of lights and everything else, the display will over-read whenever the lights are on.  For that reason, if you’re setting up like me, don’t trust the display; use the serial data feed.

ENVI-R Receiver Installation

The receiver is pretty straightforward: it just plugs in, and then the USB serial cable is hooked up.  I’ll run through the software to actually interpret the data in a usable format for MRTG later.

The ENVI-R communicates at 57600 8N1.  If you want to just see the raw output, run this from the command line;

$ stty -F /dev/ttyUSB0 57600
$ cat /dev/ttyUSB0

You should see, about once every five seconds, output something like this;
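I won’t paste my exact capture here, but the ENVI-R (a Current Cost CC128-family unit) emits one XML record per update, roughly of this shape – the values and firmware tag below are invented, and the real thing arrives as a single line:

```
<msg>
  <src>CC128-v0.11</src>
  <dsb>00089</dsb>
  <time>13:02:39</time>
  <tmpr>18.7</tmpr>
  <sensor>0</sensor>
  <id>01234</id>
  <type>1</type>
  <ch1><watts>02450</watts></ch1>
</msg>
```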


If you do, great.  The ENVI-R is all hooked up.  Note that it also monitors temperature; that temperature is the temperature of the receiver unit.  Note too that if you don’t have a transmitter attached, you won’t see any output from the ENVI-R at all.

The Software

For this kind of thing, most people seem to use Cacti.  While Cacti can certainly do more, the box I’m using has very limited memory, so I need to keep the number of running daemons to an absolute minimum.  MRTG can run as a cron job, and doesn’t require a database daemon to be running as well.  For the small number of graphs I’m talking about, Cacti is overkill.

In order to pass data to MRTG, I used a message broker written by IBM (Really Small Message Broker), since it does the job and it’s tiny.  Then I wrote a couple of Perl scripts to handle converting the raw ENVI-R data into moving averages for MRTG to slurp up.  So, the software required is;

  • IBM’s RSMB or other MQTT-compatible message broker
  • MRTG for generating the graphs
  • A web server of some type.  I used THTTPD, since it’s very small and fast.
  • Perl.  It’ll come with Ubuntu, but you will need a number of CPAN libraries.

Up Next

If you get to this point, you should have a connected up ENVI-R with a few clamps, and it should be hurling data out to /dev/ttyUSB0 on your Linux recording box.  Fantastic.

Now for the software….

PowerCLI: Get-VMSize

I’m employed as a server admin, with most of my time spent working with VMware and managing a reasonably sized fleet of machines.  As such, I have a range of various Powershell scripts I’ve written to take advantage of VMware’s PowerCLI interface for Powershell.  PowerCLI is, in a word, great.  It provides some pretty good in-depth insight to what’s going on in vCenter, and since it ties into Powershell, it’s easy to script up whatever you want to do.

Anyhow, below is a filter script, which was cobbled together from some code from a source I can’t recall (if you know the original source, let me know so I can attribute it properly!).  The purpose of this script is to take a bunch of VMs, calculate the Size and Used disk space of those VMs, and dump that out.

Begin {
}

Process {
    $vm = $_

    # Start with the VM's Name and Id; Size and Used get filled in below (in GB).
    $report = $vm | Select-Object Name, Id, Size, Used
    $report.Size = 0
    $report.Used = 0

    # Walk the per-datastore usage figures from the vSphere view of the VM.
    $vmview = $vm | Get-View
    foreach ($disk in $vmview.Storage.PerDatastoreUsage) {
        $dsview = Get-View $disk.Datastore
        # Size = provisioned (committed + uncommitted); Used = committed.
        $report.Size += ($disk.Committed + $disk.Uncommitted) / 1024 / 1024 / 1024
        $report.Used += $disk.Committed / 1024 / 1024 / 1024
    }

    $report.Size = [Math]::Round($report.Size, 2)
    $report.Used = [Math]::Round($report.Used, 2)

    Write-Output $report
}

End {
}
An example of its use follows;

Get-VM | .\Get-VMSize.ps1 | Measure-Object -Property Size -Sum

The input should be a VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl object, such as what gets returned by Get-VM.  Each output object from the script will contain the following fields;

  • Name – The text name of the VM as given by the Name field in the input object.
  • Id – The ID of the VM as given by the Id field of the input object.
  • Size – The sum of the sizes of all disks used by the VM.  This size is the figure set when provisioning the disk, not the actual on-disk allocation (ie, it will be bigger than the disk use if you are using thin provisioning).
  • Used – The amount of actual disk space being used by all disks on the VM.  This size is smaller than Size if the VM is thin provisioned.


WIDCOMM Bluetooth = Crap

So, I got hold of an i-Mate Smartflip smartphone for work, and wanted to configure it for ActiveSync over Bluetooth. I’ve got a Compaq nc8430 laptop. Here’s where the fun begins.

In a nutshell, ActiveSync doesn’t work using the default WIDCOMM Bluetooth stack that comes with the HP. And the Bluetooth chip that’s on the nc8430 doesn’t work with Microsoft’s Bluetooth stack (that comes with Windows XP SP2).

That’s not entirely true. The chip is completely compatible with Microsoft’s Bluetooth driver and stack. It’s just not in the INF file, so the MS driver won’t install. To fix, we crack out Notepad, open up C:\WINDOWS\INF\BTH.INF, and add in a line for the transceiver, as follows;

"HP USB BT Transceiver [1.2]"= BthUsb, USB\Vid_03F0&Pid_0C24
"HP Integrated BT Transceiver [2.0]"= BthUsb, USB\Vid_03F0&Pid_171D

Those two lines are the bit you add.  This then allows the Microsoft Bluetooth driver to think that the BT chip in the nc8430 is compatible, and then allows you to install it (when you select Advanced and pick the driver).  When you go and install the driver again, the Microsoft Bluetooth stack installs itself and you’re rolling.

Once that’s done, create an incoming COM port on your laptop. Then turn on discovery, and in ActiveSync on the phone, tell it to connect via Bluetooth, and follow the instructions to pair. Fixed.

Of note, the Microsoft stack doesn’t seem to work too well with the Nokia 6230i. You need to use the WIDCOMM stack for that.

Fixing W32Time in a Guest OS

NOTE: For informative purposes only. I take no responsibility at all for any harm that may result to your environment as a consequence of this information. Use at your own risk, and research appropriately!

Sometimes you must run W32Time on a guest OS, but it’s not a good idea to run it at the same time as using VMWare Tools time synchronization. A good example of this is a domain controller – it must have W32Time running, must have accurate time, and must supply time to member servers.

First, a note. Don’t just go and point your PDC at some dummy NTP source that doesn’t exist. If you do that, after some period W32Time will just shut down and stop serving time. Instead, we need to find a way to get W32Time and VMWare Tools to co-exist peacefully.

The solution is to set W32Time so it only tries to slew the clock very occasionally, so the adjustments made by VMWare Tools dominate the clock and keep it in sync. Since W32Time is still in contact with a valid time source, it doesn’t commit seppuku.

You can do this by changing this registry key to “Weekly”. Data type is REG_SZ, if you need to create it;


Restart the W32Time service when you do that, and then all should be well. Oh yeah, and after giving it a little while to settle down, don’t forget to check your event viewer, and do a;

w32tm /stripchart /computer:ANOTHER_DC

to check that your machine is still in sync with the rest of your domain.

VMWare & the System Clock

The system clock behaves … strangely … under VMWare in a guest OS. It tends to run slow, and the amount by which it runs slow varies wildly from day to day. Without a fixed time synchronization source, guests will quickly fall out of synchronization and time-critical mechanisms such as Kerberos will break.

This happens because the guest OS assumes that there is a constant period of time between instructions – ie, that it has all of the processor’s attention.  The system clock is a high-precision clock maintained on that assumption.  When you put a guest OS into a virtual environment, this goes out the window – the guest no longer has a consistent period of time between instructions, and in general this time is longer than expected.  This means that individual ticks of the system clock take longer than the fixed (outside) time periods they should, and the system clock runs slow.

So, how do we stop this? Well, one solution is to use a guest-internal NTP client such as W32Time. This is a very bad idea. NTP clients adjust the system clock by slewing (speeding up or slowing down) the clock progressively so they can converge the system clock with NTP time. Since the system clock is running at an unpredictable rate, slewing the clock is a recipe for disaster – it causes the clock to swing to and fro and never stabilize. An unstable clock can cause very strange things to happen.

We could also just keep setting the time as we need to. This is also a very bad idea. Modern OSes rely on the continuity of time. If you just keep resetting the clock, the system ‘loses time’, and tasks that were supposed to run in the lost time just don’t happen.

The solution is to let VMWare Tools handle the problem, and check the box that lets it synchronize with the host. When you do this, VMWare Tools slews the clock appropriately so it doesn’t break anything, and your time converges as you’d expect. When you do this, you must turn off any other kind of time sync software such as W32Time, otherwise they will fight over the clock and much havoc will ensue.

There are certain times when you must run an NTP time server such as W32Time (such as on a domain controller). How you go about preventing W32Time and VMWare Tools fighting is an issue for another post.

NTP Time Synchronization in VI3

Time synchronization is of critical importance in a VMWare infrastructure. If it goes wrong, all hell breaks loose, especially in a Windows 200x environment using Kerberos.

Due to how system clocks work in VMWare, you need to use VMWare Tools’ sync capability to keep your VMs right on time.  This in turn means that all your hosts need to be properly synchronized.

So, how do you do this? If you’re using any fancy deployment solution like Altiris, disable time synchronization in it. Why? Because if you don’t, you’ll forget six months down the track, virtualize your deployment solution, and then wonder why all your clocks go crazy.

Read this article and implement it. That’ll get your NTP daemon sorted out, but that’s not quite enough. You need to get your machine’s system clock and hardware clock in sync before NTP can slew the clock and keep it synchronized.

In order to do that, get into a console on your VI3 server, and do the following (I assume that firewall.contoso.com is one of your NTP sources, change to suit);

service ntpd stop
ntpdate firewall.contoso.com
hwclock --systohc
service ntpd start
watch "ntpq -p"

That will configure your system and hardware clocks to be close to the NTP source you named, and then start a watch process showing you the state of your NTP peers.

After a while, you should see an asterisk appear next to one of the peers (not LOCAL, that’s your host’s internal clock). When that happens, you’re all good.

Making a transform for any MSI installer

Many MSI installers will let you generate unattended installs using command-line arguments, but they may not permit the use of a standard transform (MST) file to make the same customizations.  This is a major problem if you are attempting to deploy software via GPO, since you can’t specify a command line.

There’s a way around this, though. Go and get the Windows 2003 SP1 Platform SDK, and install ORCA from it. The SDK is a big download, but c’est la vie.

Once you’ve got ORCA up and running, make a copy of the MSI file you’re customizing (we’ll call them install.msi and install-cust.msi). Then open up install-cust.msi in ORCA. You will see a VAST number of tables. Don’t worry about them too much. Go find the Property table.

Editing the copied MSI

Now, when you use command-line arguments, what actually happens is the MSI inserts those into the Property table when it runs. So, let’s say you needed to add a TARGETDIR=c:\ argument into the Property table. Go look for the TARGETDIR property, and if you find it, edit it. Otherwise add it by right-clicking on the right-hand pane and clicking Add Row. Enter the values as appropriate. When you’re done, save and close ORCA.
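For instance, after adding that hypothetical TARGETDIR row, the relevant part of the Property table would look something like this:

```
Property    Value
TARGETDIR   c:\
```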

Generating the transform MST

From a command prompt, get to a directory containing both MSIs.  What we’ll run here is msitran.exe, a Microsoft tool that comes with the SDK and generates an MST that’s the diff of two MSIs.

Run the following command, and you’ll get a transform named install.mst;

“c:\program files\microsoft platform sdk\bin\msitran.exe” -g install.msi install-cust.msi install.mst

Voila! You now have an MST for your original MSI that incorporates the changes you wanted!

Manually running the MSI with the MST

In order to test deploy, you just run the following command. That runs the MSI, applies the MST you created, and does so in basic mode (which is what you’d typically use in an unattended install);

msiexec /i install.msi TRANSFORMS=install.mst /qb

Assuming that works fine, go ahead and deploy via your method of choice.