Zigbee2MQTT on RPI with CC2530/CC2591

So, if you’ve flashed a CC2530/CC2591 from my previous post, you now probably want to get it talking to something. Here’s how you can do that.

Assumptions

I will assume you are wanting to do the following;

  • Use a Raspberry Pi 2/3 running Raspbian to act as the bridge between the CC2530/CC2591 and MQTT.
  • You’re using Raspbian Stretch.
  • You want to use Zigbee2MQTT to get this thing talking to something like HomeAssistant.
  • You want to use the RPI’s built-in UART and directly wire the module to the RPI.
  • You don’t care about Bluetooth on the RPI.
  • You already have Docker installed on the RPI.
  • You already have Docker-Compose installed on the RPI.
  • You already have your CC2530/CC2591 flashed with KoenKK’s Z-Stack firmware, and it’s a recent version.
  • You have an MQTT server somewhere already.

Phew. Now with all that in line, let’s get moving.

Hardware Setup

CC2530/CC2591 Pinout Diagram
RPI2/3 GPIO Pinout

Using the two charts above, you will need to make the following connections;

PIN PURPOSE          | CC2591 | RPI PIN
VCC (Supply)         | VCC    | 3V3
GND (Ground)         | GND    | G
RXD (Receive Data)   | P0_2   | RPI Pin 8 (TXD)
TXD (Transmit Data)  | P0_3   | RPI Pin 10 (RXD)

This is the minimum set of pins required. Note that RXD on the CC2591 gets connected to TXD on the RPI. This is normal.

Do not connect the CC2530 to the 5V lines on the RPI. Doing so will likely destroy the CC2530.

Configure UART on the RPI

What we’ll be doing is using the UART on the RPI. There are a number of ways to do this, but we’ll use the method which disables Bluetooth and puts the UART on the high-performance device /dev/ttyAMA0.

Edit your /boot/config.txt and add the following;

dtoverlay=pi3-disable-bt

Then edit /boot/cmdline.txt and remove the following bit from the only line;

console=serial0,115200
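If you'd rather script that removal, here's a sed sketch. It's shown against a sample cmdline string (the sample content is illustrative, not your actual file); run the same expression with sed -i on /boot/cmdline.txt to edit it in place.

```shell
# Sample cmdline for illustration; the sed strips the serial console entry
cmdline='dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rw'
echo "$cmdline" | sed 's/console=serial0,115200 *//'
```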

Reboot your Pi, and the UART pins listed above should now manifest as /dev/ttyAMA0. Don't try to use minicom or similar to connect to it; you won't see much of use.

Set up a Zigbee2MQTT Docker-Compose File

We’re going to use Docker Compose to run zigbee2mqtt in a container. Make a directory somewhere for it, and a data directory, like so;

mkdir -p /srv/zigbee2mqtt/data

Then edit /srv/zigbee2mqtt/docker-compose.yml, and fill it in like this;

version: '2'
services:
  zigbee2mqtt:
    image: koenkk/zigbee2mqtt:arm32v6
    restart: always
    volumes:
      - /srv/zigbee2mqtt/data:/app/data
    devices:
      - /dev/ttyAMA0:/dev/ttyAMA0

Now, when started, this will spin up a zigbee2mqtt service that will always restart when stopped, using /dev/ttyAMA0 as we defined earlier. Lastly, create a /srv/zigbee2mqtt/data/configuration.yaml and fill it in like this;

homeassistant: true
permit_join: true
mqtt:
  base_topic: zigbee2mqtt
  server: 'mqtt://YOURMQTTSERVERHERE:1883'
  include_device_information: true
serial:
  port: /dev/ttyAMA0
advanced:
  log_level: info
  baudrate: 115200
  rtscts: false

I strongly suggest you change the network key, and disable permit_join when you have all your devices up. There’s various other things to do here too, but this should get you started.
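If you want a random key, here's a rough shell sketch that prints a line you can paste under the advanced: section as network_key (this assumes GNU od, and that your zigbee2mqtt version supports the advanced.network_key option):

```shell
# Print a random 16-byte key in zigbee2mqtt's list format,
# e.g. network_key: [23, 5, 198, ...]
od -An -tu1 -N16 /dev/urandom | tr -s ' ' | sed 's/^ //; s/ /, /g; s/^/network_key: [/; s/$/]/'
```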

Once that’s done, a simple;

docker-compose up

should bring up your container. Press Ctrl-\ to break out without terminating it.

Flashing Z-Stack on a CC2530+CC2591 using a Wemos D1 mini

I’m messing about with Zigbee for a comms protocol to various temperature sensors, and this requires a Zigbee Coordinator. There’s a few ways of doing this, but ultimately I settled on a zigbee2mqtt bridge and a cheapie AliExpress CC2530+CC2591 module.

This module incorporates an RF amplifier, but does not have the normal debug header that the CC2530 Zigbee transceivers have and also lacks the USB-TTL adapter chip. Not a problem if you’re using a RPi as the bridge, which is what I plan on doing.

However first, you need to get Z-Stack firmware on it, so you can use it as a coordinator. This proves to be… non-trivial. Especially if you want to use a Wemos D1 Mini as the flashing device (these Wemos things are really good, incidentally).

First Steps – Getting CCLib-Proxy onto the Wemos

Assuming you have a Wemos D1 mini, your first step is to install the Arduino IDE (available from the Windows Store). Once that's in, go to Preferences and add the following URL to the Additional Boards Manager URLs field;

http://arduino.esp8266.com/stable/package_esp8266com_index.json

From there, you should now be able to go to the Boards Manager, and install the esp8266 package. Once that is installed, configure your board as a “LOLIN(WEMOS) D1 R2 & Mini” and select the correct COM port.

Now it’s as simple as downloading CCLib-Proxy from this link. Open up CCLib_Proxy.ino, then change the following lines for the pinout;

int CC_RST  = 5;
int CC_DC   = 4;
int CC_DD_I = 14;
int CC_DD_O = 12;

These mappings are required. Upload the sketch to your device. You now have CCLib-Proxy on the Wemos, ready to go.

Wiring up the Wemos to the CC2530+CC2591 Module

You will need to map various pins on the Wemos to pins on the module, using the following chart;

PIN PURPOSE           | NUMBER ON CC2591 | NUMBER ON WEMOS
DC (Debug Clock)      | P2_2 | D2 (GPIO4)
DD (Debug Data)       | P2_1 | D5 (GPIO14) + D6 (GPIO12)
RST (Reset)           | RST  | D1 (GPIO5)
VCC (Supply)          | VCC  | 3V3
GND (Ground)          | GND  | G
RXD (Receive Data)    | P0_2 | RPI Pin 8 (TXD)
TXD (Transmit Data)   | P0_3 | RPI Pin 10 (RXD)
CTS (Clear To Send)   | P0_5 | RPI Pin 11 (RTS)
RTS (Request To Send) | P0_4 | RPI Pin 36 (CTS)

When using a Wemos as the flashing device, it’s safe to tie the two I/O pins together (D5 and D6) and connect them to the DD pin on the CC2530. It works fine. The P0_2 through P0_5 pins are used when you’re using the finished device, not when flashing (so you don’t need to connect them up).

Pinout for CC2530+CC2591 module

The above diagram shows the pin mappings on the CC2530+CC2591 module itself. Follow those numbers and the pins above to wire it up.

Pinout of Debug Header on CC2530 (not present on combined module)

This diagram shows the pinout of the debug header (which is not present on the CC2591). However, it does show which pins on the CC2591 marry up to what purposes on the debug header (which correspond to pins on the Wemos).

After this is done, you need to use CCLib to flash the firmware.

Flashing the Z-Stack Firmware

Get the firmware from this link. You will also need to install Python 2.7 for your system. Once that’s done, install pyserial with;

pip install pyserial==3.0.1

Edit the firmware .hex you downloaded, and remove the second-to-last line (it won't work with the flasher you're using). Once that is done, connect your Wemos to your computer, and then from the Python directory in your CCLib download, run;

python cc_info.py -p COM9

Assuming that COM9 is your Wemos. You should see output giving you data on the CC2530. If so, fantastic. Now flash it;

python cc_write_flash.py -e -p COM9 --in=YOURFIRMWAREHERE.hex

This will take an extremely long time. Several hours. But you should see progress fairly quickly. Just hang tight. Once that’s done, you have a coordinator!
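Incidentally, the "remove the second-to-last line" edit above can be scripted. A sketch assuming GNU coreutils (tac reverses the file, sed drops what is now line 2, tac reverses it back; the filenames are placeholders):

```shell
# Strip the second-to-last line of the hex file, writing the result to fixed.hex
tac YOURFIRMWAREHERE.hex | sed '2d' | tac > fixed.hex
```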

Next post will deal with testing the coordinator out.

AussieBroadband CheckMK Plugin

I’ve recently changed my ISP to AussieBroadband.  Since I’m now working under a quota, I want a way to monitor my quota in CheckMK.  Enter a bit of python.  If you want to use this, you’ll need to adjust the hashbang to suit your actual OMD site, and then pass a parameter which is your username:password to get onto your ABB account.

#!/omd/sites/checkmk/bin/python2.7
#
# Parses AussieBroadband Quota Page to generate a CheckMK alert and stats pages
#

import requests
import re
import time
import sys
import json

status = 0
statustext = "OK"

try:
    creds = sys.argv[1].split(":")

    # Create a new session
    s = requests.Session()

    # Process a logon
    headers = {
        'User-Agent': 'abb_usage.py'
    }
    payload = {
        'username': creds[0],
        'password': creds[1]
    }
    s.post('https://myaussie-auth.aussiebroadband.com.au/login', headers=headers, data=payload)

    # Fetch customer data and service id
    r = s.get('https://myaussie-api.aussiebroadband.com.au/customer', headers=headers)
    customer = json.loads(r.text)
    sid = str(customer["services"]["NBN"][0]["service_id"])

    # Fetch usage of the first service id
    r = s.get('https://myaussie-api.aussiebroadband.com.au/broadband/'+sid+'/usage', headers=headers)
    usage = json.loads(r.text)
    quota_left = usage["remainingMb"]
    quota_up = usage["uploadedMb"]
    quota_down = usage["downloadedMb"]

    # Derive some parameters for the check
    total = quota_left + quota_up + quota_down
    critthresh = 0.10*total
    warnthresh = 0.25*total

    # Determine the status of the check
    if quota_left < critthresh:
        status = 2
        statustext = "CRITICAL"
    elif quota_left < warnthresh:
        status = 1
        statustext = "WARNING"

    # Format the output message
    print "{7} - {1} MB quota remaining|left={1};{2};{3};0;{4}, upload={5}, download={6}".format(
        status,
        int(quota_left),
        int(warnthresh),
        int(critthresh),
        int(total),
        int(quota_up),
        int(quota_down),
        statustext)

except:
    print "UNKNOWN - Unable to parse usage page!"
    status = 3
    statustext = "UNKNOWN"

sys.exit(status)

Enjoy.  It’s pretty quick and dirty, but it works.  You put this into your site’s local/lib/nagios/plugins directory, then add it as a classical check.

Stopping DNS leakage with pfSense

I’ve recently changed my core router over from OpenWRT to pfSense.  I was pretty happy with OpenWRT, but I wanted something more powerful since it was running in a VM anyway.

A few days ago, CloudFlare announced their new 1.1.1.1 service.  This is a public DNS service very much like Google’s 8.8.8.8 DNS service, with a notable difference.  It supports TLS.

Why should you care?  Because DNS requests are normally not encrypted, and therefore visible to your ISP to record, use for research / marketing purposes, or even (in the case of some nefarious actors) manipulate or change.  Running DNS over TLS prevents that, by encrypting your DNS traffic so that it can’t be manipulated or collected.

In this post, we'll configure pfSense to do three things: provide a local, standard unencrypted port-53 DNS resolver that uses CloudFlare's encrypted 1.1.1.1 service on the WAN side; set up a NAT redirect so that any attempt on the internal network to use a port-53 DNS server outside the network is intercepted and resolved by the internal resolver instead; and block any remaining outbound requests to port 53, just to be sure.

NOTE:  There's one piece here I haven't figured out yet: how to pin a certificate for the DNS endpoints listed here. So it's not perfect; when I figure that out, I'll edit this post.

Let’s get started.

Configuring the pfSense Local Resolver

In pfSense, go to Services -> DNS Resolver, then put the following block into Custom Options:

server:
    ssl-upstream: yes
    do-tcp: yes
forward-zone:
    name: "."
    forward-addr: 1.1.1.1@853
    forward-addr: 1.0.0.1@853
    forward-addr: 2606:4700:4700::1111@853
    forward-addr: 2606:4700:4700::1001@853

You will also need to make sure that the DNS Query Forwarding option is NOT selected, otherwise the above settings will conflict.  It’s OK to set the resolver to listen on all interfaces, since the firewall rules on the WAN will prevent Internet hosts from using your resolver anyway.  Follow the prompts, then test it with something like;

dig www.google.com @yourrouter.local

You should see a resolve against your router’s local DNS resolver that works.  If you really want, use Diagnostics -> Packet Capture, and capture port 853 to verify that requests are being triggered.

Redirect all DNS requests to outside DNS servers to pfSense

Follow the article you can find here.  You will need to do this once for each of your interfaces (in my case, LAN, DMZ, and VPN).  Obviously don’t configure this for the WAN interface.  This then causes any requests to addresses that are not on your internal network to be resolved through the local pfSense resolver (which goes out to port 853 anyway).

To test this, try to dig something against an IP that you know is not internal and is not a DNS server.  It should work, since the request will be NATted.  Something like;

dig www.google.com @1.2.3.4

Assuming that’s all fine, you should now be able to configure a broad block rule to bar all outbound port 53.

Block all outbound non-encrypted DNS

This shouldn’t really be required if the NAT rule is working, but we’ll do it anyway to be sure we’re stopping any DNS leaks.

In pfSense, go to Firewall -> Rules, and for the WAN interface, define a new rule at the top of the list.  This rule should use these settings;

Action: Block
Interface: WAN
Address Family: IPv4+IPv6
Protocol: TCP/UDP
Source: any
Destination: any
Destination Port: DNS (53)
Description: Block outbound insecure DNS

After doing this, verify that you can still resolve against the local resolver (your router’s IP), and that you can still resolve against what seems to be external resolvers (eg, 8.8.8.8).  You should also check that when you do so that nothing passes on the WAN interface on port 53.

If that all passes, you're done.  It's up to you whether you use the 'Block' target or the 'Reject' target.  Block causes a simple timeout if something hits port 53 (which shouldn't happen anyway); Reject causes an immediate failure.

Ubuntu replaces /bin/sh with Dash

Trap for young players.  Ubuntu changes the default interpreter for /bin/sh from Bash to Dash.  This was done for performance reasons, but certain scripts really don't like it.  You can easily change it back with;

sudo dpkg-reconfigure dash

Information about this can be found here.  This was done quite a long time ago, apparently, but for whatever reason scripts that ultimately wind up calling the PHP 7.1 interpreter while under Dash break badly under some circumstances (resulting in PHP segfaulting).
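To see which interpreter /bin/sh currently points at:

```shell
# Resolve the /bin/sh symlink; prints /bin/dash or /bin/bash (or similar)
readlink -f /bin/sh
```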

Darktable for Windows using Vagrant

I have an Olympus TG5 camera, whose RAW format Darktable supports, but only in the very latest (currently unreleased!) 2.3.0 version.  Since I'm on Windows, I'll have to build Darktable directly from source to be able to work with those files.  Here's how you can do that.

First, I assume you have Cygwin/X running.  You’ll also need Vagrant installed, along with VirtualBox.  With all that in place, doing the rest is pretty straightforward.  Create a folder, and throw this Vagrantfile into it;

# -*- mode: ruby -*-
# vi: set ft=ruby :

VMBOX = "bento/ubuntu-16.04"
VMHOSTNAME = "darktable"
VMRAM = "1024"
VMCPU = 2

VAGRANT_COMMAND = ARGV[0]

Vagrant.configure("2") do |config|
  # Configure the hostname for the default machine
  config.vm.hostname = VMHOSTNAME

  # Configure the VirtualBox provider
  config.vm.provider "virtualbox" do |vb, override|
    # The default ubuntu/xenial64 image has issues with vbguest additions
    override.vm.box = VMBOX

    # 1gb RAM, 2 vCPU
    vb.memory = VMRAM
    vb.cpus = VMCPU

    # Configure vbguest auto update options
    override.vbguest.auto_update = false
    override.vbguest.no_install = false
    override.vbguest.no_remote = true
  end

  # Mount this folder as RW in the guest, use this for transferring between host and guest
  config.vm.synced_folder "shared", "/srv/shared", :mount_options => ["rw"]

  # Build the server from a provisioning script (which will build Darktable for us)
  config.vm.provision "shell", inline: <<-SHELL
    # Install essential and optional dependencies
    apt-get update
    apt-get install -y git gcc g++ cmake intltool xsltproc libgtk-3-dev libxml2-utils libxml2-dev liblensfun-dev librsvg2-dev libsqlite3-dev libcurl4-gnutls-dev libjpeg-dev libtiff5-dev liblcms2-dev libjson-glib-dev libexiv2-dev libpugixml-dev
    apt-get install -y libgphoto2-dev libsoup2.4-dev libopenexr-dev libwebp-dev libflickcurl-dev libopenjpeg-dev libsecret-1-dev libgraphicsmagick1-dev libcolord-dev libcolord-gtk-dev libcups2-dev libsdl1.2-dev libsdl-image1.2-dev libgl1-mesa-dev libosmgpsmap-1.0-dev

    # Install usermanual and manpage dependencies
    apt-get install -y default-jdk gnome-doc-utils libsaxon-java fop imagemagick docbook-xml docbook-xsl
    apt-get install -y po4a

    # Install this for Cygwin/X to work properly
    apt-get install -y xauth

    # Pull the master repo
    git clone https://github.com/darktable-org/darktable.git
    cd darktable
    git checkout master

    # Pull the submodules
    git submodule init
    git submodule update

    # Build Darktable
    ./build.sh --prefix /opt/darktable

    # Build documentation
    cd build
    make darktable-usermanual
    make darktable-lua-api
    cd ..

    # Install Darktable
    cmake --build build --target install -- -j2

    # Copy documentation into shared area
    cp build/doc/usermanual/*.pdf /srv/shared/
  SHELL

  # This piece here is run when we use 'vagrant ssh' to configure the SSH client appropriately
  if VAGRANT_COMMAND == "ssh"
    config.ssh.forward_x11 = true
  end

end

Make a shared folder inside that folder, then run vagrant up followed by vagrant ssh.

Assuming everything is configured correctly, you can then start Darktable with;

/opt/darktable/bin/darktable

And off you go.  You can add some more mounts into the VM as required to share your picture library or whatever with it so you can manipulate it with Darktable.
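For example, a picture-library mount (the host path here is hypothetical; adjust it to your own) mirrors the existing shared-folder line in the Vagrantfile:

```ruby
# Mount the host's picture library read-write at /srv/pictures in the guest
config.vm.synced_folder "C:/Users/you/Pictures", "/srv/pictures", :mount_options => ["rw"]
```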

Converting a bunch of OGG music to MP3, preserving metadata

Quick one.  If you have a heap of OGG music that you want to convert to MP3 format, and also want to preserve the metadata that's in the music, run this from Ubuntu;

for name in *.ogg; do ffmpeg -i "$name" -ab 128k -map_metadata 0:s:0 "${name/.ogg/.mp3}"; done
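That ${name/.ogg/.mp3} bit is bash parameter substitution, which derives each output filename from the input filename. A quick illustration (the filename is made up):

```shell
# ${name/.ogg/.mp3} replaces the first ".ogg" in the value of $name
name="track 01.ogg"
echo "${name/.ogg/.mp3}"   # → track 01.mp3
```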

Done and dusted!

Adding an RTC to your Raspberry Pi

I use an RPi 3 as a secondary DNS and DHCP server, and time synchronization is important for that.  Due to some technicalities with how my network is set up, this means that I need a real-time clock on the RPi so that it has at least some idea of the correct time when it powers up, instead of being absolutely dependent on NTP for that.

Enter the DS3231 RTC (available on eBay for a few bucks).  The Pi Hut has an excellent tutorial on setting this up for a RPi, which I’m going to summarize here.

Configure I2C on the RPi

From a root shell (I’m assuming you’re using Raspbian like me);

apt-get install python-smbus
apt-get install i2c-tools

Then, edit your /boot/config.txt and add the following down the bottom;

dtparam=i2c_arm=on
dtoverlay=i2c-rtc,ds3231

Edit your /etc/modules and add the following line;

i2c-dev

Now reboot.  If you do an i2cdetect -y 1 you should see the DS3231 listed as device 0x68.  If you do, great.

Configure Raspbian to use the RTC

After rebooting, the new device should be up, but you won’t be using it yet.  Remove the fake hardware clock with;

apt-get --purge remove fake-hwclock

Now you should be able to do hwclock -r to read the clock, and then hwclock -w to write the current time to it.

And lastly, to make it pull time from the RTC on boot, put the following into /etc/rc.local before the exit 0;

hwclock -s

And you can then add a cronjob in /etc/cron.weekly to run hwclock -w once a week.
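A minimal sketch of such a drop-in (the filename is hypothetical; remember to make it executable with chmod +x):

```shell
#!/bin/sh
# /etc/cron.weekly/hwclock-sync: write the NTP-disciplined system time to the RTC
hwclock -w
```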

Done!

Backing up KVM Virtual Machines with Duplicity + Backblaze

As part of my home DR strategy, I’ve started pushing images of all my virtual machines (as well as my other data) across to Backblaze using Duplicity.  If you want to do the same, here’s how you can do it.

First up, you will need a GnuPG keypair.  We’re going to be writing encrypted backups.  Store copies of those keys somewhere offsite and safe, since you will absolutely need those to do a restore.

Secondly, you’ll need a Backblaze account.  Get one, then generate an API key.  This will be comprised of an account ID and an application key.  You will then need to create a bucket to store your backups in.  Make the bucket private.

Now that’s done, I’m assuming here that you have your /var/lib/libvirt library where your VMs are stored on its own LV.  If this isn’t the case, make it so.  This is so you can take a LV snapshot of the volume (for consistency) and then replicate that to Backblaze.

#!/bin/bash

# Parameters used by the below, customize this
BUCKET="b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME"
TARGET="$BUCKET/YOURFOLDERNAME"
GPGKEYID="YOURGPGKEYIDHERE"
LVNAME=YOURLV
VGPATH=/dev/YOURVG

# Some other parameters
SUFFIX=`date +%s`
SNAPNAME=libvirtbackup-$SUFFIX
SOURCE=/mnt/$SNAPNAME

# Prep and create the LV snap
umount $SOURCE > /dev/null 2>&1
lvremove -f $VGPATH/$SNAPNAME > /dev/null 2>&1
lvcreate --size 10G --snapshot --name $SNAPNAME $VGPATH/$LVNAME || exit 1

# Prep and mount the snap
mkdir $SOURCE || exit 1
mount -o ro,nouuid $VGPATH/$SNAPNAME $SOURCE || exit 1

# Replicate via Duplicity
duplicity \
 --full-if-older-than 3M \
 --encrypt-key $GPGKEYID \
 --allow-source-mismatch \
 $SOURCE $TARGET

# Unmount and remove the LV snap
umount $SOURCE
lvremove -f $VGPATH/$SNAPNAME
rmdir $SOURCE

# Configure incremental/full counts
duplicity remove-all-but-n-full 4 --force $TARGET
duplicity remove-all-inc-of-but-n-full 1 --force $TARGET

Configure the parameters above to suit your environment.  You can use gpg --list-keys to get the 8-digit hexadecimal key ID of the key you're going to encrypt with.  The folder name in your bucket is arbitrary, but you should only use one folder per Duplicity target.  The 10G LV snap size can be adjusted to suit your environment, but it must be large enough to hold all changes made while the backup is running.  I picked 10GB, because that seems OK in my environment.

Obviously this means I need to have 10GB free in the VG that the libvirt LV lives in.

Retention here will run incrementals each time it’s run, do a full every 3 months, ditch any incrementals for any fulls except the latest one, and keep up to 4 fulls.  With a weekly backup, this will amount to a 12 month recovery window, with a 3-monthly resolution after 3 months, and a weekly resolution less than 3 months.  Tune to suit.  Drop that script in /etc/cron.daily or /etc/cron.weekly to run as required.

Important.  Make sure you can do a restore.  Look at the documentation for duplicity restore for help.
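A hedged sketch of a test restore, reusing the placeholder bucket and folder names from the script above (Duplicity will use your GPG key to decrypt; the target directory must not already exist):

```shell
# Restore the most recent backup set into /tmp/restore-test for verification
duplicity restore "b2://ACCOUNTID:APPLICATIONKEY@BUCKETNAME/YOURFOLDERNAME" /tmp/restore-test
```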

Splunkd High CPU after leap second addition?

Had my alerting system yell at me about high CPU load on my Splunk Free VM.

A bit of examination revealed that it was indeed at an abnormally high load average (around 10), although there didn't appear to be anything wrong.  Then a quick look at dmesg made the penny drop;

Jan 1 10:29:59 splunk kernel: Clock: inserting leap second 23:59:60 UTC

Err.  The high CPU load average started at 10:30am, right when the leap second was added.

A restart of all the services resolved the issue.  Load average is back down to its normal levels.