22 Dec 2018, 09:35

The Last Kingdom, a brief review

I just finished watching episodes 1-6 of the BBC historical drama The Last Kingdom. I enjoyed the first few episodes but found myself increasingly unsettled as the episodes went by. Something is very ‘off’ about this story. The TV series is based on Bernard Cornwell’s novels, The Saxon Stories, so any criticism of the story and script is going to be directed primarily at the author.

The Good

Great Britain has a fascinating history, and there was much I learnt from watching TLK. I’m sure much creative license has been taken, but it certainly piqued my interest and encouraged me to find out more about the real history.

The production values are great, with the locations and sets being very believable. The cinematography is cleverly done, and the music matches the tempo and pathos of the scenes very well. The acting is mostly very good, with some standout performances. I particularly liked the character of King Alfred (played by David Dawson) and Viking Lord Ubba (played by Rune Temte).

Copious violence and the occasional sex scene aside, I found it pretty thought-provoking viewing.

The Bad

The violence is gratuitous, and I found it disturbing at times. The portrayal of women is not positive. The few female lead characters seem to be nags, pious busybodies, and mostly there for the amusement of the men.

Uhtred is a really unlikeable character. He’s a foolish, arrogant, petulant man-baby. I hope this is just a story arc that ends with him growing up and setting things right, but by episode 6 I so wanted to punch him in the face.

Ubba by contrast was the bad guy I grew to love, and was sorry to see him meet his inevitable demise.

The Ugly

Humanity always filters history through the lens of the present, and it is no different with TLK.

I couldn’t figure out what moral code the main character, Uhtred, lives by. On the one hand he angrily calls for ‘justice to be done!’; on the other, he picks and chooses what is right and wrong based on nothing more than what feels right for him at the time.

The constant mocking of Christianity I found really tiresome. There is zero recognition that western civilisation itself is deeply rooted in its Judeo-Christian heritage. It’s almost as if Christianity was some annoying baggage hindering progress, instead of the driving force behind Britain and Europe’s advance towards what became the pinnacle of human achievement and knowledge.

The pagan beliefs are portrayed through a very modern prism. There is no judgement in their belief system - you just do your best to observe all the right superstitions and pay your respects to the right gods and you’ll be fine - Valhalla awaits. Of course, the reality of pagan belief is a world of darkness and bondage: human sacrifice, anyone?

15 Nov 2018, 08:00

Morning Devotion

“The Lord’s portion is his people.” – Deuteronomy 32:9

How are they his? By his own sovereign choice. He chose them, and set his love upon them. This he did altogether apart from any goodness in them at the time, or any goodness which he foresaw in them. He had mercy on whom he would have mercy, and ordained a chosen company unto eternal life; thus, therefore, are they his by his unconstrained election.

They are not only his by choice, but by purchase. He has bought and paid for them to the utmost farthing, hence about his title there can be no dispute. Not with corruptible things, as with silver and gold, but with the precious blood of the Lord Jesus Christ, the Lord’s portion has been fully redeemed. There is no mortgage on his estate; no suits can be raised by opposing claimants, the price was paid in open court, and the Church is the Lord’s freehold forever. See the blood-mark upon all the chosen, invisible to human eye, but known to Christ, for “the Lord knoweth them that are his”; he forgetteth none of those whom he has redeemed from among men; he counts the sheep for whom he laid down his life, and remembers well the Church for which he gave himself.

They are also his by conquest. What a battle he had in us before we would be won! How long he laid siege to our hearts! How often he sent us terms of capitulation! but we barred our gates, and fenced our walls against him. Do we not remember that glorious hour when he carried our hearts by storm? When he placed his cross against the wall, and scaled our ramparts, planting on our strongholds the blood-red flag of his omnipotent mercy? Yes, we are, indeed, the conquered captives of his omnipotent love. Thus chosen, purchased, and subdued, the rights of our divine possessor are inalienable: we rejoice that we never can be our own; and we desire, day by day, to do his will, and to show forth his glory.

~ Charles Spurgeon

This brought me to tears. Amazing Grace how sweet the sound! What a mystery that God both chooses those who would be His, and yet at the same time makes salvation available to anyone and everyone. It makes no sense in the natural, but God’s ways aren’t our ways.

01 Oct 2018, 12:54

FFmpeg Recipes

Any technology, no matter how primitive, is magic to those who don’t understand it. - Arthur C. Clarke / Mark Stanley

Here are some magical incantations that help me do cool stuff with FFmpeg.

Fade vision + audio between 2 videos

ffmpeg -i big_buck_bunny.mp4 -i blackout.mkv -filter_complex \
"[0:v]fade=t=out:st=5:d=1[v0]; \
 [1:v]setpts=PTS-STARTPTS+5/TB[v1]; \
 [v0][v1]overlay[v]; \
 [0:a][1:a]acrossfade=d=1[a]" \
-map "[v]" -map "[a]" result.mp4
  • fade=t=out:st=5:d=1 starts fading out the first clip at 5 seconds, with a fade duration of 1 second
  • acrossfade=d=1 crossfades the audio from clip 1 to clip 2 over a duration of 1 second
  • setpts=PTS-STARTPTS+5/TB delays the second clip so it starts 5 seconds after the first; 5/TB means “5 divided by the time base”

Thanks to this post: https://opensourceforu.com/2015/04/get-friendly-with-ffmpeg/
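To reuse the recipe with different timings, the filtergraph string can be generated by a small helper. This is my own sketch (the function name and parameters are mine), built from the filter options described in the notes above:

```shell
# Hypothetical helper: build the crossfade filtergraph for a given
# fade start time and fade duration, both in seconds.
mkgraph() {
    st=$1; d=$2
    echo "[0:v]fade=t=out:st=${st}:d=${d}[v0];[1:v]setpts=PTS-STARTPTS+${st}/TB[v1];[v0][v1]overlay[v];[0:a][1:a]acrossfade=d=${d}[a]"
}

mkgraph 5 1
```

The result can then be passed straight to -filter_complex, e.g. ffmpeg -i a.mp4 -i b.mp4 -filter_complex "$(mkgraph 5 1)" -map "[v]" -map "[a]" out.mp4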

Mux mp4 + subtitles into an MKV

ffmpeg -fflags +genpts -i infile.mp4 -f srt -i subtitles.srt \
-map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s srt outfile.mkv

-fflags +genpts is necessary if you get Can't write packet with unknown timestamp errors.
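To mux a whole season in one go, the command can be wrapped in a loop. Here’s a dry-run sketch (the naming convention - matching .mp4/.srt pairs - is my assumption) that prints one command per pair; drop the echo to actually run them:

```shell
# Print a mux command for each .mp4 that has a matching .srt alongside it
mux_all() {
    for v in *.mp4; do
        s="${v%.mp4}.srt"
        echo ffmpeg -fflags +genpts -i "$v" -f srt -i "$s" \
             -map 0:0 -map 0:1 -map 1:0 -c:v copy -c:a copy -c:s srt "${v%.mp4}.mkv"
    done
}

# demo with dummy files
touch ep1.mp4 ep1.srt
mux_all
```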

Extract part of video using start + end times

Transcode to lossless FLAC while applying loudness normalisation to the audio track

# process tab-delimited 'runsheet' file with format:
#	output name	input name	start time	end time
# where start/end time are hh:mm:ss

tosecs() {
	date '+%s' --date="$1"
}

fromsecs() {
	h=$(( $1 / 3600 )); m=$(( $1 % 3600 / 60 )); s=$(( $1 % 60 ))
	printf "%02d:%02d:%02d\n" $h $m $s
}

IFS=$'\n'
for i in `cat runsheet`; do

	oname=`echo "$i" | cut -f1`
	iname=`echo "$i" | cut -f2 | tr -d "'"`
	start=`echo "$i" | cut -f3`
	end=`echo "$i" | cut -f4`

	to=""
	if [[ -n $end ]]; then
		t=$(( $(tosecs "$end") - $(tosecs "$start") ))
		to="-t $(fromsecs $t)"
	fi
	if [[ -n $start ]]; then
		ss="-ss $start"
	else
		ss="-ss 00:00:00"
	fi

	cmd="ffmpeg $ss $to -i \"../$iname\" -vcodec copy -acodec flac -filter:a loudnorm \"$oname.mkv\""
	echo $cmd
	eval $cmd
done
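The tosecs helper leans on GNU date, which isn’t always available (BusyBox, macOS). The same conversions can be done without it; a sketch using awk for the parsing (so leading zeros aren’t misread as octal):

```shell
# hh:mm:ss -> seconds, without GNU date
tosecs() {
    echo "$1" | awk -F: '{ print $1 * 3600 + $2 * 60 + $3 }'
}

# seconds -> hh:mm:ss
fromsecs() {
    printf '%02d:%02d:%02d\n' $(( $1 / 3600 )) $(( $1 % 3600 / 60 )) $(( $1 % 60 ))
}

tosecs 01:02:03     # 3723
fromsecs 3723       # 01:02:03
```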


Record from microphone, encode to Opus and send to network socket

ffmpeg -f pulse -i default -acodec libopus -b:a 96000 -vbr on -compression_level 10 -f rtp rtp://

Play from RTP stream:

Create SDP file using details output from ffmpeg command.

o=- 0 0 IN IP4
s=No Name
c=IN IP4
t=0 0
a=tool:libavformat 58.12.100
m=audio 1234 RTP/AVP 97
a=rtpmap:97 opus/48000/2
a=fmtp:97 sprop-stereo=1

Then pass SDP file to RTP client:

ffplay -protocol_whitelist file,udp,rtp -i opus.sdp

Video Capture Using EasyCAP USB analog to digital converter

  • Capture audio from device 1 (hw:1)
  • Capture video from /dev/video0 as PAL (720x576, 50Hz)
  • de-interlace (yadif)
  • encode video using the ‘fast’ preset (using slow resulted in dropped frames)
  • encode audio as AAC 128kb

software encoding

ffmpeg \
  -f alsa -ac 2 -thread_queue_size 512 -i hw:1 \
  -f v4l2 -standard PAL -thread_queue_size 512 -i /dev/video0 \
  -vf yadif -c:v libx264 -preset fast -crf 23 \
  -c:a aac -b:a 128k \
  out.mkv

hardware encoding using VAAPI

ffmpeg \
  -f alsa -ac 2 -thread_queue_size 1024 -i hw:1 \
  -f v4l2 -standard PAL -thread_queue_size 1024 -i /dev/video0 \
  -vaapi_device /dev/dri/renderD128 -vf 'format=nv12,hwupload' -threads 4 -vcodec h264_vaapi -qp:v 23 \
  -c:a aac -b:a 128k \
  out.mkv

No deinterlacing with hardware encoding :(

Convert sequence of JPEG images to MP4 video

Simple glob:

ffmpeg -r 24 -pattern_type glob -i '*.JPG' -s hd1080 -vcodec libx264 timelapse.mp4

Start from DSC_0079.JPG:

ffmpeg -r 24 -f image2 -start_number 79 -i DSC_%04d.JPG -s hd1080 -vcodec libx264 timelapse2.mp4
  • -r 24 - output frame rate
  • -s hd1080 - 1920x1080 resolution

Slower, better quality

Add the following after -vcodec libx264 to achieve better quality output

-crf 18 -preset slow

Bulk convert JPGs to 1920x1080, centered

convert input.jpg -resize '1920x1080^' -gravity center -crop '1920x1080+0+0' output.jpg
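For actual bulk conversion, the single-file command can be wrapped in a loop. A dry-run sketch (the output directory name is mine) that prints one convert command per file; drop the echo to execute:

```shell
# Print one convert command per given JPG, writing results into ./resized
batch_convert() {
    mkdir -p resized
    for f in "$@"; do
        echo convert "$f" -resize "1920x1080^" -gravity center \
             -crop "1920x1080+0+0" "resized/$f"
    done
}

batch_convert a.jpg b.jpg
```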

10 Mar 2018, 00:00

Selling a motor vehicle online

I recently sold a motorcycle via online classifieds. It was a learning experience - more accurately a re-learning experience. I’ve sold motor vehicles privately before and encountered many of the same things - it’s just that I do it so infrequently that I forget the tricks and the traps!

What follows are some thoughts and advice to my future self, when next time I go to sell a motor vehicle:

  • figure out what a fair price is in the current market. It doesn’t matter what you paid for it. Sentimental value counts for nothing. All that matters is what are people willing to pay for it today, and that will largely be based on what other vehicles of same spec are going for in your area. This can change from month to month (and even go up!), so it’s important to keep an eye on things and adjust accordingly.
  • negotiating on price is a given: most buyers will ask for a discount, so factor that into your listing price
  • set your lowest price from the beginning and stick to it. It’s OK to adjust it, just not when you’ve got a buyer pressuring you!
  • never disclose your lowest price! Almost every second enquiry asked me outright: “what’s your lowest price?”. There are a few approaches to this question:
    • say something like ‘all reasonable offers will be considered’ - throw the ball back in their court
    • say that you’re happy to negotiate but only in person, and suggest they come and inspect the item. I mean, how serious can they really be if they’re making out like they’d buy it without looking at it in person!?
    • give them a token discount, and say “I’d be happy to take $X” - something well above your lowest price
  • be patient. You’ll likely get a few vultures swoop in early offering a low bid. Hold out and you will be rewarded
  • there are better times of year to sell than others. The warmer months are better times to sell a motorcycle. Also, avoid busy holiday periods e.g. Christmas/New Years
  • commercial buyers (car yards etc) should be a last resort - they will only ever offer you well below a fair market price
  • your time is precious; don’t waste it on people who are ‘tire kickers’:
    • don’t negotiate price via text message. Tell people that you’re willing to negotiate but via phone or in person only. That will weed out 90% of the time wasters
  • Be polite but firm. There’s nothing to be gained by being rude or getting angry with people.

A real life example of a problem buyer:

Him: “hey, can you deliver to [town name]?” (place 2hrs away from where I live!)
Me: “No, sorry”
Him: “ok, what’s your bottom price seeing as you won’t deliver”
Me: “I’d be happy with $X” (my listed price minus a bit, nowhere near the bottom)
Him: (proceeds to ask 100 questions about the bike)
Me: (answer politely)
Him: “hey, I don’t have a ramp to load the bike. Do you?”
Me: “sure no problem”
Him: “OK, I’ll offer you $Y…” (lower, but still reasonable) “including the ramp”
Me: “I can do $XY…” (half way between X and Y.) “including the ramp”
Him: “I’m offering $Y. You sure you still want $XY?” (LOL!)
[…time passes…]
Him: “I’m in another state (News to me! Turns out the place he mentioned earlier was just the half way point.) which means the registration is worthless to me. I’m offering $Z (silly low ball) and that’s my final offer”
Me: (sigh) “The price is $XY minus the registration component, take it or leave it”

10 Feb 2018, 09:35

Wireguard VPN

UPDATE 2018-08-06 Wireguard has been submitted for inclusion into the Linux Kernel source tree.

I recently stumbled upon what I think may be the holy grail - a VPN method that is simple to configure, high performance, and (so I’m told) highly secure. Until now my experience of using VPNs was that you could choose any two of the above, but never expect to get all three!

The ease with which you can get this up and running is quite astonishing. The documentation is quite good, but still has a few holes which hopefully will be covered here, adapted from https://www.wireguard.com/install/

Wireguard is conceptually quite different to other VPN products in that there isn’t a daemon that runs - it all happens in the Linux kernel. There also isn’t any state: no concept of a tunnel being ‘up’ or ‘down’ - just a standard network interface with configuration applied to it, not dissimilar to a wifi interface. This has the advantage of allowing traffic to route seamlessly between, for example, fixed and wireless connections.

NOTE: Wireguard is not yet merged into mainline kernel which means compiling the required kernel module from source. Fortunately, thanks to DKMS this step is painless.


Both ends of the VPN described here are running stock Centos7

$ curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
$ yum install epel-release
$ yum install wireguard-dkms wireguard-tools


Configuring wireguard can be done from the command line with the ip (from the iproute package) and wg (from the wireguard package) commands. However, I would recommend not doing that, and instead using the included systemd service file, which reads from a config file, described below.

Each endpoint has a single config file, similar to this: /etc/wireguard/wg0.conf

[Interface]
ListenPort = 51820
PrivateKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Address =

[Peer]
PublicKey = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
AllowedIPs =
Endpoint = x.x.x.x:51820
  • Endpoint = x.x.x.x:51820 corresponds with the public IP and listening port of your peer.
  • AllowedIPs = is set to any IPs or subnets that should be routed via this tunnel.

Ensure IPv4 routing is enabled. This is set in /etc/sysctl.conf with: net.ipv4.ip_forward = 1


The public/private keys are generated using the wg utility. wg genkey generates the private key string. This string can then be piped into wg pubkey to generate the corresponding public key, e.g. as an all-in-one command:

$ wg genkey | tee /dev/tty | wg pubkey

The private key (first line) goes into the local config file, and the public key (second line) goes into the peer’s config file.


Systemd can bring the VPN up/down using the included wg-quick service file. To set the VPN to come up on boot enable the service:

systemctl enable wg-quick@wg0

Now, start/stop the service like so:

systemctl start wg-quick@wg0
systemctl stop wg-quick@wg0

This adds the wg0 interface, and inserts routes corresponding with the list of allowed IPs specified in the config file.



The VPN itself uses a single UDP port. For the VPN tunnel to connect, both ends must be able to reach each other on UDP port 51820. The port number is configurable.


  • the tunnel uses the addresses for A-end and for B-end
  • routes for each end’s network(s) are sent via the VPN interface wg0


Having done all the above, if things don’t appear to be working, here are some things to look at first:

  • Check systemd log for the wg-quick@wg0 service: journalctl -u wg-quick@wg0
  • Check the wg0 interface is up with ip addr:
6: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8921 qdisc noqueue state UNKNOWN qlen 1
    inet scope global wg0
       valid_lft forever preferred_lft forever
    inet6 fe80::5e6:1a69:4213:44ba/64 scope link flags 800 
       valid_lft forever preferred_lft forever
  • Attempt to ping the endpoint on the other end of the tunnel e.g. ping
  • Run tcpdump on each endpoint to see what traffic is coming in/out of the ethernet interfaces (eth0). Encrypted VPN traffic will show up as UDP packets on port 51820.
  • Run tcpdump on each endpoint’s wireguard interface (wg0) to see what’s passing over the tunnel itself.

Provided that the wireguard config is correct - the keys match up, and allowed IPs are set - then you’re going to be dealing with a routing or firewalling issue somewhere in between.

02 Feb 2018, 21:40

OpenWRT Geofencing

Geofencing refers to allowing or blocking a connection based on country of origin. Unfortunately, OpenWRT / LEDE does not support geofencing out of the box.

The way I’ve worked around this is described below.

Shell script

I created a shell script /root/bin/ipset-update.sh to pull Maxmind Geolite2 database periodically, and use that to populate an ipset set which I can then reference from iptables.

The shell script is based on this one, with a few tweaks for Openwrt: change bash to ash and adjust the paths of some of the utilities (ipset, curl, etc.)

The shell script takes 2 arguments: the ipset name to create/update, and the country name to pull out of the Geolite database.

If I call it like this: /root/bin/ipset-update.sh australia Australia then the resulting ipsets are australia4 and australia6 (ipv4 + ipv6 respectively)


The Maxmind database changes over time, so it’s important to update it on a periodic basis. I installed a cronjob as root user to run the script once per week:

0 0 * * sun     /root/bin/ipset-update.sh australia Australia


From the command line, you can use the new ipset like so:

iptables -A INPUT -p tcp -m tcp --dport 22 -m set --match-set australia4 src -j ACCEPT
ip6tables -A INPUT -p tcp -m tcp --dport 22 -m set --match-set australia6 src -j ACCEPT

I prefer to use the Luci interface for configuring the firewall. While it doesn’t support setting an ipset target directly it does allow you to specify arbitrary iptables switches. When creating a port forwarding or traffic rule that requires geofencing, I put the following in the Extra arguments: section: -m set --match-set australia4 src.

As the interface only allows one ipset to be specified per rule, you can either create multiple rules for multiple countries, or create one ipset that combines multiple countries into a group.
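For the combined-set option, the per-country CIDR lists need to be merged into one ipset. Since the exact file layout depends on the update script, here’s a dry-run sketch (set and file names are hypothetical; one CIDR per line assumed) that prints the ipset commands rather than running them:

```shell
# Print the ipset commands needed to merge several one-CIDR-per-line
# zone files into a single set
combine_zones() {
    out=$1; shift
    echo "ipset create $out hash:net -exist"
    cat "$@" | while read -r net; do
        echo "ipset add $out $net -exist"
    done
}

# demo with two tiny zone files
printf '1.0.0.0/8\n' > au.zone
printf '2.0.0.0/8\n' > nz.zone
combine_zones anz4 au.zone nz.zone
```

Piping the output through sh (instead of echoing) would apply it; the combined set can then be used in a single -m set --match-set anz4 src rule.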

Survive reboot

At boot time, rules that use ipsets will fail to load, as the ipsets will not exist at that point. To work around that, I put the following lines into /etc/rc.local:

/root/bin/ipset-update.sh australia Australia
/etc/init.d/firewall restart

21 Dec 2017, 21:40

Upload to S3 from URL

I recently had the need to transfer a large file from a remote host to S3 but had insufficient local storage to make a temporary local copy. Fortunately the AWS command line tools allow for this by reading the piped output of curl, as follows:

curl https://remote-server/file | aws s3 cp - s3://mybucket/file

22 Nov 2017, 21:40

Mac Mini + Centos7

I recently had a need to install Linux on a 2014 Mac Mini. Naturally I chose Centos! 😄 I had some trouble finding a straightforward HOWTO to follow, so for the benefit of others wanting to do the same thing, here are the steps I took:

  • download the latest Centos 7 minimal ISO, and transfer that to a USB stick using dd e.g.
sudo dd if=CentOS-7-x86_64-Minimal-1708.iso of=/dev/sdb bs=8M
  • insert the USB stick into the Mac Mini
  • (re)boot the Mac Mini and hold the C key down immediately after power on - this will take you to a boot disk selection screen
  • select the USB stick (labelled ‘efi’) and proceed to boot from it
  • from here is a standard Centos install routine, with the exception of disk partitions:
    • perform a manual disk partition setup.
    • You should see 3 existing logical partitions: the EFI partition, the partition holding the existing MacOS install, and a recovery partition of around 600MB.
    • wipe the MacOS partition and add your Centos mountpoints there as required. (keep the MacOS recovery partition in case you want to revert back in future)
    • ensure that the existing EFI partition gets mounted to /boot/efi
  • proceed with Centos install

Something odd I noticed was that the onboard NIC did not show a link light when plugged in; however, the network was connected (after I ran dhclient to get an IP).

02 Jul 2017, 14:17

Be your own tunnel broker: 6in4

This article describes how to configure a 6in4 service using your own VPS host. Tunnelling is done using IP protocol 41, which encapsulates IPv6 inside IPv4.

Unfortunately my broadband provider does not offer IPv6. To work around that I tunnel to my VPS host over IPv4 and use IPv6 that way. I could use a tunnel broker such as Hurricane Electric, however their closest endpoint is far enough away that the additional latency makes it a pretty unattractive option. My VPS provider is close enough that latency over the tunnel is actually not much different to native IPv4!

Tunnel Configuration

For this example, the VPS host public IP is x.x.x.x and the home broadband public IP is y.y.y.y

VPS host

My VPS provider has allocated a /56 prefix to my host - aaaa:bbbb:cccc:5b00::/56. From that I’m going to sub-allocate aaaa:bbbb:cccc:5b10::/60 to the tunnel, as follows:

  • aaaa:bbbb:cccc:5b10::a/127, aaaa:bbbb:cccc:5b10::b/127 - each end of the tunnel
  • aaaa:bbbb:cccc:5b11::/64 - subnet for use on the home network
# Create sit interface 'sittun'
ip tunnel add sittun mode sit local x.x.x.x remote y.y.y.y ttl 64 dev eth0
# Allocate an IPv6 address to the local end (remote end will be ::b)
ip addr add dev sittun aaaa:bbbb:cccc:5b10::a/127
# Route a /64 prefix down the tunnel for use on the home network
ip -6 route add aaaa:bbbb:cccc:5b11::/64 via aaaa:bbbb:cccc:5b10::b
# Bring the interface up
ip link set dev sittun up

Home router

ip tunnel add sittun mode sit local y.y.y.y remote x.x.x.x ttl 64 dev enp1s0
ip addr add dev sittun aaaa:bbbb:cccc:5b10::b/127
# VPS host IP is the default route for all IPv6 traffic
ip -6 route add default via aaaa:bbbb:cccc:5b10::a
ip link set dev sittun up

If the router does not have a public IP (behind a NAT device), then it is necessary to specify the private IP for the local end rather than the public IP e.g. ip tunnel add sittun mode sit local remote x.x.x.x ttl 64 dev enp1s0 The NAT device will then need to forward 6in4 traffic to

Firewalling / Routing

VPS Host

The VPS host needs to have routing enabled for IPv6:

sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.eth0.accept_ra=2

The second command is required if eth0 has a SLAAC assigned IP (most likely).

The VPS host needs to allow protocol 41 packets from the client IP. The following iptables command will do:

iptables -I INPUT -p 41 -s y.y.y.y -j ACCEPT

The following rules are required in the ip6tables FORWARD chain to permit connectivity between the home network and the Internet:

ip6tables -I FORWARD -i sittun -j ACCEPT
ip6tables -I FORWARD -o sittun -j ACCEPT

Home router

We need v6 ip forwarding:

sysctl -w net.ipv6.conf.all.forwarding=1

Allow protocol 41 from our VPS host:

iptables -I INPUT -p 41 -s x.x.x.x -j ACCEPT

The home network needs some basic firewall rules to protect it from unrestricted access from the IPv6 Internet. The following is a suggested minimal ruleset:

# Allow return traffic from the internet
ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# ICMPv6 is required for IPv6 to operate properly
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow all from your LAN interface
ip6tables -A FORWARD -i <lan interface> -j ACCEPT
# Reject all else
ip6tables -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited

25 Jun 2017, 13:38

HAProxy: rate limits + TLS SNI

At work we have been using AWS elastic load balancers for some time now and have found them simple to use and reliable. Unfortunately, the tradeoff for that simplicity is a lack of features and control. The main issue we’re facing is the need to implement basic rate-limiting controls on our web frontend to reduce the impact of abuse. I’m actually a little surprised that AWS does not offer some basic rate-limit functionality in ELB (maybe it’s coming?). The other annoyance is having to provision a separate ELB instance for each of our SSL certificates due to the lack of SNI support.

So I’m investigating the possibility of replacing our multiple ELB instances with a pair of HAProxy instances running on EC2. This post is a place to dump notes and thoughts on the process.


What I’m aiming for will have the following properties:

  • high availability - both in terms of frontend (client facing) and backend (application server facing)
    • frontend: auto-failover between public IP addresses assigned to HAproxy instances
    • backend: auto-failover between application servers being proxied to
  • rate limiting capability - layer 4 minimum, ideally up to layer 7
  • TLS SNI - allow multiple SSL certificates to be served from the same IP address

Frontend high-availability

The plan is to use AWS route53 DNS health checks to provide a DNS based failover mechanism. This can be done one of two ways:

  • active/standby configuration: route53 will return the primary IP (if healthy), otherwise will return the secondary IP
  • multi-value configuration: route53 will return whichever IPs are considered healthy


If all we needed was basic rate limiting based on client IP address, then the simplest solution would be a firewall out in front running iptables and its built-in rate-limiting functionality. This is not suitable for our needs, as we need a) more intelligent rate-limiting capability and b) the ability to maintain rate-limit state between multiple frontend peers. HAProxy provides a solution to both of these needs.

On a), HAProxy allows a rate limit to be applied to almost any aspect of a TCP or HTTP transaction. On b), sharing of rate-limit counters between HAProxy peers was added in HAProxy 1.6, with the caveat ‘must be used for safe reload and server failover only’. For a pair of HAProxy nodes in a low-traffic scenario, I’m betting this will be ‘good enough’ for my HA needs.


The following are the relevant parts of haproxy.cfg. This isn’t supposed to be any kind of ‘production’ config - it’s just used for testing purposes.

peers hapeers
    peer haproxy1
    peer haproxy2

frontend https
    # *.pem files read from directory '/etc/haproxy/ssl'. 
    # Certificate will be matched against SNI, otherwise first certificate will be used
    bind *:443 ssl crt /etc/haproxy/ssl/
    default_backend bk_one

    tcp-request inspect-delay 5s

    stick-table type ip size 200k expire 30s peers hapeers store gpc0

    # backends increment the frontend gpc0 (sc0_inc_gpc0) on abuse, which we're checking here
    acl source_is_abuser src_get_gpc0 gt 0

    # don't track abuser while it's getting redirected to rate-limit
    tcp-request connection track-sc0 src if !source_is_abuser
    tcp-request content accept if { req_ssl_hello_type 1 }
    # tell backend client is using https
    http-request set-header X-Forwarded-Proto https if { ssl_fc }

    # redirect abuser to rate-limit backend until their entry expires (30s above)
    use_backend rate-limit if source_is_abuser

    use_backend bk_one if { ssl_fc_sni -i demo1.example.com }
    use_backend bk_two if { ssl_fc_sni -i demo2.example.com }

# mostly the same as 'https' frontend, minus SSL bits
frontend http
    bind *:80
    default_backend             bk_one

    tcp-request inspect-delay 5s

    stick-table type ip size 200k expire 30s peers hapeers store gpc0

    # backends increment the frontend gpc0 (sc0_inc_gpc0) on abuse, which we're checking here
    acl source_is_abuser src_get_gpc0 gt 0

    # don't track abuser while it's getting redirected to rate-limit
    tcp-request connection track-sc0 src if !source_is_abuser

    # redirect abuser to rate-limit backend until their entry expires (30s above)
    use_backend rate-limit if source_is_abuser

    use_backend bk_one if { hdr(Host) -i demo1.example.com }
    use_backend bk_two if { hdr(Host) -i demo2.example.com }

backend bk_one
    balance     roundrobin
    server  app1 web.a:80 check
    server  app2 web.b:80 check

    stick-table type ip   size 200k   expire 5m  peers hapeers  store conn_rate(30s),bytes_out_rate(60s)

    tcp-request content  track-sc1 src
    # 10 connections is approximately 1 page load! Increase to suit
    acl conn_rate_abuse  sc1_conn_rate gt 10
    acl data_rate_abuse  sc1_bytes_out_rate  gt 20000000

    # abuse is marked in the frontend so that it's shared between all sites
    acl mark_as_abuser   sc0_inc_gpc0 gt 0
    tcp-request content  reject if conn_rate_abuse mark_as_abuser
    tcp-request content  reject if data_rate_abuse mark_as_abuser

backend bk_two

    [... same as bk_one, just using different backend servers ...]

backend rate-limit
    # custom .http file displaying a 'rate limited' message
    errorfile 503 /usr/share/haproxy/503-ratelimit.http
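An HAProxy errorfile is a raw, pre-formatted HTTP response served verbatim. A minimal 503-ratelimit.http might look like this (the wording is mine):

```http
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body>
<h1>Too many requests</h1>
<p>You have been rate limited. Please wait a moment and try again.</p>
</body></html>
```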

12 Jun 2017, 21:40

Geotrust SSL chain + Zimbra

I recently ordered a RapidSSL SHA256 CA cert for one of my Zimbra servers. I had all sorts of trouble getting openssl to verify the complete SSL chain - intermediates, plus CA certs.

The RapidSSL docs provide a link to an SSL bundle here; however, that alone is not sufficient to allow Openssl to completely verify the chain. I downloaded the bundle, put it into the file ca_chain.crt, then ran openssl verify but got this error:

# openssl verify -verbose -CAfile ca_chain.crt cert.pem 
cert: C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
error 2 at 2 depth lookup:unable to get issuer certificate

It turns out the bundle supplied by RapidSSL is only intermediates, and does not include the very top level cert. I didn’t realise this at first which caused a bit of confusion. I ended up stepping through each certificate to figure out where the missing link was. I did this by splitting out each cert into a separate file and running openssl x509 -in <certfile> -text -noout and looking at the Issuer: line to see which cert comes next in the chain, then checking that one in turn.
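Splitting the bundle into one file per certificate can be scripted; a sketch using awk (the output file names are mine):

```shell
# Split a PEM bundle into cert-1.pem, cert-2.pem, ... one per certificate
split_bundle() {
    awk '/-----BEGIN CERTIFICATE-----/ { n++ }
         n { print > ("cert-" n ".pem") }' "$1"
}

# demo with a fake two-certificate bundle
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > bundle.crt
split_bundle bundle.crt
ls cert-*.pem
```

Each piece can then be inspected with openssl x509 -in cert-N.pem -text -noout as described above.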

After that exercise I realised I was missing the top level certificate - ‘Equifax Secure Certificate Authority’:

# openssl x509 -in ca.crt -text -noout
        Version: 3 (0x2)
        Serial Number: 1227750 (0x12bbe6)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority

I found that here: https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&actp=CROSSLINK&id=SO28589

Once I appended that cert to my bundle file, verify then returned OK:

# openssl verify -CAfile ca_chain.crt cert 
cert: OK

10 May 2017, 21:40

Building LEDE / Openwrt for x86

EDIT: 2018-03-12, LEDE and Openwrt have merged. References to LEDE here can be substituted for Openwrt.

I had a need to run LEDE on x86 hardware. Building a custom LEDE image seemed a bit daunting at first, but it turned out to be quite straightforward. The build described here is tailored for the Qotom J1900 mini PC.

Building the custom image

I chose to build the LEDE x86_64 image within a Docker container like so:

$ docker pull centos
$ docker run -it centos /bin/bash
<container>$ cd root/
<container>$ yum install wget make gcc openssl which xz perl zlib-static ncurses-devel perl-Thread-Queue.noarch gcc-c++ git file unzip bzip2
<container>$ wget https://downloads.lede-project.org/releases/17.01.1/targets/x86/64/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ tar -xvJf lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ cd lede-imagebuilder-17.01.1-x86-64.Linux-x86_64

Build the image. I want USB keyboard support, and don’t need the e1000 or Realtek drivers:

 make image packages="-kmod-e1000e -kmod-e1000 -kmod-r8169 kmod-usb-hid kmod-usb3 kmod-usb2"

The new images are located under ./bin/targets/x86/64 inside the build environment

# ls -l bin/targets/x86/64
total 35852
-rw-r--r--. 1 root root  5587318 May  9 20:36 lede-17.01.1-x86-64-combined-ext4.img.gz
-rw-r--r--. 1 root root 19466174 May  9 20:36 lede-17.01.1-x86-64-combined-squashfs.img
-rw-r--r--. 1 root root  2439806 May  9 20:36 lede-17.01.1-x86-64-generic-rootfs.tar.gz
-rw-r--r--. 1 root root     1968 May  9 20:36 lede-17.01.1-x86-64-generic.manifest
-rw-r--r--. 1 root root  2711691 May  9 20:36 lede-17.01.1-x86-64-rootfs-ext4.img.gz
-rw-r--r--. 1 root root  2164670 May  9 20:36 lede-17.01.1-x86-64-rootfs-squashfs.img
-rw-r--r--. 1  106  111  2620880 Apr 17 17:53 lede-17.01.1-x86-64-vmlinuz
-rw-r--r--. 1 root root      731 May  9 20:36 sha256sums

We just need the combined-ext4 image. Copy it out of the Docker container to a USB flash drive:

$ docker cp <container id>:/root/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64/bin/targets/x86/64/lede-17.01.1-x86-64-combined-ext4.img.gz /mnt

Installing the custom image

  • boot the mini PC using any Linux rescue disk. (I used SystemRescueCD)
  • insert a second USB flash disk containing the image created above
  • write the image to the mini PC internal drive:
$ mount /dev/sdc1 /mnt; cd /mnt
$ gunzip lede-17.01.1-x86-64-combined-ext4.img.gz
$ dd if=lede-17.01.1-x86-64-combined-ext4.img of=/dev/sda
  • (optionally) resize the LEDE data partition to fill the entire size of the internal storage
    • use fdisk/parted to remove second partition (/dev/sda2)
    • re-add second partition with the same starting block as before, but make the end block the last block on the disk
    • save the new partition table
    • run e2fsck -f /dev/sda2 followed by resize2fs /dev/sda2
  • reboot the device
  • access the device via the VGA console, or telnet to its IP (no root password is set!)
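
The optional resize steps above can be sketched as a single helper, run from the rescue environment (the function name is my own; assumes GNU parted is available and partition 2 is the LEDE data partition):

```shell
# expand_rootfs DISK: grow partition 2 of DISK to fill the disk, then
# grow the ext4 filesystem inside it, e.g. expand_rootfs /dev/sda
expand_rootfs() {
  disk="$1"
  parted -s "$disk" resizepart 2 100%   # move the end of partition 2 to the last block
  e2fsck -f "${disk}2"                  # check the filesystem first
  resize2fs "${disk}2"                  # grow ext4 to fill the partition
}
```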

06 May 2017, 21:40

Open source router

I recently went through the exercise of setting up a gateway router for one of my customers. The choices I had to make were two-fold: hardware & software.

Hardware

I wanted to find the sweet spot between affordability, processing power and reliability. I could pick up an old desktop PC for $0 which would be more than adequate in terms of performance, however I wasn’t confident it would last the distance running 24x7 in a non air-conditioned storage room!

A low power ARM chip on a consumer router (that would support OpenWRT) was my next thought, however these tend to be a little underpowered for what I needed, not to mention very limited in terms of RAM + persistent storage.

I ended up getting a ‘mini pc’ with the following properties:

  • fan-less (heat dissipation via heat sink & aluminium chassis)
  • low power consumption quad-core x86-64 CPU
  • 2 GB RAM, 16GB SSD flash (expandable)
  • 4x 1Gbit Ethernet ports

AUD$250 including delivery from AliExpress. Something the above lacks, which others may want, is hardware offload for crypto (AES-NI).

Software

This was a harder choice in a lot of ways - there are so many options!! While the hardware I have is capable of running pretty much any Linux or BSD distro, I decided at the outset that I really needed a purpose built firewall distro that includes a web gui interface. I reviewed the following:

pfSense

https://www.pfsense.org/ · FreeBSD based

Being possibly the best known open source firewall distro available, I felt obliged to check it out. Certainly very slick, and years of constant refinement certainly shine through.

At the end of the day, I feel a certain unease about the future direction of pfSense. The open-source community does seem to be taking a back seat as the public face becomes more corporate friendly.

OPNsense

https://opnsense.org/ · FreeBSD based

OPNSense is a fork of pfSense and as such is very similar in many ways. Something that really impressed me about the project is the enthusiasm and effort being put in by the core developers. I submitted a couple of bug reports to their Github repo and both were fixed very quickly. The UI has been completely reworked, so it’s quite different to pfSense’s, yet equally slick and easy to use, while possibly lacking some of the bells and whistles.

Definitely one to keep an eye on.

IPFire

http://www.ipfire.org/ · Linux based

I’m afraid I couldn’t spare much time for this distro. The web UI is looking very dated. I’m sure it does the job, but without a nicer UI experience, I may as well just stick to the command line.

OpenWRT

https://openwrt.org/ · Linux based

OpenWRT is designed for low-end, embedded hardware, and what they’ve managed to achieve with such limited hardware resources is astonishing! Sadly x86 support is lacking - the prebuilt image I used didn’t detect all CPU cores or available RAM!? - so it was crossed off the list pretty quickly.

If you’re after a distro for your wifi/modem/router device, then OpenWRT fits the bill nicely. A word of warning however, the documentation is atrocious! But hey, I’ll take what I can get.

LEDE Project

https://lede-project.org/ · Linux based

LEDE is a fork of OpenWRT. As such, it’s a younger project which seems to have a more vibrant community than its parent. I had originally passed it over, assuming it would be more or less identical to OpenWRT given how recently it forked. Somebody pointed me back to it citing better x86 support, so I thought I’d give it a spin. I’m glad I did, as this is what I’ve ended up using for my install!


I ended up going with LEDE for these reasons:

  • runs Linux. I’m simply more comfortable with Linux on the command line which gives me more confidence when things go wrong.
  • is an extremely lightweight distro out of the box that offers advanced functionality via an easy-to-use packaging system
  • a gui that strikes a good balance between usability, feature set and simplicity
  • supports my x86 hardware (unlike OpenWRT)

Update December 2017

I’ve been using LEDE for 6 months and overall I’m very happy with it. There are a couple of issues I’ve encountered worth mentioning:

  • I found the firewall configuration confusing where it talks about ‘zone forwardings’ vs iptables ‘forward’ chain. I wrote this Stack Exchange post to clarify (and remind myself how it works!)
  • upgrading between LEDE releases is far from fool-proof. The upgrade process requires you to upgrade the running system in place. Upon reboot, you’re left holding your breath wondering if it’s actually going to boot! Not something I’d ever want to attempt remotely. Better approaches I’ve seen allow you to load the new software version into a secondary partition that you then flag as being the next place to boot from (Ubiquiti works this way).

22 Apr 2017, 21:40

Docker and IPTables on a public host

NOTE: This post applies to Docker < 17.06

By default Docker leaves network ports wide open to the world. It is up to you as the sysadmin to lock these down. Ideally you would have a firewall somewhere upstream between your host and the Internet where you can lock down access. However, in a lot of cases you have to do the firewalling on the same host that runs Docker. Unfortunately, Docker makes it tricky to create custom iptables rules that take precedence over the allow-all ruleset that Docker introduces. There is a pull request that promises to help in this regard.

Until the fix is available [EDIT: fixed in 17.06], the way I work around this problem is as follows:

Create a systemd service that runs my custom rules after the Docker service starts/restarts - /etc/systemd/system/docker-firewall.service:

[Unit]
Description=Supplementary Docker Firewall Rules
After=docker.service
PartOf=docker.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/docker-firewall.sh

[Install]
WantedBy=docker.service


The file /usr/local/bin/docker-firewall.sh is a shell script which simply inserts IPTables rules at the top of the ‘DOCKER’ chain:

#!/bin/bash
# called by docker-firewall.service
# Work around for controlling access to docker ports until PR#1675 is merged
# https://github.com/docker/libnetwork/pull/1675

IPTABLES=/sbin/iptables
CHAIN=DOCKER

$IPTABLES -I $CHAIN -i eth0 -j RETURN
$IPTABLES -I $CHAIN -i eth0 -s <trusted IP> -j ACCEPT
$IPTABLES -I $CHAIN -i eth0 -p tcp -m multiport --dport <port list> -j ACCEPT


  • you should modify these rules to suit. Put only those ports that you want open to the world in the <port list>, and any trusted IPs that should have access to all ports in the <trusted IP> field.
  • the rules are specified in reverse order, as they are being inserted at the top of the chain. You could instead specify the insert index (e.g. 1,2,3).
  • make sure the shell script is executable (chmod +x).
  • I’ve chosen to RETURN Internet traffic that doesn’t match the first 2 rules, but you may choose to simply DROP that traffic there. Either way, the last rule must take final action on Internet traffic to ensure that subsequent Docker rules don’t allow it in!!

Once those files have been created, you can enable the service with systemctl enable docker-firewall. Now when you restart Docker (or upon reboot), this service will run afterwards and you’ll see your custom rules appear at the top of the DOCKER chain in iptables.

19 Feb 2017, 21:40

Git over HTTP with Apache

I had a requirement at work recently for our Git repo to be reachable via HTTP. I looked at GitLab, however came to the conclusion that it was probably overkill for our situation. I went with the following setup instead, which can be run on the most minimal VM, e.g. an AWS EC2 nano instance.

The instructions were intended for Centos 7, however they should work without much modification on other distros.

  • Install required packages:
yum install httpd git mod_ssl
  • Ensure that the ‘mod_cgid’ and ‘mod_alias’ modules are loaded in your Apache config
  • Append the following config to /etc/httpd/conf/httpd.conf to force SSL by redirecting non-SSL to SSL:
<VirtualHost *:80>
	ServerName gitweb.example.com
	Redirect permanent / https://gitweb.example.com/
</VirtualHost>
  • Modify /etc/httpd/conf.d/ssl.conf and add this to the default vhost:
SetEnv GIT_PROJECT_ROOT /data/git
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

<LocationMatch "^/git/">
        Options +ExecCGI
        AuthType Basic
        AuthName "git repository"
        AuthUserFile /data/git/htpasswd.git
        Require valid-user
</LocationMatch>
  • Create git repo and give Apache rw permissions:
mkdir /data/git
cd /data/git; git init --bare myrepo
mv myrepo/hooks/post-update.sample myrepo/hooks/post-update
chown apache:apache /data/git -R

File post-update should now contain:

#!/bin/sh
exec git update-server-info
  • Create htpasswd file to protect repo:
htpasswd -c /data/git/htpasswd.git dev
  • Update SELinux to allow Apache rw access to repos:
semanage fcontext -a -t httpd_sys_rw_content_t "/data/git(/.*)?"
restorecon -v /data -R
  • Start Apache:
systemctl start httpd
systemctl enable httpd
  • Push to the repo from your client as follows:
git push https://gitweb.example.com/git/myrepo -f --mirror
  • Pull from repo to your client as follows:
git pull https://dev@gitweb.example.com/git/myrepo master

04 Jan 2017, 01:17

Australia and Asylum Seekers

Australia’s Immigration Detention Situation

Australia receives a lot of criticism from all sides, including from within, for its stance on illegal immigration. AFAIK, the current policy for anyone arriving illegally by sea is automatic detention in an offshore facility. The reason for this stance is deterrence - to discourage others from making the same journey. Automatic detention is pretty obvious, but the offshore component is not so obvious. Again, from what I understand the reason for that is due to legal issues. If Australia were to detain immigrants onshore, then those immigrants would have many more options for pursuing legal challenges in Australian courts. When kept offshore, those avenues are denied to them.

Why is it that we have around 1000 people locked up in detention several years after the main influx arrived? As far as I know, the majority of these people have been assessed and found to be genuine refugees (as per UN convention). So why are they still detained? I’ve found it incredibly hard to get any information to answer this question.

From what I understand, Australia is not obliged to settle these people even if they have been assessed as being refugees and in fact, the Australian government has said their policy is to not settle these people under any circumstances! Instead, various deals have been made with other countries to take these people but for various reasons these deals have either fallen through, or the asylum seekers have refused the offer of re-settlement.

We are constantly told by the mainstream media that the detention centres are hell on earth, inhumane etc. The whole issue is so heavily politicized that I find it very hard to trust either side’s narrative. The obvious question remains however - why would an asylum seeker choose to stay in detention rather than be re-settled somewhere?

It has also been reported that asylum seekers have been offered large sums of money to return to their home countries, but almost all have refused. If someone genuinely fears for their life if returned to their home country, then this would make perfect sense, however I find it hard to believe that this is the reality for most of these people. So what other explanation can there be?

The answer to me seems clear: they believe they still have a chance to be settled in Australia, despite all the government’s proclamations to the contrary. And to get into Australia (or any comparable Western democracy) is by far and away the best option possible in their minds - including staying in detention, and enduring whatever hardships exist there. We live in a very connected world, and these people are no exception. They are in touch, via the Internet, with people who have been settled in various places around the world and can tell them first-hand what conditions are like on the ground. Australia scores near the top of the list in terms of desirability.

Where is the balance between rule-of-law and compassion?

Clearly Australia must control its borders. It must have the option of denying people entry who do not qualify as genuine refugees.

But what is a ‘genuine’ refugee? There must be a definition that countries can use which holds up to legal scrutiny. The UN refugee convention states that:

A person who owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it. Refugee Council

I have no idea what the process would be for immigration officials to give someone a score against that definition. What I do know is that the issue of illegal immigration has been with us for a long time, so this isn’t a new problem. I can only hope the experts have their ways and means that don’t involve bribes or coercion.

The Australian Government is in an almost impossible position. How do you assess an asylum claim accurately with little or no documentation, and where the person in question, in a lot of cases, is actively seeking to derail the process for fear that a true assessment might result in an unfavourable outcome for them? If a true assessment can’t be made, what then? The person is detained until such time as:

  • new information comes to light relevant to the asylum application
  • the person voluntarily chooses to return home
  • Australia errs on the side of leniency and grants refugee status regardless.

We have a classic game of brinkmanship. Both sides waiting for the other side to blink: the government waiting for the asylum seeker to give up and go home; the asylum seeker hoping for the politicians to cave in due to political pressure etc.

09 Jul 2016, 13:38

White Australia, Blessing in Disguise

For approximately 100 years (c.1850-c.1950) Australia had a policy of preferring immigrants from Britain and European countries. The origins of the policy are rooted in the gold-rushes of the 19th century, and tensions between the majority white miners (both local and immigrant) and Chinese immigrant miners. In many cases the Chinese miners were more successful than their white counterparts due to their hard work ethic and ability to work cooperatively amongst themselves - traits that hold true today. This success, combined with the social barrier that different culture and language present, caused much resentment from whites leading to protests and riots.

The subsequent restrictions on non-white immigration were later referred to collectively as the ‘White Australia Policy’ (WAP), although this was never the official name. It should also be noted that immigrants of non-white ethnicities were never expelled from the country on the basis of their ethnicity during this period. Many Chinese Australians can trace their ancestry back to the miners of the 19th century.

Today the common narrative in academia and the media is that this policy was a bad thing, a stain on Australia’s history, and that we’ve progressed beyond such primitive and parochial ideas. To the contrary, I think the WAP has been a net positive, and modern Australians owe a debt of gratitude to the political leaders of that era for their foresight and resolve. The WAP allowed Australia to pass through its crucial adolescent years as a nation with one overarching culture - British - with other highly compatible Western European cultures being mixed in. This provided a solid foundation on which to build a national identity - something that modern Australians (of all ethnic backgrounds) can all unite around.

This is not to say that people with white skin or with European ancestry should get any preferential treatment or privileges - not at all!! It is to say however that someone from another cultural background living in Australia should be expected to learn to speak English and adopt the values of western civilisation: respect for individual rights, respect for the sanctity of life, the rule of law, and treating people fairly irrespective of gender, age or sexuality.

If Australia had not implemented the WAP then it is unlikely Australia would have a national identity that spans the entire continent. As such, it’s unlikely that we could have formed the commonwealth of Australia, and instead may have ended up with smaller nation states, divided by culture and language. In such a scenario, it’s hard to see how we would have withstood military interference from the likes of China or Indonesia, as it’s unlikely we would have the strong military and economic ties we currently enjoy with America.

28 May 2016, 13:38

Map Area Measurement Tool

Today I created a tool for measuring areas on maps: http://pace7.com/utils/maparea/

Map tiles courtesy of Thunderforest and map data courtesy of Openstreetmap.

08 Nov 2015, 13:38

Garmin Contour Map

For anyone who’s interested, I’ve created a contour map of South East Queensland that is suitable for use with Garmin devices (I use a GPSmap 62s).

You can download the .img file here.

To install, copy the file onto your SD card under the Garmin folder/directory. On your device make sure the map is ‘enabled’. The contour map will overlay contour lines over your main map. Have fun!

30 Oct 2015, 13:38

Gumtree Classifieds

I’ve been using the Gumtree classifieds website a lot recently and have noticed some odd behaviour from people advertising their goods.

No photo

What’s the deal with so many ads not having a single photo of the item being advertised!? Isn’t it just common sense that an ad with a photo will be far more likely to get views than one without? Are these people really so lazy that they couldn’t be bothered to take a photo and upload it with their ad? Everyone has a mobile phone these days with a camera, and Gumtree has a mobile app which makes it very easy to upload photos…so there really is no excuse. I’m pretty confident if people had to pay for their advert, they might be inclined to make that little bit of extra effort.

Dodgy photos

OK, so the seller has made the effort to take a photo and upload it. Why in hell did they take the photo at night, with the item stuffed in a corner where it’s hardly visible?

Sob story

The number of ads where the seller tells some rambling story about their personal circumstances - why they’re forced to sell due to hard times, what they’re going to do with the money, and any number of other totally irrelevant details - is staggering. Seriously, cut the crap and just give me the facts about the item already!

Little relevant detail

Some ads might have a fantastic photo, but barely any relevant details about the item. For example, I’ve seen many motor vehicle ads where the seller hasn’t bothered to list the essentials: make, model, year of manufacture, mileage.

Swaps

Swaps, swaps, swaps! It seems like every second ad wants to swap their thing for some other thing. Often, what they want to swap for is in a completely different category, or of significantly different value… what are these people smoking?

No time wasters, No scammers (please)

So many ads telling time-wasters, tyre kickers and scammers to stop doing… what they do. Time for a reality check - if you sell anything in a public Internet forum you will, at some point, encounter these people. People who whinge about it in their ad are just turning off the honest buyers (at least that’s the effect it has on me).

20 Aug 2015, 13:38

Dishwasher vs Hand Wash

At the end of a dinner party recently I got up and offered to help ‘wash the dishes’. Some of the other guests responded in a somewhat astonished tone, ‘oh, no need - she has a dishwasher!’ (she being the hostess), as if the dishes would somehow put themselves in the machine.

This got me thinking: just how much more time, effort, energy and water does a dishwasher actually save? The benefits of a washing machine (for washing clothes) are abundantly obvious to me. When you compare what people had to do prior to the invention of the washing machine with what we do now, the difference is starkly in favour of the washing machine. The benefits of the dishwasher, in comparison, are not as immediately apparent.

I should be quick to say that I have nothing against dishwashers per se. If people feel that it helps them, then great - use a dishwasher by all means. The problem I have is with what seems to be a blanket rule that dishwashers are always superior without any application of critical thinking.

This is something I would love to see a group like Mythbusters look into: set up an experiment where a standard load of dishes is cleaned via dishwasher vs hand wash, considering all the variables involved… Well it turns out that Mythbusters have already done this experiment. Unfortunately, the experiment was sponsored by a dishwasher company and so the result was always going to favour the dishwasher, duh! Hopefully someone else will do a fairer, more balanced comparison.

Other comparisons I found focused on water usage and hygiene - neither are things that I see as being major issues. Unless you hand wash under a constantly running tap, then water usage shouldn’t be that much different. On the point of hygiene, most of that seems to be scare mongering propagated by dishwasher companies! Time and effort are the two major concerns I have, followed closely by overall $$ cost.

I must confess that I have never lived in a house where a dishwasher was in regular use. I have pretty much always washed by hand as a result. When I have stayed in houses with a dishwasher, I’ve not tended to use them: I know how long it takes me to wash by hand, and it doesn’t feel like a chore. I think that last point is key - I’m well practiced. People who are not used to washing by hand will naturally take longer to do the job and will actually be working harder, their hands and arms not being used to the routine.

When I wash by hand, each item moves from one side of the sink to the other in a constant, steady stream - not much slower than it would take me to place that same item into the dishwasher. I also tend to leave items to drip dry - come back in a few hours and simply put the items away, using a tea towel to wipe any damp bits. Again, not much slower than taking the item from the dishwasher and putting it away. So far, the dishwasher is slightly ahead in time saved and effort required.

Then we have the pots and pans that cannot be cleaned consistently in a dishwasher. Most people I know don’t bother trying to put them in the dishwasher and end up washing them by hand. So on that point, both methods are equal. Once you add on the extra costs of a dishwasher - upfront cost, electricity usage, special detergent - the overall benefits of a dishwasher, in my mind, are marginal at best.

A case where I see a mechanised dishwasher as being superior to hand washing would be anywhere that an industrial dishwasher can be used - for example restaurants, hospitals etc. I have no doubt that at that large scale a machine will save a huge amount of time and effort. For domestic use with typical family-sized loads, I don’t see a compelling case for the dishwasher.

19 Jun 2015, 13:38

Ad Blockers and the Web

With the recent announcement that iOS 9 Safari will enable the blocking of web ads has come much weeping and gnashing of teeth from those on the web who’ve come to rely on ad-based revenue. If Apple deliver on the ad blocking, then many of these ad-supported sites will be severely impacted. I, for one, have no sympathy whatsoever. The writing has been on the wall for a very long time now.

I moved to an iPhone from Android about 6 months ago. Not because I love Apple or iPhones, only that I was given the handset. I’ve been mostly happy with the move except for the ad situation. Android supports loading the Firefox browser, and Firefox supports loading AdBlock (one of the first things I do whenever installing Firefox!). No such option on iPhone, so all of a sudden I’m being bombarded with ads every time I browse the web. It was getting to the point where I was seriously considering ditching the iPhone for Android! This announcement by Apple has caused me to hold off until the full details of what Apple are planning emerge. I don’t know what Apple’s motivations are for taking this step. If they were listening to user demand, they would have moved a long time ago. As it is, I suspect they have some other scheme cooking. Perhaps they’ll pull the same trick as AdBlock, and make ad companies pay to be exempt from the ad blocking. If they do that, then I’m jumping ship pronto!

If the blocking of ads causes many web sites to close down and we return to the web of the early 2000’s - simple web pages containing amateur content - GREAT!! Don’t get me wrong, I use and appreciate the ‘pro’ websites, it’s just that I don’t depend on them. If they’re gone tomorrow, I won’t miss them.

UPDATE Oct 2015: it turns out that Apple have chosen not to offer the ad blocking feature on older 32-bit devices. That included my iPhone 5. Needless to say, I wasn’t shelling out for a new iPhone. I sold my iPhone and bought a brand new Android handset for less than half of what I got for the iPhone! It’s been over a month now and I couldn’t be happier - no more ads!

11 May 2015, 19:18

OpenWRT and IPv6

I just configured my home network to use IPv6. My router runs OpenWRT ‘Barrier Breaker’ which supports IPv6, so it was just a matter of switching on and configuring the functionality.

Unfortunately, my ISP does not provide native IPv6, so I’m using an IPv6 tunnel courtesy of Hurricane Electric’s Tunnelbroker service.

Configuring my router

The 6in4 tunnel

Hurricane Electric provide a handy auto-generated config snippet specifically for OpenWRT, so it was a simple matter of:

  • installing the 6in4 package - opkg install 6in4
  • updating my /etc/config/network file with the supplied config
  • restarting the network with /etc/init.d/network restart

For reference, my network config looks something like this:

config interface 'wan6'
	option proto 6in4
	option peeraddr  ''
	option ip6addr   '2001:470:aaaa:467::2/64'
	option ip6prefix '2001:470:bbbb:467::/64'
	option tunnelid  '12341234'
	option username  'aaaabbbb'
	option updatekey 'xxxxxxxxxxxx'
	option dns '2001:470:20::2'

LAN interface

The next important step is to decide how you want IP addressing to work on your LAN. IPv6 address assignment can be done in 3 ways:

RA Only

In this mode clients get all their address info using NDP (neighbour discovery protocol). Thanks to RFC 6106, RAs can also contain DNS resolver information, so if that’s all you need, a DHCPv6 server may not be required.

RA with DHCPv6 (default mode for OpenWRT)

In this mode clients get their primary address info via the RA, but are told to try DHCPv6 for additional config.

NOTE: If you use this mode, then you need to ensure you have a working DHCPv6 server as well. Clients will attempt to solicit a DHCP address, and if the server is not running or not configured correctly then the client won’t configure properly. It seems obvious now, but this caused me some confusion at first when my client was failing to configure due to my DHCP server being disabled.

DHCPv6 only

In this mode clients are told to get all their address config from the DHCP server.

OpenWRT ‘Barrier Breaker’ uses the odhcpd process to manage both RA (router advertisements) and DHCPv6. It takes its config from /etc/config/dhcp. By default, my ‘lan’ config looked like this:

config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'server'
	option ra 'server'
	option ra_management '1'

The address assignment mode is specified by the ra_management setting:

  • 0: RA only
  • 1: RA with DHCP
  • 2: DHCP only

I have no need for a DHCPv6 server on my LAN so I set option ra_management '0' and disabled the DHCPv6 server with option dhcpv6 'disabled'

Configuring my client

I run Fedora 21 Linux on my desktop which supports IPv6 out of the box. NetworkManager can be configured in ‘Automatic’ or ‘Automatic, DHCP only’ modes. I just had to ensure that it was set to ‘Automatic’ and everything just worked.

Something to keep in mind with Linux clients is that, by default, router advertisements will be ignored on any interface that is forwarding traffic (routing). If you’re running Docker, then this is relevant to you! See this post for more information.
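
For example, to keep accepting RAs on a forwarding host, you can set accept_ra to 2 on the LAN-facing interface (eth0 here is an assumption; substitute your own interface):

```shell
# accept_ra=2: accept Router Advertisements even when forwarding is enabled
sysctl -w net.ipv6.conf.eth0.accept_ra=2
```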

20 Jan 2015, 04:00

AngularJS: Form Validation

A common scenario when validating form input is to call back to the server to check some detail or other before the final submission. For example, where the user has been asked to select a username, we might choose to verify that the username is available ahead of time.

Rather than creating scope variables to keep track of whether or not a form is valid, we are better off using the built-in validation facility that AngularJS provides out of the box!

One powerful feature is the ability to set custom error conditions on a form field (in addition to minlength, required etc.). The following code snippets provide an example of how this can be used:


<form name="form" novalidate ng-submit="ConfirmAccount()">
	<p ng-show="form.$dirty && form.username.$error.conflict">
		Username is not available. Please try another one.
	</p>
	<input type="text" ng-focus="form.$setPristine()" name="username" ng-model="confirm.username">
</form>

form.username.$error.conflict will exist and be set to true by our controller once the username check has been performed.


$scope.ConfirmAccount = function() {
	checkUserAvailable(user.username).then(function(result) {
		$scope.form.username.$setValidity("conflict", result.usernameAvailable);
		if( $scope.form.$invalid ) { return; }

		// proceed with form logic
	});
};

If the server returns result.usernameAvailable == false then the validity of form.username will be false, and our error message will be displayed. The "conflict" key is an arbitrary label that I’ve chosen to indicate when the username is already taken, but you can choose anything you like.

06 Jan 2015, 00:20

Go: too many open files

Recently while creating a basic HTTP/HTTPS monitoring app, Pingo2, I started seeing a too many open files error. This error was thrown after the app had been running for some time, whenever it attempted to open a new network connection.

Of course, in Unix/Linux network sockets are just files, so this error message actually makes sense in that context. First thing to do was run lsof to see exactly which files the process had open:

lsof | grep 19991
myapp    19991 20009 monitoring   38u     IPv4            4685252       0t0        TCP dev.example.com:44449->foobar:https (ESTABLISHED)
myapp    19991 20009 monitoring   39u     IPv4            4685250       0t0        TCP dev.example.com:45459->xxx.xxx.189.184:https (ESTABLISHED)
myapp    19991 20009 monitoring   40u     IPv4            4685251       0t0        TCP dev.example.com:45460->xxx.xxx.189.184:https (ESTABLISHED)
myapp    19991 20009 monitoring   41u     IPv4            4685253       0t0        TCP dev.example.com:44450->foobar:https (ESTABLISHED)
myapp    19991 20009 monitoring   42u     IPv4            4685268       0t0        TCP dev.example.com:44454->foobar:https (ESTABLISHED)
myapp    19991 20009 monitoring   43u     IPv4            4685266       0t0        TCP dev.example.com:45464->xxx.xxx.189.184:https (ESTABLISHED)

So, lots of ESTABLISHED network connections, each one corresponding to an HTTP connection my program had initiated. I was explicitly closing the HTTP response body after each request, and thought that was sufficient for the connection to close down by itself. However, it turns out that the default HTTP transport has TCP keep-alives enabled. The TCP connections were piling up in the background as a result.

Creating a custom http.Transport for the HTTP client with DisableKeepAlives: true fixed the issue.

11 Aug 2014, 05:28

AWS: Custom Centos Image

I recently had a need to deploy a t2.micro instance on EC2 running Centos. Unfortunately, there are no official Centos AMIs available that will run on the newer HVM instance types.

The AWS marketplace has several 3rd party Centos AMIs that support HVM. I used one of these as a basis for the new install.

Centos has the ability to boot up into a VNC server from which a network install can be done. Using this facility, I was able to create the custom install as follows:

  1. Launch HVM Centos instance (using AMI from marketplace)
  2. Insert into grub a new entry which will boot the VNC server
  3. Install new grub and reboot
  4. Connect via VNC to TCP port 5901 and proceed with normal Centos install
  5. ssh to new install and comment out the mac address HWADDR= from /etc/sysconfig/network-scripts/ifcfg-eth0
  6. Create an image from the instance

The grub entry I used is as follows:

title Centos Install (PXE)
root (hd0,0)
kernel /boot/vmlinuz.cent.pxe vnc vncpassword=xxxxxxxxxxxx headless ip=dhcp ksdevice=eth0 method=http://mirror.centos.org/centos-6/6.5/os/x86_64/ lang=en_US keymap=us xen_blkfront.sda_is_xvda=1
initrd /boot/initrd.img.cent.pxe

This needs to go before your existing boot entry. Replace vncpassword value with your own. Note the xen_blkfront.sda_is_xvda=1 - this is required so the Centos installer can map the correct device name for your block device.

To apply the new config, run the following commands:

# grub
grub> device (hd0) /dev/xvda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Thanks to this post on Sentris.com for the PXE boot idea.

22 Jun 2014, 09:29

CoreOS install to a VPS

I’ve just spun up my first install of CoreOS. I found the process a little confusing at times as the doco isn’t terribly clear in places. CoreOS is a work in progress, so doco will improve I’m sure. In the meantime, hopefully this post will be of some help to others.

The host machine I used was a standard VPS from my hosting provider running on top of KVM. My hosting provider provides a console facility using NoVNC and the ability to attach bootable ISO media.

ISO Boot

Using the supplied ISO from CoreOS, boot the machine. You will end up at a shell prompt, logged in as user core. At this point, you’re simply running the LiveCD and nothing has been installed to disk yet (something the doco does not make clear!)

In my case the network had not yet been configured, so I needed to do that manually as follows:

sudo ifconfig <network port> <ip address> netmask <netmask>
sudo route add default gw <default gateway IP>

Add your nameserver IP to /etc/resolv.conf. I used Google’s e.g. nameserver 8.8.8.8

Config file

Once network is configured, the next thing to do is grab a config file which will be used each time your new CoreOS installation boots from disk. On another host, reachable via the network, I created the following file named cloud-config.yml:


#cloud-config

hostname: myhostname

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: static.network
      content: |
        [Network]

users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa AAAA<rest of ssh key goes here>
    groups:
      - sudo
      - docker

Update the hostname, discovery:, [Network] and ssh-rsa sections to suit yourself.

IMPORTANT: be sure to run your config file through a YAML parser to check for any silly errors. For example, I accidentally left off a - in front of one of the keys which caused the entire config to fail to load!


  1. Copy the config file to the CoreOS host e.g. wget http://externalhost/cloud-config.yml
  2. Install CoreOS to the local disk with the following command:

coreos-install -d /dev/vda -c cloud-config.yml

Replace /dev/vda with your device name and cloud-config.yml with your config file name. The install only takes about 30 seconds. Once finished, unmount the ISO media and reboot your machine.

Once booted you’ll arrive at a login prompt. If your config was loaded successfully, you should see the IP address and hostname (you specified in the config) listed just above the login prompt. You should also be able to SSH in (using the SSH key supplied in the config) e.g. ssh core@x.x.x.x


Firewall

By default, CoreOS boots up with a completely open firewall policy. In most cases this is fine as your host’s management interface would be isolated from the wider network. In my case, using a public VPS, I needed to configure some basic iptables rules.

This was done by adding the following additional unit to cloud-config.yml:

    - name: iptables.service
      command: start
      content: |
        [Unit]
        Description=Packet Filtering Framework
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecReload=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecStop=/usr/sbin/iptables --flush ; /usr/sbin/ip6tables --flush

I then created files /etc/iptables.rules and /etc/ip6tables.rules containing appropriate rulesets. These are applied every time the host boots.

(Thanks to this Github gist for the idea)


Troubleshooting

If, for some reason, your config doesn’t load:

  1. Reboot using the ISO media
  2. Mount the ninth partition on the disk e.g. sudo mount /dev/vda9 /mnt (to view all partitions on the disk you can use sudo parted /dev/vda print)
  3. Use journalctl to view the boot messages, looking for any errors associated with the config file created earlier e.g. journalctl -D /mnt/var/log/journal | grep cloud
  4. Edit the file /mnt/var/lib/coreos-install/user_data and make any modifications required
  5. Unmount the ISO media and reboot

05 Apr 2014, 01:43

Wake your Linux box

I have one of the original Asus Eee PCs - the 701. I’ve put it to work monitoring the solar array on top of my house.

There’s no point in it running all night when the solar panels are idle, so it may as well sleep too.

I found an article on the MythTV wiki which goes into great detail about auto shutdown / wakeup using ACPI… unfortunately none of it worked for me!

Instead, I found a far simpler method using the rtcwake utility. I’ve set a cronjob that runs the following command (as root) at 6pm:

rtcwake -m mem -s 43200

The Eee PC dutifully goes to sleep and wakes up at 6am, ready to start monitoring again.
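As a sketch, the cron entry might look like the following (the /etc/crontab location, the rtcwake path and the system-crontab format with a user field are assumptions; adjust for your distro):

```
# At 18:00 each day, suspend to RAM and program the RTC to wake
# 43200 seconds (12 hours) later, i.e. at 06:00.
0 18 * * * root /usr/sbin/rtcwake -m mem -s 43200
```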

26 Mar 2014, 06:33

AngularJS + Martini: html5mode

By default AngularJS displays URL paths prefixed with a # symbol. This enables backwards compatibility with browsers that don’t support the HTML5 history API. The AngularJS developer guide explains this in detail.

To remove the # symbol and display more normal-looking URLs requires the use of html5mode in AngularJS. This is enabled via $locationProvider, for example:

app.config(['$routeProvider', '$locationProvider',
       function($routeProvider, $locationProvider) {

               $locationProvider.html5Mode(true);

               $routeProvider.
               when('/signin', {
                       templateUrl: 'components/signin.html',
                       controller: 'SigninCtrl'
               }).
               when('/', {
                       templateUrl: 'components/home.html',
                       controller: 'HomeCtrl'
               }).
               otherwise({
                       redirectTo: '/'
               });
}]);

If the client starts browsing at / then this will work fine. The webserver sends index.html to the client, and AngularJS can load itself up ready to handle the other routes itself. A problem occurs if the client browses directly to a path that the webserver may not know about, for example /signin. When the web server receives that request it will return an HTTP 404 error. What we need to do instead is tell our server to serve the contents of index.html. Note, we are not performing an HTTP redirect here.

I’m using Go to serve my static content, and using the Martini web framework to make life a bit easier.

To have Martini do the necessary rewrite, perform the following setup:

router := martini.NewRouter()
router.NotFound(func(w http.ResponseWriter, r *http.Request) {
        // Only rewrite paths *not* containing filenames
        if path.Ext(r.URL.Path) == "" {
                http.ServeFile(w, r, "public/index.html")
        } else {
                w.WriteHeader(http.StatusNotFound)
                w.Write([]byte("404 page not found"))
        }
})

m := martini.New()
m.Use(martini.Static("public"))
m.Action(router.Handle)
m.Run()

Change public to suit whichever folder your web content lives in.

15 Mar 2014, 06:15

Fedora Linux as a DAW pt1

I recently acquired an Alesis iO2 Portable, 2-channel USB audio interface for hooking up my MIDI keyboard to a soft synth. I had been using my onboard sound card but was finding that latency was unacceptably high.

Linux is a rather unique beast when it comes to audio (as with many things) and to the uninitiated can be quite a bewildering experience. Choosing Linux as the basis of a DAW is a bit of a tradeoff. Compared with Windows, Linux is a higher-performance, more stable platform for audio. Compared with Mac, it’s a cheaper platform (you’re not paying for Mac hardware!). On the downside is the steeper learning curve that comes with the Linux environment - you have to get your hands dirty with Linux, there’s just no two ways about it.


ALSA

ALSA sits on top of the Linux kernel and is the interface through which all other software interacts with the sound hardware. There is no configuration to be done here.

Jack + Pulseaudio

This is where things can start to get a bit confusing. You’ll often have both of these on the same system, both seemingly doing a similar job. Why?

If you just want to run a desktop, play some MP3s etc then Pulseaudio is fine. When you need to do some heavy lifting, Jack comes in offering higher performance and much more fine-grained control over how the audio is routed.

Both can co-exist together, however both cannot use the same sound card simultaneously. Unfortunately, you won’t get a helpful message popping up to tell you there is a conflict (as you might with Windows) - you just won’t get any sound out of one, or both!

For me, I just keep Pulseaudio using my onboard sound card and Jack using the io2. Seems to work quite well.


a2jmidid

That’s the ALSA to Jack MIDI daemon. The hardware presents its MIDI interface to ALSA, but you need a way to get that over to Jack. Apparently Jack can’t do this itself, so that’s where a2jmidid comes in.

Fluidsynth + Qsynth

Fluidsynth is the soft synth and Qsynth is its frontend. These come with a General MIDI library of SF2 sounds. The piano sound isn’t too bad.


Fortunately the iO2 shares the same class driver as many other similar USB audio interfaces, which means no drivers need to be installed. You simply plug it in and you’re ready to go. Running lsusb at the command prompt gave me a list of connected USB devices, including the iO2, simply named Alesis.


  • Install Jack and its GUI tool qjackctl (and their dependencies): yum install jack-audio-connection-kit qjackctl
  • Add your user to the jackuser group: sudo usermod -a -G jackuser <username>

  • Fire up qjackctl - either from the command line or click the icon that should have been created in Gnome.


  • Click Setup

The following screenshot shows the areas I had to modify highlighted in red:

Jack Settings

The Server Prefix of pasuspender -- jackd suspends Pulseaudio while Jack is running.

Frame/Period: contributes to the overall buffer size. A smaller value typically translates to lower latency. Setting this too low can result in pops and crackles in the audio, so some experimentation may be required.

Sample Rate: is self-explanatory. It defaulted to 48000 but 44100 is fine for me.

Interface: it’s best to select your sound card explicitly rather than going with an auto detect option.

The following config starts and stops a2jmidid when qjackctl starts and stops.

Jack Settings 2


  • Install Fluidsynth and its GUI tool qsynth (and their dependencies):

    yum install fluidsynth qsynth
  • Fire up qsynth

  • In Setup select jack as the MIDI driver

Connecting it up

  • Back in Jack, click on Connect
  • Make sure your connections look something like the following screenshots:
    Connect - Audio: Qsynth’s audio output going to the system’s main input
    Connect - MIDI: the iO2’s MIDI input going to Qsynth


For playing piano/synth realtime with acceptable latency, I found that I needed to reduce Jack’s buffer size to 64 frames. Unfortunately, with a stock Linux kernel this low setting resulted in lots of pops and crackles in the audio due to buffer overruns. After installing a realtime kernel from Planet CCRMA I found this improved dramatically.

Further Reading


11 Mar 2014, 07:54

Gorp, sql.NullString and JSON

Using the Gorp package provides the ability for a struct to be populated directly from an SQL backend. Go also provides the ability for this same struct to be populated directly from JSON data. It’s a nice combination but has some gotchas to watch out for.

Recently I struck a problem where a column in my database that was NULL was causing me some grief. I was unsure at first how to handle the database constraints while continuing to be able to unmarshall JSON the way I had been.

To illustrate, I have the following struct which maps to a table in my Postgresql database:

type User struct {
        Id       int          `db:"id"`
        Username string       `db:"username"`
        Passhash string       `db:"passhash"`
        Email    string       `db:"email"`
}
Database columns id, username and passhash are all defined as NOT NULL, however email isn’t.

If I attempt to retrieve a record from the database, I see the following error:

err sql: Scan error on column index 3: unsupported driver -> Scan pair: <nil> -> *string

To rectify this I need to set Email to type sql.NullString. I can now retrieve data from the database without error. However if I attempt to unmarshall some JSON into this struct, I see this error:

json: cannot unmarshal string into Go value of type sql.NullString

What to do?

The answer is to wrap sql.NullString inside a new type, then implement the json.Unmarshaler interface. You might think to try something like:

type NullString sql.NullString

type User struct {
        Id       int          `db:"id"`
        Username string       `db:"username"`
        Passhash string       `db:"passhash"`
        Email    NullString   `db:"email"`
}

func (s *NullString) UnmarshalJSON(data []byte) error {
        s.String = strings.Trim(string(data), `"`)
        s.Valid = true
        return nil
}

However when I attempt to retrieve a record from the database I see a new error:

sql: Scan error on column index 3: unsupported driver -> Scan pair: []uint8 -> *db.NullString

The trick is to change the NullString definition slightly like so:

type NullString struct {
        sql.NullString
}

sql.NullString is acting as an anonymous or embedded field.

11 Mar 2014, 04:05

Angularjs: form data

There are several possible ways to submit form data to a web server:

  • urlencoded parameter string appended to the URL e.g. ?param1=foo&param2=bar
  • urlencoded parameter string contained in the body of the request, and header set to Content-Type: application/x-www-form-urlencoded
  • multipart form data contained in the body of the request, and header set to Content-Type: multipart/form-data; boundary=...
  • JSON encoded string contained in the body of the request, and header set to Content-Type: application/json;charset=utf-8

The last one is the method that AngularJS’s $http service uses by default when POSTing data payloads to the server. To override that you need to be a bit more explicit when calling $http e.g.

$http({
    url: "/signin",
    method: 'POST',
    data: "username=" + encodeURIComponent(username) + "&password=" + encodeURIComponent(password),
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
}).then(function () { console.log("Success"); },
        function () { console.log("Failure"); });

Note the use of the builtin JavaScript function encodeURIComponent to encode each value. Encoding each component separately keeps the = and & separators intact.

04 Mar 2014, 22:44

Eonon D2107 Car Stereo Review

I’ve recently replaced the OEM head unit from my 2007 Subaru Forester with an Eonon D2107 head unit. The D2107 is a 2 DIN unit with an 800x480 pixel touch screen, CD/DVD drive, USB + iPod support and Bluetooth.

There are several comparable models out there, but I ended up going with the D2107 mainly because it has rotary knobs for volume control and navigation.

What follows is certainly not a comprehensive review, but just highlights some things that may be of interest to others.


Installation was relatively straightforward. I purchased an adaptor plug from eBay to convert from the Eonon’s ISO plug to the Subaru’s wiring harness, which saved a lot of soldering, but not all - I still had to solder the illumination wire and the ‘brake’ wire.

I had to widen the hole in the dashboard in order to accommodate the unit’s facia.


Illumination

When you switch the headlights on, the display dims (it dims very slightly and not enough in my opinion) and the buttons are also illuminated by LEDs so you can see them in the dark. This does not work unless you connect the illumination wire (marked ‘ILL’, indicated by green arrow in picture below) to the corresponding wire from your car’s harness. Eonon could have easily included this in the ISO plug but for some reason haven’t, so you have to connect that manually.

Video Display

By default, playing a video or viewing a still image (including from DVD or USB drive) causes the display to show a blue screen with a warning message. This happens even when the car is in park! To get around this, simply join the ‘brake’ wire to the ground wire.


File formats

One surprise for me is that this unit seems to have relatively limited support for media formats. I haven’t found any official documentation that lists what formats are supported but, by trial and error, I can say that it doesn’t support VBR MP3s (at least not at low bitrates), Ogg Vorbis or FLAC. It will play CBR MP3s and MPEG4 (AAC audio).

User Interface

The UI does have some nice aspects and is certainly useable. One big disappointment however is music playback - the interface is pretty bad. They have 800x480 pixels to play with and yet they’ve decided to cram the music list into a small box in the middle of the screen which is hard to read and even harder to navigate. There is no ‘home’ button; you have to iterate up through the directories to get to the root of the filesystem. There is no way to jump to a point in a song, you can only fast forward or rewind which is very cumbersome. Fast forward and rewind doesn’t even work with some file formats!

Another minor annoyance is volume control is too fine-grained. You have to turn the volume knob a whole revolution to get a noticeable volume difference. For me that is too much.

The responsiveness of the interface is slightly laggy but passable.


Bluetooth

The unit supports making and receiving calls, and setup is simple. Unfortunately, it does not support phonebook access - you have to dial the number from the phone handset or manually via the D2107’s touch screen. There are a couple of microphones built in to the front of the unit which seem to do a pretty good job (there is no support for an external microphone). Streaming music from your phone is also supported and works quite well.

13 Feb 2014, 12:54

Evolution vs Creation - same evidence, different interpretation

It was with great interest that I observed the buzz around the recent Creation vs Evolution debate between Ken Ham and Bill Nye. This is the first time I can recall such a debate taking place where the secular media have taken more than a passing interest. The drawcard of course was Bill Nye - the well-liked and respected US television personality and public speaker who, to many ordinary folks, is the approachable face of science. What a coup for Ken Ham and his organization to have gotten Bill to agree to such a debate. Other apologists for Evolution have been vocal in their disapproval of Bill Nye’s participation, claiming that it only gives Creationists and their beliefs credibility in the eyes of the public where none is due.

By all accounts, the debate was hugely popular with millions of viewers across the globe and numerous news articles, blog posts and other commentary issuing forth in the aftermath. For better or for worse, it certainly got tongues wagging and stirred up passionate support on both sides of the fence.

Both men presented well and were quite evenly matched I thought, neither being a clear ‘winner’ as I’ve seen in other similar debates.

One complaint would be that both speakers ‘spoke past’ each other quite a bit. Bill Nye kept referring to Creationism as belonging to Ken Ham, completely failing to recognise or acknowledge the many high profile scientists, both past and present, who are young earth Creationists. Bill Nye also made several claims about Creationism (and Christianity more broadly) that were quite flimsy.

Dr. R. Albert Mohler Jr, president of The Southern Baptist Theological Seminary, wrote a very insightful review of the debate. One quote worthy of note:

Both men were asked if any evidence could ever force them to change their basic understanding. Ham said no, pointing to the authority of Scripture. Nye said that evidence for creation would change his mind. But Nye made clear that he was unconditionally committed to a naturalistic worldview, which would make such evidence impossible.  Neither man is actually willing to allow for any dispositive evidence to change his mind. Both operate in basically closed intellectual systems. The main problem is that Ken Ham knows this to be the case, but Bill Nye apparently does not.

The debate can be seen here:



06 Nov 2013, 16:51


Another truly beautiful trance track by Norwegian artist Malu

Shimmering pacific blue waters

20 Oct 2013, 20:50

Openvswitch and Fedora 19

I’ve just set up my Fedora 19 machine to use Openvswitch. There are many howtos out there, but the ones I read either didn’t cater for RHEL/Fedora or were not reboot safe.

My aim was to create a bridge interface with 2 member interfaces: an interface for my host IP (mgmt0) and the physical NIC (em1). Later, my VMs will also connect to the same virtual switch.

  1. Install Openvswitch: $ yum install openvswitch -y
  2. Start the openvswitch service and enable it at boot time:

    $ systemctl enable openvswitch.service
    $ systemctl start openvswitch.service
  3. Create network interface config for my bridge interface: /etc/sysconfig/network-scripts/ifcfg-ovsbr0

  4. Create a network interface for the host interface: /etc/sysconfig/network-scripts/ifcfg-mgmt0


    Note: the last 2 options were necessary for the interface to get DHCP at boot time

  5. Create network interface config for my physical interface: /etc/sysconfig/network-scripts/ifcfg-em1

  6. Make sure the bridge module is not loaded (this interferes with openvswitch): $ rmmod bridge

  7. Load the network configuration: $ systemctl restart network.service

That’s it! Interface mgmt0 was then able to get an IP address via DHCP, even after a reboot.
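For reference, the three ifcfg files from steps 3-5 might look something like the following. This is a sketch based on the Openvswitch ifcfg extensions for RHEL/Fedora, with illustrative values - the last two options on mgmt0 are the ones needed for DHCP at boot:

```
# /etc/sysconfig/network-scripts/ifcfg-ovsbr0
DEVICE=ovsbr0
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-mgmt0
DEVICE=mgmt0
DEVICETYPE=ovs
TYPE=OVSIntPort
OVS_BRIDGE=ovsbr0
ONBOOT=yes
BOOTPROTO=dhcp
OVSBOOTPROTO=dhcp
OVSDHCPINTERFACES=em1

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=ovsbr0
BOOTPROTO=none
ONBOOT=yes
```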

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 18:03:73:23:50:e7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1a03:73ff:fe23:50e7/64 scope link
       valid_lft forever preferred_lft forever
7: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 2e:b8:2c:b4:6c:d3 brd ff:ff:ff:ff:ff:ff
14: ovsbr0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 18:03:73:23:50:e7 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ec1f:16ff:feaa:67d7/64 scope link
       valid_lft forever preferred_lft forever
18: mgmt0: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 76:4d:2b:ec:a2:38 brd ff:ff:ff:ff:ff:ff
    inet brd scope global mgmt0
       valid_lft forever preferred_lft forever
    inet6 fe80::744d:2bff:feec:a238/64 scope link
       valid_lft forever preferred_lft forever

Something else I do that others may find useful when using DHCP is to define a list of domain names to append to the domain search path used when resolving hosts.

  1. Create the following file /etc/dhcp/dhclient-mgmt0.conf

    interface "mgmt0" {
        append domain-search "domain1.com";
        append domain-search "domain2.com";
        append domain-search "domain3.com";
    }
[ UPDATE: Bugzilla bug 1027440 has been raised to address an issue with DHCP not being applied for OVS internal ports. ]

20 Oct 2013, 14:13

The stupidity of TFL

Transport for London take money from you even when you DON’T use their service. It has happened to me twice now. The first time I went to catch the tube at peak hour, walked down to the platform, watched as 3-4 full trains went past, then walked out again. I paid several pounds for the privilege. On the second occasion, the particular line I wanted to catch was shut for maintenance. There were no signs informing me of this until I’d already passed through the electronic gate. When I did see the sign, I turned on my heel and tagged straight off. Again, I was charged several pounds.

Is this boneheaded incompetence on the part of those who designed the electronic tagging system, or a cynical way of dishonestly extracting more money from honest patrons such as myself? I suspect the latter.

The system knows that I tagged on only moments before AND that I’m tagging off at the SAME STATION… there is simply no good reason for not refunding me on the spot! I’m using an Oyster card, which means that I have an account into which the refund can be automatically deposited. Instead of this, I’m told I have to call a hotline or visit a ticket office (which are scarce) to get a refund. TFL know full well that, of the people who realise they’ve been charged, very few will bother to seek a refund, and it wouldn’t surprise me if their end-of-year accounts have a special category for this very thing - perhaps with the subtitle: ‘executive Christmas party fund’.

On the other hand, that well-known adage comes to mind:

Never attribute to malice that which is adequately explained by stupidity.

15 Sep 2013, 13:38

Dishonest statistics

Reading the latest Harper’s Index I came across these 2 little gems:

  • Year in which the Iowa Supreme Court legalized same-sex marriage : 2009
  • Portion of Iowans who say the decision has had either no impact or a positive impact on their lives : 34

The intention being, of course, that the reader will conclude that a high proportion of the Iowan population agree with changing the definition of marriage, and if so, why has Iowa held out for so long? Well it must be all those conservative lobby groups (the inbred cousins of the British swivel-eyed loons) holding back progress!

To say that somebody’s indifference to something puts them in the same group as those in favour is simply dishonest. If anything, those respondents should be counted in the same group as those in favour of not changing legislation i.e. maintaining the status quo, doing nothing!

The reality is that the number of people who feel strongly that marriage should be redefined in law is a tiny proportion of society. A statistic that I suspect you’ll never see published in a list like the Harper’s Index.

06 Sep 2013, 07:22

Node.js vs Go

Following on from my previous post on Go, I thought I’d write down some thoughts on why I think Go has an edge over its competitors, in particular Node.js.

I’m not so interested in raw performance comparisons, as for many, if not most, real-world use cases these metrics are not particularly relevant. What interests me more are aspects such as developer productivity, code maintainability and scalability.

For several years now, Node.js has been filling the role of a low-barrier-to-entry tool for the rapid development of networking apps. It is well documented and has good cross-platform support. Being built on Javascript has made it accessible and provided a bridge for front end devs to explore the world of backend systems programming. It’s been wildly popular and I for one have thoroughly enjoyed the ride, however I do think Node.js has peaked.

I’ve written several non-trivial apps in Node.js and while I found it fun and interesting, I can’t bring myself to have a great deal of confidence in the finished product. It just feels unsafe. Javascript is a pretty gnarly language, carrying a lot of baggage from years of being pulled this way and that. Combine that with the async programming model, which has a tendency to get you mentally tied up in knots, and it just seems like unexpected behaviour will be the inevitable outcome. Debugging Node.js can be a bitch. It is also very easy to write code that you come back to in six months and struggle to figure out what it’s meant to do. It may be possible to avoid these pitfalls, but the fact that they exist at all, and bite most people at least a few times, is a sign of a more fundamental problem.

On the topic of scalability, Node.js is tied to a single-threaded event loop. This is fine when you have a lot of IO wait and can hop between concurrent tasks, but as soon as you hit a task that chews CPU, all other tasks effectively stop dead. The solution is to spread the tasks across multiple Node.js instances, and that means forking. Setting up and tearing down a new process carries a not-insignificant overhead penalty, and so is better suited to a few long-running processes than many short-lived ones. Load balancing of a shared socket resource - for example, incoming TCP connections - is done by the OS in a round-robin fashion which, while probably fine for most scenarios, is a limitation nonetheless. To summarize: Node.js can be made to scale, however it requires careful planning and potentially quite a bit of extra overhead in terms of lines of code and system resources.

I feel that Go solves these problems while, importantly, remaining accessible and approachable for both newcomers and old-hands looking for a fresh approach.

04 Sep 2013, 19:18


Another killer track to lift your soul: Spellbound by Darren Porter.

I love that swelling choir sound with the bass drum underneath, reminiscent of an Enya track or the Gladiator film score.

04 Sep 2013, 19:17

Flying Blue

I enjoy listening to electronic dance music; more specifically, the sub-genres Uplifting Trance and Progressive Trance. Trance tends to be packaged up into long mix tracks which transition from one song to the next in such a way that, when played at background levels, you’re often not aware of the change - just a continual stream of sound. I often just have it playing this way, not paying much attention to it but, very occasionally, a track will play that grabs my attention.

One such track I stumbled upon recently is Flying Blue by Daniel Kandi & Ferry Tayle.

I hope you enjoy it as much as I did.

28 Aug 2013, 20:35

And the winner is, Go!

I’ve just started playing around with Google’s language, Go (informally Golang). I’ve quickly become a fan and am already looking for opportunities to use it in a real-world application.

I’d heard about Go when it first came out but never really bothered to look into it until now. Since I’ve been learning more about it, I’ve had several of those aha! moments where I’ve felt like an itch has just been scratched; something that’s bugged me about other languages has been implemented the right way in Go.

Go is very C-like, and when programming in Go it’s easy at first to accidentally write C-style code without thinking. Go reaches far beyond C, however - for example, in its use of methods and interfaces. Another big difference is memory management, which is more akin to Java’s with its garbage collection.
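As a small sketch of what that looks like (my own example, with made-up type names): methods can hang off any named type, and a type satisfies an interface simply by having the right methods - there is no ‘implements’ declaration.

```go
package main

import "fmt"

// Celsius is a named type over a plain float - and it can carry methods,
// something C structs and typedefs can't do.
type Celsius float64

func (c Celsius) Fahrenheit() float64 { return float64(c)*9/5 + 32 }

// Describer is satisfied implicitly by any type with a Describe() method.
type Describer interface {
	Describe() string
}

func (c Celsius) Describe() string {
	return fmt.Sprintf("%.1f°C (%.1f°F)", float64(c), c.Fahrenheit())
}

func main() {
	var d Describer = Celsius(100) // no 'implements' keyword needed
	fmt.Println(d.Describe())      // prints: 100.0°C (212.0°F)
}
```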

Features of Go that appeal to me in particular are:

  • Simplicity of the ecosystem. Everything is self-contained: you download a single binary tarball for your platform, unpack it somewhere convenient and just start using it. There is one tool, ‘go’, which does practically everything you need - run, build, install etc. Makefiles? You don’t need ‘em!
  • Extensive standard library that is designed to accommodate modern systems programming. Knowing that the same powerful set of tools is available everywhere, straight out of the box, is a big plus.
  • Statically linked binaries. For systems programming this is fantastic: I can code something up on one box and deploy the resulting binary across my entire estate of Linux distros of varying age and flavour, without having to worry about library dependencies - something that is quite problematic for me with C or Python.
  • Stable, well-thought-out language spec and library API. Having the direction and focus of full-time Google engineers behind the language really shows in this regard. The Go team have released v1 of the spec and have publicly stated that it will stay pretty much as-is for quite some time. This is good news indeed.
  • Concurrency. Systems programming involves connecting lots of IO pieces together, and concurrency is needed to keep all the balls in the air at once. The Go team have baked this into the language where it belongs, via goroutines and channels.
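A tiny sketch of that concurrency model (my own example - the host names are made up): each slow IO task runs in its own goroutine and reports back over a channel, so the caller keeps all the balls in the air at once.

```go
package main

import (
	"fmt"
	"time"
)

// fetch simulates a slow IO operation, e.g. polling a host over the network.
func fetch(host string, out chan<- string) {
	time.Sleep(10 * time.Millisecond) // pretend network latency
	out <- host + ": ok"
}

func main() {
	hosts := []string{"web1", "web2", "db1"}
	out := make(chan string)

	for _, h := range hosts {
		go fetch(h, out) // fire off all checks concurrently
	}

	// Collect results as they arrive; total wall time is roughly one
	// latency, not three, because the waits overlap.
	for range hosts {
		fmt.Println(<-out)
	}
}
```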

26 Jun 2013, 07:10

RHCE Certification

I just became recertified as a ‘Redhat Certified Engineer’. I held the certification up to RHEL5; however, the cert expired once RHEL6 came out. I’ve been doing a senior sysadmin role for quite a few years now, so employment-wise having the certification is perhaps less relevant, but I felt it would be good to test myself to make sure I still ‘had what it takes’.

Due to a non-disclosure agreement every exam taker has to sign, I won’t be discussing details of the exam but just giving some general thoughts.

Due to my prior experience, I didn’t feel an instructor-led course was necessary. I took the course the first time around and would highly recommend it to anyone who is either new to Linux or doesn’t enjoy self-paced study. I used Michael Jang’s book ‘Practice Exams: RHCSA/RHCE’ and found that to be a perfectly adequate preparation. I put aside 2 days to go through all the material and found that to be just enough time - I certainly wouldn’t want any less time to prepare!

It’s absolutely critical that you have a machine on which you can install RHEL from scratch, and re-install if necessary. I set up my laptop with KVM + libvirtd and spawned several VMs for this purpose; however, you could get by with 2 physical machines: one for general Internet access (Google etc) and the other for hacking on. The study book comes with a DVD containing 3 pre-built VMs which you can also use. Also, having an actual copy of RHEL is not necessary - Centos or Scientific Linux will suffice.

Redhat require that you hold a current certification in the lesser RHCSA before being entitled to the RHCE. I opted to do both exams on the same day - RHCSA (EX200) in the morning (2.5hrs) and RHCE (EX300) in the afternoon (2hrs). I found the RHCSA to be quite straightforward; I had finished and reviewed my answers with still about 15mins to spare. The RHCE, on the other hand, was much more pressured time-wise, and I was working right up until the final moment. Leaving the exam room, I was really unsure whether or not I’d passed the RHCE, as I was sure I’d fluffed at least a couple of the questions.

The exam results are not given straight away, and can take up to 3 business days. I got my RHCSA result the same day (100% - woot!!!), but waited another 2 days for the RHCE result: 86%. The result is simply a score out of 300; there is no breakdown of what you got right or wrong, which is a little frustrating.

On a final note, I really think Redhat should reconsider the rules around re-certification for RHCEs. If you’ve held an RHCE before and work as a Linux professional, the requirement for re-certification should be relaxed: the candidate should only have to sit the RHCE exam itself rather than both exams. Ideally, someone who has held an RHCE before should not be required to re-certify before going for the higher certifications, i.e. the RHCA.

21 Apr 2013, 14:45


What does it mean to act professionally in the modern workplace? A lot of the frustrations I have had with my work colleagues over the years boil down to this question. Mostly, it’s just about being a decent human being - not high-flying corporate antics.

What follows is what I’d say to a younger version of me, or to anyone I had the opportunity to mentor:

  • Keep your word
    If you’ve said you’ll do something, then make sure you make every effort to make it happen. If after that, you still aren’t able to deliver - that’s OK, let the person know and offer some kind of compensation or compromise if you can. Depending on the circumstance, they may not take the news well - be prepared for that, and ready to take it on the chin graciously. Whatever you do, DO NOT try to wriggle out of your commitment to that other person without some kind of direct communication with them: no lame excuses, ‘oh, I forgot’ etc. Even if they never mention it again, it’s unlikely they forgot about it and quite likely have you flagged as being an unreliable and/or dishonest person.

  • Speak positively / constructively
    As the old saying goes: ‘if you’ve got nothing good to say, don’t say anything at all’. Take every opportunity to encourage others and speak positively about what’s happening. There will never be a time when everything is bad, so focus on the positives. If there is something that is really bugging you and you need to offload, then find someone you trust and respect and tell them discreetly, but don’t air your dirty laundry for all to see. It’s damaging to morale, and may end up getting you branded as an ungrateful whiner. There will be times when you can’t avoid discussing a topic that you feel very negatively about. In those moments, draw a deep breath, say how you feel as calmly and objectively as possible, and follow up with a suggestion for improvement.

  • Ask for help
    Don’t be someone who’s either too afraid to speak up or too proud to ask someone else for help. People who never ask questions are people who have stopped growing. There will always be someone who knows more than you on a particular topic - make use of them, harvest their knowledge! Don’t be afraid that you’ll look incompetent. Provided that you’re not asking the same question over and over, nobody will resent you for it; most are happy to help and will respect your honesty. Asking questions also makes you more ‘real’ and approachable - you may find people being more open with you about other things, which in turn could create unexpected opportunities.

  • Courtesy and Respect
    What do these two things have to do with professionalism? Everything!

    • look people in the eye and shake their hand warmly
    • thank them for their time (and anything else they’ve given you)
    • don’t interrupt, or jump straight in at the slightest pause. Listen to what they have to say and respond to it in a way that indicates that you’ve heard and understood.
    • if someone interrupts you while you’re in the middle of doing something, don’t ignore them or snap at them. Look at them, say something like: ‘I’m sorry, I can’t speak to you just now. Can I come and see you when I’m finished?’
    • Respect yourself. Don’t be a pushover: set boundaries and tell people firmly and directly when those lines have been crossed. Providing you communicate calmly and objectively, people will usually respond well and back off.
      Treat everyone with a basic level of respect and courtesy - even people you can’t stand. You never know when you may need their help, and you also never know who else is watching. People notice when you’re rude to others. What goes around, comes around.

01 Apr 2013, 20:59

Things that suck about working in IT

Surfing the web, I stumbled upon one of those ‘things that suck about’ articles relating to IT. I found myself nodding and smiling as I went through the list, but it struck me when I reached the end that they’d overlooked, what is for me, the #1 thing: good manners (or the lack thereof).

It never ceases to amaze me how people forget their manners when going to IT support to ask for something, especially when that support is one level removed, i.e. behind an email-based ticketing system. The number of times I’ve busted my gut to help somebody out with something, only to have my reply of ‘all fixed. Please let me know if you have any more trouble’ answered with… silence. How hard would it be for those people to simply click reply and say the (other) magic word: ‘Thank you’?

24 Mar 2013, 17:17

Ubuntu vs Redhat

I prefer Redhat. My work colleagues prefer Ubuntu. At work we have a mix of both distros, with perhaps a little more Ubuntu. This leads to some inevitable friction - nothing major, just little niggles from time to time. One of my colleagues, in addition to being an Ubuntu fan, is also a bit of a Redhat hater, in that he likes to verbalise to me why Redhat sucks and Ubuntu is superior. This set me thinking.

My attitude to the whole thing is one of use whichever tool lets you get the job done. The truth is, for most scenarios either distro is fine and so it usually just comes down to reaching for ‘old faithful’ - it’s a comfort thing. I started out with Redhat, and have never had a good reason to stop using it. By ‘it’ I mean Redhat based distros: RHEL, Centos, Scientific Linux, Fedora. I’m used to the Redhat way of doing things. If my boss told me tomorrow that I could not use Redhat anymore, I’d be a little sad but I’d quickly get over it I’m sure.


What follows are some (very subjective) pros/cons of each:

Redhat Pros

  • Redhat, the company, have been there since the beginning and have lasted until now - they must be doing something right.
  • All the resources of a large corporate behind the distro
  • Known for long-term support, and stable release cycles
  • Innovative cutting edge distro Fedora, on which ideas can be tested and successes fed back to the mainline distro

Redhat Cons

  • Big corporate image which, for some, is a turn off
  • Pure speculation, but like any big corporate, Redhat (as a company) are going to be more inefficient, slower to move and respond than their competition. Being corporate focused could see them getting out of touch with their grassroots

Ubuntu Pros

  • tends to have a sexy ‘hacker’ image: young, fresh, innovative
  • different: comes from different origins, and does things differently to Redhat.  Variety and competition is good

Ubuntu Cons

  • just who is steering the ship there at Canonical? The company has made some odd moves over the years (Amazon ads on the dash, anyone?)

What follows are some specific implementation gripes I have with each distro:

Redhat gripes

  • Systemd: starting/stopping/configuring daemons and services on Fedora just got a whole lot harder with Systemd. I understand the architecture is better, but the useability sucks balls! Please sort this out before pushing to RHEL
  • Selinux: nice idea, but it fails horribly in practice and causes no end of frustration for most mortals. It should be switched off by default

Ubuntu gripes

  • Installation process: stray off the beaten path and things break. What’s the deal with the horrible curses interface? This needs some serious work
  • command-line apt: when installing many packages, I often haven’t a clue where it’s up to. Redhat’s yum is far superior at providing feedback on what’s going on
  • split config files: for example, Ubuntu ships Apache with its config file split into a million pieces linked back to the main file. DO NOT WANT! Give me a monolithic config file any day.
  • root password reset: I had to do this the other day - what a bunch of faffing about! Several of the howto guides simply didn’t work. I don’t recall ever having trouble doing this on Redhat

25 Jan 2013, 21:40

Digital Spring Clean

Today I decided to ‘dedup’ my external hard disk. I tend to dump random stuff on the disk, and over time a lot of doubling up has occurred, so I reckoned quite a bit of room could be freed.

My tool of choice was the creatively named ‘Duplicate Files Finder’, a free/libre open-source application available for Win/Mac/Linux.

The whole process took around 20mins to run, processing 465GB of data / 52296 files.

Once finished, there were 11872 files (22.7%!) that were duplicates, totalling 1.8GB. Unsurprisingly, most of these were small and tended to be internal application data rather than stuff I’d created directly myself.

12 Jan 2013, 06:21

Shells within shells

As a *nix admin I spend most of my time on the command line which means I want to optimise that experience as much as I can. When editing files with vim I find I often need to drop into a shell to do some tasks before resuming the edit, but rather than quit vim I just spawn a subshell. The trouble is I often forget that I’m in a subshell and go to open the same file again. A neat trick I found to stop that happening is as follows - change the shell prompt to clearly indicate that I’m in a subshell:

  • Add the following to your shell startup file (e.g. ~/.bashrc):

    if [ $VIM ]; then
        export PS1='\[\e[1;31m\]**VIM**[\u@\h \W]\$\[\e[0m\] '
    fi

  • exit from your terminal session, then log back in again

Now when I edit a file with Vim and drop to a shell, I get a prompt that looks something like:

**VIM**[me@localhost tmp]$

07 Jan 2013, 09:23

Seoul layover

I just had a one-night layover in Seoul, Korea, on my way back from Christmas holidays with family. I flew with Korean Air; normally they include accommodation in the ticket price, but I was disappointed to learn that during the peak Christmas season a hotel is not included. As I was having to arrange my own accommodation, I thought I’d take the opportunity to stay in Seoul city itself and have a bit of a look around. The airport is approximately 50km from the city centre and the journey takes about 1hr on the train. There is a special line called AREX (Airport Railroad Express), and a one-way ticket to Seoul Station on the ‘commuter’ service costs 4050 Won (£2.34) - there is also an ‘express’ service which costs a bit more. Going back to the airport the next day, I made the mistake of going to ‘Incheon’ rather than ‘Incheon International Airport’ - the two places are nowhere near each other, and so I wasted a bit of time backtracking.

From Seoul Station I walked to my hotel. I was just about to buy another ticket for the subway when a Korean man came over and asked where I was going. ‘Myeong-dong’, I replied, to which he said he thought I should save the money and walk, as it wasn’t far, and he would point me in the right direction. He ended up walking with me, which I was a little uncomfortable about at first, thinking he was up to no good, but I quickly realised that he was very drunk and so not likely to be much of a threat - other than getting us both completely lost! From what I could gather he was a cook, but his boss wasn’t treating him properly and so he had been drinking to make himself feel better. He said his daughter lived in America and that apparently he had nothing better to do on a Saturday night, which made me feel sorry for him. As it turned out he took me right past my hotel, which was quite an amazing coincidence as the hotel was not well sign-posted. At that point I thanked him and he just walked off. The hotel was nice enough, with the room being quite small - just enough room to fit a double bed, TV etc - which, I’m told, is normal.

Seoul at this time of year is FREEZING, with ice and snow everywhere. Apparently it has consistently colder winters than any other city on the same latitude. I had a couple of hours the next morning to walk around the city. I found some breakfast, then walked past the town hall, where there is a public ice skating rink, and then up to Gyeongbokgung palace. There were policemen stationed every few hundred metres, standing in the freezing cold - not sure whether that’s normal or whether something special was happening that day. The people are polite, friendly and bow all the time, which I found very endearing.

14 Dec 2012, 11:36

SOAP Server with PHP

I had a need to build a SOAP server, and after reviewing implementations in different languages decided that PHP using the Zend Framework would best suit my needs:

  • minimal code required
  • supports auto generation of WSDL by performing ‘autodiscovery’ on the PHP code
  • PHP is widely deployed and well suited to web/cgi environments

The examples below are an adaptation of the example found on this post, extended to support the Document/Literal style.

My development environment is PHP 5.3 using Zend Framework 1.12, running on Linux + Apache.

SOAP Server

<?php
/* these are required for it all to work */
$wsdlUrl = 'http://example.com/ws/?wsdl';
$serviceUrl = 'http://example.com/ws/';

class User {
    /** @var string */
    public $name = '';
}

class Response {
    /** @var string */
    public $msg;
}

class MyService {
    /**
     * @param User $param
     * @return Response
     */
    public function GetCoupons($param) {
        $name = $param->param->name;
        $result = new Response();
        $result->msg = "Hello $name!";
        return array('GetCouponsResult'=>$result);
    }
}

// Generate WSDL relevant to code
if (isset($_GET['wsdl'])) {
    $autodiscover = new Zend_Soap_AutoDiscover('Zend_Soap_Wsdl_Strategy_ArrayOfTypeComplex');
    $autodiscover->setOperationBodyStyle(array('use' => 'literal'));
    $autodiscover->setBindingStyle(array('style' => 'document'));
    $autodiscover->setUri($serviceUrl);
    $autodiscover->setClass('MyService');
    $autodiscover->handle();
} else {
    $server = new Zend_Soap_Server($wsdlUrl, array('soap_version'=>SOAP_1_2));
    $server->setObject(new MyService());
    $server->handle();
}

Change $wsdlUrl & $serviceUrl to suit. $serviceUrl is where the client will be pointed, so unless you’re running the client on the same host as the server, this should be changed - otherwise the client will throw a connection error.

The WSDL autodiscovery takes the $param input variable and wraps that around ‘name’, thus the odd looking reference: $param->param->name . (I’d like to know if there’s a better way to do this)

SOAP Client

This client does not require Zend framework, so I’m just using PHP’s base SOAP library

<?php
ini_set("soap.wsdl_cache_enabled", 0);
$wsdlUrl = "http://example.com/ws/?wsdl";
$client = new SoapClient($wsdlUrl, array(
                        'trace' => 1,
                        'soap_version' => SOAP_1_2
));
$user = array('name'=>'Bob');
$result = $client->GetCoupons(array('param'=>$user));
echo "<pre>\n";
print_r($result);
echo "</pre>\n";

Running the above client code should result in the following output:

stdClass Object
(
    [GetCouponsResult] => stdClass Object
        (
            [msg] => Hello Bob!
        )
)

Final thoughts

I had a lot of trouble with PHP insisting on using a cached copy of the WSDL, so I would highly recommend that you disable WSDL caching (set soap.wsdl_cache_enabled = 0 in php.ini) and remove any cache files from disk (rm /tmp/wsdl-*) - do this on both the server and the client.

08 Dec 2012, 20:44

Shoei XR-1100 Motorcycle Helmet

Last week I purchased the Shoei XR-1100 helmet to replace my old helmet: a 4-month old Arai Axces I which had been stolen (b@$tards took a bolt cutter to my garage door). Rather than go and get the same helmet again, I thought I’d try something different, having never owned a Shoei before.

The Axces and the XR-1100 are both the entry-level helmet for their respective brands, and have similar specs - both made in Japan, both fibreglass composites - and comparable pricing (the Shoei was £70 more expensive).

So far I’m loving the Shoei and feel it is the better helmet of the two for the following reasons:

  • Comfortable fit: I have a big head and need an XL. The Arai XL was a good fit all round except for slight pressure at the top of my forehead which would start to hurt a little after an hour or so. The Shoei in XL doesn’t hurt my forehead, but was a bit loose around my cheeks. That was easily fixed with thicker cheek liners which were fitted at the shop for no extra cost!
  • Visor: The visor on the Shoei is easier to take on/off than the Arai (which can be quite tricky), and also has a nifty locking mechanism which, when in the closed position, brings it in firmly against the helmet creating a very good seal. Both visors are able to take Pinlock inserts - Shoei included one clear insert, Arai did not include any.
  • Style: In my opinion the Shoei has a sleeker look to it. I’m told this is all in the name of science, but heck it looks sexy too!
  • Lightweight: The Shoei feels incredibly light. The Arai certainly didn’t feel heavy, but it didn’t strike me as being particularly light either in the way the Shoei does.

In fairness to Arai, they have since brought out the Axces II, which I have not tried, and it may address these issues.

24 Nov 2012, 09:11

mysterious high load

We had an issue recently where a server started reporting high load for no apparent reason. Running top on the server revealed that there was no process hogging CPU. The only other thing it could be was IO wait (the kernel waiting for an IO read/write operation to complete), and this most commonly relates to disk operations. When IO is slow, processes take longer to run and tend to pile up on each other, causing the overall load to rise.

Why had IO suddenly become slow? We mount filesystems via NFS from a couple of Netapp devices, so we naturally went looking at the Netapp to see whether something was up there… nothing abnormal found.

Eventually we turned to the network: we ran ping tests and checked speed/duplex on the NICs - all looked fine. Then somebody had the bright idea of running tshark to analyse the traffic on the interface that connects to the storage:

 tshark -i eth1 -R tcp.analysis.retransmission

Which revealed the following:

6.846868 ->  TCP [TCP Retransmission] [TCP segment of a reassembled PDU]  
6.847041 ->  TCP [TCP Retransmission] [TCP segment of a reassembled PDU]  
6.851126 ->  NFS [TCP Fast Retransmission] V3 GETATTR Reply (Call In 1916) Regular File mode:0644 uid:27973 gid:100  
6.862840 ->  NFS [TCP Fast Retransmission] V3 ACCESS Reply (Call In 2011) ; V3 ACCESS Reply (Call In 2013)  
6.863705 ->  NFS [TCP Fast Retransmission] V3 READ Reply (Call In 1944) Len:4096

This showed us that TCP retransmissions were happening somewhere between our server and the netapp. A simple ping test hadn’t picked this up as the problem was only happening when a significant volume of traffic was passing through. Tests run from other parts of the network to/from the storage device had also failed to pick up any issue as that traffic was taking a different path.

It turned out that one of the Ethernet switches in between was trying to balance traffic (at layer 2) across several uplinks and failing (due to a software bug, we suspect).

Lessons Learnt

  • storage protocols such as NFS will not necessarily report problems in the underlying network (TCP retransmissions are transparent to higher-level protocols)
  • don’t stop at ping; use more advanced tools to get the complete picture
  • take extra care when designing and implementing the network path between servers and storage
  • make sure your network guys are monitoring the layer-2 network properly! :)

23 Nov 2012, 18:13

How to lock xscreensaver on suspend

I’m running Fedora 17 with Gnome3 and xscreensaver. One small problem I had was that resuming from suspend did not automatically lock the screen. Thanks to a helpful post on askubuntu.com, I came up with this solution:

  • Create file /etc/pm/sleep.d/10-xscreensaver containing the following:

    #!/bin/sh
    case "$1" in
        suspend|hibernate)
            export DISPLAY=":0"
            su myuser -c "(xscreensaver-command -lock)"
            ;;
    esac

  • Replace ‘myuser’ with your username. You may also need to adjust the DISPLAY variable depending on your screen setup (‘:0’ should work for most setups)
  • Make the script executable: chmod +x /etc/pm/sleep.d/10-xscreensaver

On my Dell Optiplex 790 workstation, I can now hit the power button and the computer goes to sleep. Upon resume, I have a locked screen.

09 May 2011, 14:09

Copying files a different way...

I’ve just been sorting through my digital music collection and found a bunch of mp3 files mixed together with other formats. As my car stereo only plays mp3 files, I wanted to copy all the mp3 files while maintaining the directory hierarchy (album/artist/song). The following allows me to achieve this:

cd /inputdir; find . -name '*.mp3' -print0 | xargs -0 tar -cvf - | (cd /outputdir; tar -xvf - )