10 Feb 2018, 09:35

Wireguard VPN

I recently stumbled upon what I think may be the holy grail - a VPN method that is simple to configure, high performance, and (so I’m told) highly secure. Until now my experience of using VPNs was that you could choose any two of the above, but never expect to get all three!

The ease with which you can get this up and running is quite astonishing. The documentation is quite good, but it still has a few holes which hopefully will be covered here, adapted from https://www.wireguard.com/install/

Wireguard is conceptually quite different to other VPN products in that there isn’t a daemon that runs - it all happens in the Linux kernel. There also isn’t any state: no concept of a tunnel being ‘up’ or ‘down’ - just a standard network interface with configuration applied to it - not dissimilar to a wifi interface.

NOTE: Wireguard is not yet merged into the mainline kernel, which means compiling the required kernel module from source. Fortunately, thanks to DKMS, this step is painless. From what I understand, mainline kernel support is imminent.


Both ends of the VPN described here are running stock CentOS 7:

$ curl -Lo /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo
$ yum install epel-release
$ yum install wireguard-dkms wireguard-tools


Configuring wireguard can be done from the command line with the ip (from the iproute package) and wg (from the wireguard package) commands. However, I recommend not doing that; instead use the included systemd service file, which reads from a config file described below.

Each endpoint has a single config file, similar to this: /etc/wireguard/wg0.conf

[Interface]
ListenPort = 51820
PrivateKey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Address = <this endpoint's tunnel address>

[Peer]
PublicKey = yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
AllowedIPs = <IPs/subnets to route via the tunnel>
Endpoint = x.x.x.x:51820
  • Endpoint = x.x.x.x:51820 corresponds with the public IP and listening port of your peer.
  • AllowedIPs is set to any IPs or subnets that should be routed via this tunnel.
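As a concrete illustration, a matching pair of configs might look like the following sketch. The addresses here are hypothetical RFC1918 examples of my own, not values from the original setup:

```
# A-end /etc/wireguard/wg0.conf
[Interface]
ListenPort = 51820
PrivateKey = <A-end private key>
Address = 10.9.0.1/24

[Peer]
PublicKey = <B-end public key>
AllowedIPs = 10.9.0.2/32, 192.168.2.0/24   # B-end tunnel IP + B-end LAN
Endpoint = <B-end public IP>:51820

# B-end /etc/wireguard/wg0.conf
[Interface]
ListenPort = 51820
PrivateKey = <B-end private key>
Address = 10.9.0.2/24

[Peer]
PublicKey = <A-end public key>
AllowedIPs = 10.9.0.1/32, 192.168.1.0/24   # A-end tunnel IP + A-end LAN
Endpoint = <A-end public IP>:51820
```

Each end lists the other’s public key, and AllowedIPs on each side names the networks that live behind the opposite end.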

Ensure IPv4 forwarding is enabled. This is set in /etc/sysctl.conf with: net.ipv4.ip_forward = 1 (apply it with sysctl -p)


The public/private keys are generated using the wg utility. wg genkey generates the private key string. This string can then be piped into wg pubkey to generate a corresponding public key, e.g. as an all-in-one command:

$ wg genkey | tee /dev/tty | wg pubkey

The private key (first line) goes into the local config file, and the public key goes into the peer’s config file.


Systemd can bring the VPN up/down using the included wg-quick service file. To set the VPN to come up on boot enable the service:

systemctl enable wg-quick@wg0

Now, start/stop the service like so:

systemctl start wg-quick@wg0
systemctl stop wg-quick@wg0

This adds the wg0 interface, and inserts routes corresponding with the list of allowed IPs specified in the config file.



The VPN itself uses a single UDP port. For the VPN tunnel to connect, both ends must be able to reach the other on UDP port 51820. The port number is configurable via ListenPort.


  • the tunnel uses one address for the A-end and one for the B-end (the Address line in each config)
  • routes for each end’s network(s) go via the VPN interface wg0


Having done all the above, if things don’t appear to be working out, here are some things to look at first:

  • Check systemd log for the wg-quick@wg0 service: journalctl -u wg-quick@wg0
  • Check the wg0 interface is up with ip addr:
6: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 8921 qdisc noqueue state UNKNOWN qlen 1
    inet scope global wg0
       valid_lft forever preferred_lft forever
    inet6 fe80::5e6:1a69:4213:44ba/64 scope link flags 800 
       valid_lft forever preferred_lft forever
  • Attempt to ping the address at the other end of the tunnel, e.g. ping <peer tunnel address>
  • Run tcpdump on each endpoint to see what traffic is coming in/out of the ethernet interfaces (eth0). Encrypted VPN traffic will show up as UDP packets on port 51820.
  • Run tcpdump on each endpoint’s wireguard interface (wg0) to see what’s passing over the tunnel itself.

Provided that the wireguard config is correct - keys match up, and allowed IPs are set - then you’re dealing with a routing or firewalling issue somewhere in between.

02 Feb 2018, 21:40

OpenWRT Geofencing

Geofencing refers to allowing or blocking a connection based on country of origin. Unfortunately, OpenWRT / LEDE does not support geofencing out of the box.

The way I’ve worked around this is described below.

Shell script

I created a shell script /root/bin/ipset-update.sh to pull Maxmind Geolite2 database periodically, and use that to populate an ipset set which I can then reference from iptables.

The shell script is based on this one with a few tweaks for OpenWRT: change bash to ash, and change the paths of some of the utilities (ipset, curl, etc.)

The shell script takes 2 arguments: the ipset name to create/update, and the country name to pull out of the Geolite database.

If I call it like this: /root/bin/ipset-update.sh australia Australia then the resulting ipsets are australia4 and australia6 (ipv4 + ipv6 respectively)
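The core of that script can be sketched as follows. This is a hypothetical simplification of my own - the real GeoLite2 download is a zip of several CSVs, and country matching involves a separate locations file - but it shows the shape of the transformation: CSV in, `ipset restore` script out.

```shell
# Simplified stand-in for ipset-update.sh: given CSV rows of
# "network,country", emit an `ipset restore` script for one country.
cat > /tmp/geo-sample.csv <<'EOF'
1.0.0.0/24,Australia
1.0.1.0/24,China
1.0.4.0/22,Australia
EOF

awk -F, -v set=australia4 -v country=Australia '
  BEGIN { printf "create %s hash:net family inet\n", set }
  $2 == country { printf "add %s %s\n", set, $1 }
' /tmp/geo-sample.csv > /tmp/australia4.restore

cat /tmp/australia4.restore
```

On the router the generated file would then be loaded with ipset restore < /tmp/australia4.restore.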


The Maxmind database changes over time, so it’s important to update it on a periodic basis. I installed a cronjob as root user to run the script once per week:

0 0 * * sun     /root/bin/ipset-update.sh australia Australia


From the command line, you can use the new ipset like so:

iptables -A INPUT -p tcp -m tcp --dport 22 -m set --match-set australia4 src -j ACCEPT
ip6tables -A INPUT -p tcp -m tcp --dport 22 -m set --match-set australia6 src -j ACCEPT

I prefer to use the Luci interface for configuring the firewall. While it doesn’t support setting an ipset target directly it does allow you to specify arbitrary iptables switches. When creating a port forwarding or traffic rule that requires geofencing, I put the following in the Extra arguments: section: -m set --match-set australia4 src.

As the interface only allows one ipset to be specified per rule, you can either create multiple rules for multiple countries, or create one ipset that combines multiple countries into a group.
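For the combined approach, an `ipset restore` file can define one set and load several countries’ ranges into it. A sketch (the set name and CIDRs below are made-up examples, not real GeoLite2 data):

```
create oceania4 hash:net family inet
add oceania4 1.0.0.0/24
add oceania4 103.2.112.0/22
add oceania4 202.36.0.0/14
```

Loading that with ipset restore < file gives a single set that one rule can match with -m set --match-set oceania4 src.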

Survive reboot

At boot time, rules that use ipsets will fail to load because the ipsets do not exist at that point. To work around that I put the following lines into /etc/rc.local:

/root/bin/ipset-update.sh australia Australia
/etc/init.d/firewall restart

21 Dec 2017, 21:40

Upload to S3 from URL

I recently had the need to transfer a large file from a remote host to S3, but had insufficient local storage to make a temporary local copy. Fortunately the AWS command line tools allow for this by reading the piped output of curl as follows:

curl https://remote-server/file | aws s3 cp - s3://mybucket/file
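The key property of this pattern is that the payload never lands on local disk. That can be illustrated with stand-in commands (dd playing the role of curl, and wc -c playing the role of aws s3 cp -):

```shell
# Generate 10 MB of data and pass it straight through the pipe;
# nothing is written to local storage along the way.
bytes=$(dd if=/dev/zero bs=1M count=10 2>/dev/null | wc -c)
echo "piped $bytes bytes without a temporary file"
```

For very large streams, aws s3 cp also accepts an --expected-size hint so it can size the multipart upload parts appropriately.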

10 May 2017, 21:40

Building LEDE / Openwrt for x86

EDIT: 2018-03-12, LEDE and Openwrt have merged. References to LEDE here can be substituted for Openwrt.

I had a need to run LEDE on x86 hardware. Building a custom LEDE seemed a bit daunting at first, but turned out to be quite straightforward. The build described here is tailored for a Qotom J1900 mini PC.

Building the custom image

I chose to build the LEDE x86_64 image within a Docker container like so:

$ docker pull centos
$ docker run -it centos /bin/bash
<container>$ cd root/
<container>$ yum install wget make gcc openssl which xz perl zlib-static ncurses-devel perl-Thread-Queue.noarch gcc-c++ git file unzip bzip2
<container>$ wget https://downloads.lede-project.org/releases/17.01.1/targets/x86/64/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ tar -xvJf lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ cd lede-imagebuilder-17.01.1-x86-64.Linux-x86_64

Build the image. I want USB keyboard support, and don’t need e1000 or realtek drivers

<container>$ make image packages="-kmod-e1000e -kmod-e1000 -kmod-r8169 kmod-usb-hid kmod-usb3 kmod-usb2"

The new images are located under ./bin/targets/x86/64 inside the build environment

# ls -l bin/targets/x86/64
total 35852
-rw-r--r--. 1 root root  5587318 May  9 20:36 lede-17.01.1-x86-64-combined-ext4.img.gz
-rw-r--r--. 1 root root 19466174 May  9 20:36 lede-17.01.1-x86-64-combined-squashfs.img
-rw-r--r--. 1 root root  2439806 May  9 20:36 lede-17.01.1-x86-64-generic-rootfs.tar.gz
-rw-r--r--. 1 root root     1968 May  9 20:36 lede-17.01.1-x86-64-generic.manifest
-rw-r--r--. 1 root root  2711691 May  9 20:36 lede-17.01.1-x86-64-rootfs-ext4.img.gz
-rw-r--r--. 1 root root  2164670 May  9 20:36 lede-17.01.1-x86-64-rootfs-squashfs.img
-rw-r--r--. 1  106  111  2620880 Apr 17 17:53 lede-17.01.1-x86-64-vmlinuz
-rw-r--r--. 1 root root      731 May  9 20:36 sha256sums

We just need the combined-ext4 image. Copy that out from the docker container to a USB flash drive:

$ docker cp <container id>:/root/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64/bin/targets/x86/64/lede-17.01.1-x86-64-combined-ext4.img.gz /mnt

Installing the custom image

  • boot the mini PC using any Linux rescue disk. (I used SystemRescueCD)
  • insert a second USB flash disk containing the image created above
  • write the image to the mini PC internal drive:
$ mount /dev/sdc1 /mnt; cd /mnt
$ gunzip lede-17.01.1-x86-64-combined-ext4.img.gz
$ dd if=lede-17.01.1-x86-64-combined-ext4.img of=/dev/sda
  • (optionally) resize the LEDE data partition to fill the entire size of the internal storage
    • use fdisk/parted to remove second partition (/dev/sda2)
    • re-add second partition with the same starting block as before, but make the end block the last block on the disk
    • save the new partition table
    • run e2fsck -f /dev/sda2 followed by resize2fs /dev/sda2
  • reboot the device
  • access the console via the VGA console, or telnet to its IP (no root password is set!)

06 May 2017, 21:40

Open source router

I recently went through the exercise of setting up a gateway router for one of my customers. The choices I had to make were two-fold: hardware and software.


Hardware

I wanted to find the sweet spot between affordability, processing power and reliability. I could pick up an old desktop PC for $0, which would be more than adequate in terms of performance, however I wasn’t confident it would last the distance running 24x7 in a non-air-conditioned storage room!

A low power ARM chip on a consumer router (that would support OpenWRT) was my next thought, however these tend to be a little underpowered for what I needed, not to mention very limited in terms of RAM + persistent storage.

I ended up getting a ‘mini pc’ with the following properties:

  • fan-less (heat dissipation via heat sink & aluminium chassis)
  • low power consumption quad-core x86-64 CPU
  • 2 GB RAM, 16GB SSD flash (expandable)
  • 4x 1Gb Ethernet ports

It cost AUD$250 including delivery from AliExpress. One thing this unit lacks, which others may want, is hardware crypto offload (AES-NI).


Software

This was a harder choice in a lot of ways - there are so many options! While the hardware I have is capable of running pretty much any Linux or BSD distro, I decided at the outset that I really needed a purpose-built firewall distro that includes a web GUI. I reviewed the following:


pfSense

https://www.pfsense.org/ · FreeBSD based

Being possibly the best known open source firewall distro available, I felt obliged to check it out. Certainly very slick, and years of constant refinement certainly shine through.

At the end of the day, I feel a certain unease about the future direction of pfSense. The open-source community does seem to be taking a back seat as the public face becomes more corporate friendly.


OPNsense

https://opnsense.org/ · FreeBSD based

OPNsense is a fork of pfSense and as such is very similar in many ways. Something that really impressed me about the project is the enthusiasm and effort being put in by the core developers. I submitted a couple of bug reports to their Github repo and both were fixed very quickly. The UI has been completely reworked, and is equally slick and easy to use as pfSense’s, while possibly lacking some of the bells and whistles.

Definitely one to keep an eye on.


IPFire

http://www.ipfire.org/ · Linux based

I’m afraid I couldn’t spare much time for this distro. The web UI is looking very dated. I’m sure it does the job, but without a nicer UI experience, I may as well just stick to the command line.


OpenWRT

https://openwrt.org/ · Linux based

OpenWRT is designed for low-end, embedded hardware, and what they’ve managed to achieve with such limited hardware resources is astonishing! Sadly x86 support is lacking - the prebuilt image I used didn’t detect all CPU cores or all the available RAM - so it was crossed off the list pretty quickly.

If you’re after a distro for your wifi/modem/router device, then OpenWRT fits the bill nicely. A word of warning however, the documentation is atrocious! But hey, I’ll take what I can get.

LEDE Project

https://lede-project.org/ · Linux based

LEDE is a fork of OpenWRT. As such, it’s a younger project which seems to have a more vibrant community than its parent. I had originally passed it over, assuming it would be more or less identical to OpenWRT given how recently it forked. Somebody pointed me back to it citing better x86 support, so I thought I’d give it a spin. I’m glad I did, as this is what I’ve ended up using for my install!


I ended up going with LEDE for these reasons:

  • runs Linux. I’m simply more comfortable with Linux on the command line, which gives me more confidence when things go wrong.
  • is an extremely lightweight distro out of the box that offers advanced functionality via an easy to use packaging system
  • has a GUI that strikes a good balance between usability, feature set and simplicity
  • supports my x86 hardware (unlike OpenWRT)

Update December 2017

I’ve been using LEDE for 6 months and overall very happy with it. There are a couple of issues I’ve encountered worth mentioning:

  • I found the firewall configuration confusing where it talks about ‘zone forwardings’ vs iptables ‘forward’ chain. I wrote this Stack Exchange post to clarify (and remind myself how it works!)
  • upgrading between LEDE releases is far from fool-proof. The upgrade process requires you to upgrade the running system in place. Upon reboot, you’re left holding your breath wondering if it’s actually going to boot! Not something I’d ever want to attempt remotely. Better approaches I’ve seen allow you to load the new software version into a secondary partition that you then flag as being the next place to boot from (Ubiquiti works this way).

22 Apr 2017, 21:40

Docker and IPTables on a public host

NOTE: This post applies to Docker < 17.06

By default docker leaves network ports wide open to the world. It is up to you as the sysadmin to lock these down. Ideally you would have a firewall somewhere upstream between your host and the Internet where you can lock down access. However, in a lot of cases you have to do the firewalling on the same host that runs docker. Unfortunately, Docker makes it tricky to create custom iptables rules that take precedence over the allow-all ruleset that Docker introduces. There is a pull request that promises to help in this regard.

Until the fix is available, [EDIT: fixed in 17.06] the way I work around this problem is as follows:

Create a systemd service that runs my custom rules after the Docker service starts/restarts - /etc/systemd/system/docker-firewall.service:

[Unit]
Description=Supplementary Docker Firewall Rules
After=docker.service
PartOf=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/docker-firewall.sh

[Install]
WantedBy=docker.service

The file /usr/local/bin/docker-firewall.sh is a shell script which simply inserts IPTables rules at the top of the ‘DOCKER’ chain:

#!/bin/sh
# called by docker-firewall.service
# Work around for controlling access to docker ports until PR#1675 is merged
# https://github.com/docker/libnetwork/pull/1675

IPTABLES=/usr/sbin/iptables
CHAIN=DOCKER

# inserted in reverse order: each -I lands at the top of the chain
$IPTABLES -I $CHAIN -i eth0 -j RETURN
$IPTABLES -I $CHAIN -i eth0 -s <trusted IP> -j ACCEPT
$IPTABLES -I $CHAIN -i eth0 -p tcp -m multiport --dport <port list> -j ACCEPT


  • you should modify these rules to suit. Put only those ports that you want open to the world in the <port list>, and any trusted IPs that should have access to all ports in the <trusted IP> field.
  • the rules are specified in reverse order, as they are being inserted at the top of the chain. You could instead specify the insert index (e.g. 1, 2, 3).
  • make sure the shell script is executable (chmod +x).
  • I’ve chosen to RETURN Internet traffic that doesn’t match the first 2 rules, but you may choose to simply DROP that traffic there. Either way, the last rule must take final action on Internet traffic to ensure that subsequent Docker rules don’t allow it in!!
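The reverse-order point can be illustrated with a plain-text stand-in for the DOCKER chain (the rule text below is descriptive, not real iptables syntax): each -I style insert lands at position 1 and pushes earlier inserts down.

```shell
# simulate `iptables -I DOCKER <rule>` by prepending lines to a variable
chain="existing docker-generated ACCEPT rules"
for rule in "RETURN non-matching Internet traffic" \
            "ACCEPT from <trusted IP>" \
            "ACCEPT tcp dports <port list>"; do
  chain="$rule
$chain"
done
printf '%s\n' "$chain"
```

Inserting the RETURN first means it ends up below both ACCEPTs: trusted and listed-port traffic is accepted, and everything else from the Internet is bounced out of the chain before Docker’s own allow rules can see it.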

Once those files have been created, you can enable the service with systemctl enable docker-firewall. Now when you restart Docker (or upon reboot), this service will run afterwards and you’ll see your custom rules appear at the top of the DOCKER chain in iptables.

19 Feb 2017, 21:40

Git over HTTP with Apache

I had a requirement at work recently for our Git repo to be reachable via HTTP. I looked at Gitlab, however I came to the conclusion that it was probably overkill for our situation. I went with the following setup instead, which can be run on the most minimal VM, e.g. an AWS EC2 Nano instance.

The instructions were intended for CentOS 7, however they should work without much modification on other distros.

  • Install required packages:
yum install httpd git mod_ssl
  • Ensure that ‘mod_cgid’ + ‘mod_alias’ are loaded in your Apache config
  • Append the following config to /etc/httpd/conf/httpd.conf to force SSL by redirecting non-SSL to SSL:
<VirtualHost *:80>
	ServerName gitweb.example.com
	Redirect permanent / https://gitweb.example.com/
</VirtualHost>
  • Modify /etc/httpd/conf.d/ssl.conf and add this to the default vhost:
SetEnv GIT_PROJECT_ROOT /data/git
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

<LocationMatch "^/git/">
        Options +ExecCGI
        AuthType Basic
        AuthName "git repository"
        AuthUserFile /data/git/htpasswd.git
        Require valid-user
</LocationMatch>
  • Create git repo and give Apache rw permissions:
mkdir /data/git
cd /data/git; git init --bare myrepo
mv myrepo/hooks/post-update.sample myrepo/hooks/post-update
chown apache:apache /data/git -R

File post-update should now contain:

#!/bin/sh
exec git update-server-info
  • Create htpasswd file to protect repo:
htpasswd -c /data/git/htpasswd.git dev
  • Update SELinux to allow Apache rw access to repos:
semanage fcontext -a -t httpd_sys_rw_content_t "/data/git(/.*)?"
restorecon -v /data -R
  • Start Apache:
systemctl start httpd
systemctl enable httpd
  • Push to the repo from your client as follows:
git push https://gitweb.example.com/git/myrepo -f --mirror
  • Pull from repo to your client as follows:
git pull https://dev@gitweb.example.com/git/myrepo master
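To avoid retyping the htpasswd credentials on every push/pull, git’s HTTP transport can read them from ~/.netrc. A sketch, assuming the ‘dev’ user created above:

```
machine gitweb.example.com
login dev
password <password set with htpasswd>
```

Remember to chmod 600 ~/.netrc so the password isn’t world-readable.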

11 May 2015, 19:18

OpenWRT and IPv6

I just configured my home network to use IPv6. My router runs OpenWRT ‘Barrier Breaker’ which supports IPv6, so it was just a matter of switching on and configuring the functionality.

Unfortunately, my ISP does not provide native IPv6 so I’m using an IPv6 tunnel courtesy of Hurricane Electric Tunnelbroker service.

Configuring my router

The 6in4 tunnel

Hurricane Electric provide a handy auto-generated config snippet specifically for OpenWRT, so it was a simple matter of:

  • installing the 6in4 package - opkg install 6in4
  • updating my /etc/config/network file with the supplied config
  • restarting the network with /etc/init.d/network restart

For reference, my network config looks something like this:

config interface 'wan6'
	option proto 6in4
	option peeraddr  ''
	option ip6addr   '2001:470:aaaa:467::2/64'
	option ip6prefix '2001:470:bbbb:467::/64'
	option tunnelid  '12341234'
	option username  'aaaabbbb'
	option updatekey 'xxxxxxxxxxxx'
	option dns '2001:470:20::2'
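One thing worth checking is that the 6in4 tunnel traffic (IP protocol 41) can reach the router. If your WAN firewall is restrictive, a rule along these lines in /etc/config/firewall would allow it (the rule name and the HE server placeholder are mine, not from the original config):

```
config rule
	option name 'Allow-6in4'
	option src 'wan'
	option family 'ipv4'
	option proto '41'
	option src_ip '<HE tunnel server IPv4>'
	option target 'ACCEPT'
```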

LAN interface

The next important step is to decide how you want IP addressing to work on your LAN. IPv6 address assignment can be done in 3 ways:

RA Only

In this mode clients get all their address info using NDP (neighbour discovery protocol). Thanks to RFC 6106, RAs can also contain DNS resolver information, so if that’s all you need, a DHCPv6 server may not be required.

RA with DHCPv6 (default mode for OpenWRT)

In this mode clients get their primary address info via the RA, but are told to try DHCPv6 for additional config.

NOTE: If you use this mode, then you need to ensure you have a working DHCPv6 server as well. Clients will attempt to solicit a DHCP address, and if the server is not running or not configured correctly then the client won’t configure properly. It seems obvious now, but this did cause me some confusion at first when my client was failing to configure due to my DHCPv6 server being disabled.

DHCPv6 only

In this mode clients are told to get all their address config from the DHCP server.

OpenWRT ‘Barrier Breaker’ uses the odhcpd process to manage both RA (router advertisements) and DHCPv6. It takes its config from /etc/config/dhcp. By default, my ‘lan’ config looked like this:

config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'server'
	option ra 'server'
	option ra_management '1'

The address assignment mode is specified by the ra_management setting:

  • 0: RA only
  • 1: RA with DHCP
  • 2: DHCP only

I have no need for a DHCPv6 server on my LAN so I set option ra_management '0' and disabled the DHCPv6 server with option dhcpv6 'disabled'
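With those two changes applied, the ‘lan’ section from the default config above ends up looking like this:

```
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'disabled'
	option ra 'server'
	option ra_management '0'
```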

Configuring my client

I run Fedora 21 Linux on my desktop which supports IPv6 out of the box. NetworkManager can be configured in ‘Automatic’ or ‘Automatic, DHCP only’ modes. I just had to ensure that it was set to ‘Automatic’ and everything just worked.

Something to keep in mind with Linux clients is that, by default, router advertisements will be ignored on any interface that is forwarding traffic (routing). If you’re running Docker, then this is relevant to you! See this post for more information.
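If you do need a forwarding Linux client to keep accepting RAs, the kernel supports accept_ra=2, which accepts router advertisements even when forwarding is enabled. A sketch of the sysctl fragment (the interface name eth0 is an assumption):

```
# /etc/sysctl.conf - accept RAs on eth0 even when forwarding is enabled
net.ipv6.conf.eth0.accept_ra = 2
```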

22 Jun 2014, 09:29

CoreOS install to a VPS

I’ve just spun up my first install of CoreOS. I found the process a little confusing at times, as the doco isn’t terribly clear in places. CoreOS is a work in progress, so the doco will improve, I’m sure. In the meantime, hopefully this post will be of some help to others.

The host machine I used was a standard VPS from my hosting provider running on top of KVM. My hosting provider provides a console facility using NoVNC and the ability to attach bootable ISO media.

ISO Boot

Using the supplied ISO from CoreOS, boot the machine. You will end up at a shell prompt, logged in as user core. At this point, you’re simply running the LiveCD and nothing has been installed to disk yet (something the doco does not make clear!)

In my case the network had not yet been configured, so I needed to do that manually as follows:

sudo ifconfig <network port> <ip address> netmask <netmask>
sudo route add default gw <default gateway IP>

Add your nameserver IP to /etc/resolv.conf. I used Google’s, e.g. nameserver 8.8.8.8

Config file

Once network is configured, the next thing to do is grab a config file which will be used each time your new CoreOS installation boots from disk. On another host, reachable via the network, I created the following file named cloud-config.yml:


#cloud-config

hostname: myhostname

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: static.network
      content: |
        [Match]
        Name=<network port>

        [Network]
        Address=<ip address>/<prefix>
        Gateway=<default gateway IP>
        DNS=8.8.8.8

users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa AAAA<rest of ssh key goes here>
    groups:
      - sudo
      - docker
Update the hostname, discovery:, [Network] and ssh-rsa sections to suit yourself.

IMPORTANT: be sure to run your config file through a YAML parser to check for any silly errors. For example, I accidentally left off a - in front of one of the keys which caused the entire config to fail to load!


  1. copy the config file to the CoreOS host e.g. wget http://externalhost/cloud-config.yml
  2. now install CoreOS to the local disk with the following command:
coreos-install -d /dev/vda -c cloud-config.yml

Replace /dev/vda with your device name and cloud-config.yml with your config file name. The install only takes about 30 seconds. Once finished, unmount the ISO media and reboot your machine.

Once booted you’ll arrive at a login prompt. If your config was loaded successfully, you should see the IP address and hostname (you specified in the config) listed just above the login prompt. You should also be able to SSH in (using the SSH key supplied in the config) e.g. ssh core@x.x.x.x


Firewall

By default, CoreOS boots up with a completely open firewall policy. In most cases this is fine as your host’s management interface would be isolated from the wider network. In my case, using a public VPS, I needed to configure some basic iptables rules.

This was done by adding the following additional unit to cloud-config.yml:

    - name: iptables.service
      command: start
      content: |
        [Unit]
        Description=Packet Filtering Framework

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecReload=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecStop=/usr/sbin/iptables --flush ; /usr/sbin/ip6tables --flush

I then created files /etc/iptables.rules and /etc/ip6tables.rules containing appropriate rulesets. These are applied every time the host boots.

(Thanks to this Github gist for the idea)


Troubleshooting

If, for some reason, your config doesn’t load:

  1. reboot using the ISO media
  2. mount the ninth partition on the disk e.g. sudo mount /dev/vda9 /mnt. (to view all partitions on the disk you can use sudo parted /dev/vda print)
  3. use journalctl to view the boot messages, looking for any errors associated with the config file created earlier e.g. journalctl -D /mnt/var/log/journal | grep cloud
  4. Edit the file /mnt/var/lib/coreos-install/user_data and make any modifications required
  5. Unmount ISO media and reboot