04 Jul 2022, 00:18

NetworkManager and DHCPv6

tl;dr NetworkManager method=dhcp is deprecated and should not be used.

I recently had an issue with a host on my network not getting an IPv6 default route when using DHCP. The host is running Fedora 36, which uses NetworkManager to manage network connections. I wanted the host to have a static IP, but configured via a DHCP static lease rather than manually. I ended up configuring the network connection with option method=dhcp after finding a post online with an example config. But the result was no IPv6 network connectivity for this host due to a missing default route.

After quite a bit of messing about trying to work out why, I stumbled on this post which explains the problem. At some point method=dhcp was deprecated, and in fact is no longer mentioned in the reference doco.

The correct way to achieve what I wanted is to set method=auto on the client. Then on the router, ensure that the IPv6 config on the interface connected to the host (LAN) is set so the M flag is sent in router advertisements. This tells the client that a DHCP server is managing IP addresses and to initiate a DHCP solicitation. The end result is that the client will have two IPv6 addresses - one via SLAAC, and another via DHCP.
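For reference, the client-side change amounts to nothing more than the auto method in the connection profile. A minimal keyfile sketch (the connection name and interface here are examples, not from my actual config):

```
# /etc/NetworkManager/system-connections/lan.nmconnection
[connection]
id=lan
type=ethernet
interface-name=enp3s0

[ipv6]
method=auto
```

The same can be set on an existing connection with nmcli con mod <connection> ipv6.method auto.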

20 Jan 2022, 09:23

TrueNAS and NFS

I recently installed TrueNAS and attempted to set up NFS. Here are some notes that may be of use to others or my future self 😄

My use case is to mount the NFS share on demand, backup my home directory, then immediately unmount.


Under Services -> NFS:

  1. Enable NFSv4. Because, why not? NFSv4 is better.
  2. Enable ‘NFSv3 ownership model for NFSv4’. This permits UID/GID to be preserved without needing ID mapper running on client side.

Under Sharing -> Unix Shares:

  1. Create a dataset for the NFS share. Be sure to set a quota to something sensible.
  2. Create the NFS share with defaults


  • Create a TrueNAS user with the same UID as the user that will be using the NFS share
  • Drop into a command shell on TrueNAS and navigate to the new NFS share root. Create a directory under the root and chown it to the user that will use it e.g.
cd /mnt/Seagate4TB/nfsshare
mkdir home-backup
chown ian home-backup

NFS client

  • add an entry to /etc/fstab e.g.
truenas.lan:/mnt/Seagate4TB/nfsshare /mnt	nfs4	noauto,defaults

where /mnt/Seagate4TB/nfsshare/ is the NFS root.

  • add entry to sudoers to enable non-root user to mount the NFS mountpoint:
ian	ALL=(ALL)	NOPASSWD: /usr/bin/mount /mnt, /usr/bin/umount /mnt

Now user ‘ian’ should be able to write into /mnt/home-backup.
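The mount → backup → unmount cycle can then be scripted. The sketch below is a dry run that only prints the commands it would run (the rsync options and paths are my assumptions, not anything TrueNAS mandates); drop the echo lines' quoting through to real execution when you're happy with it.

```shell
#!/bin/sh
# Dry-run sketch of the on-demand backup flow: mount, rsync, unmount.
# Prints each command instead of running it; execute them for real
# by replacing the echo statements.
MOUNTPOINT=/mnt
BACKUP_DIR=$MOUNTPOINT/home-backup

cmd_mount="sudo mount $MOUNTPOINT"               # permitted by the sudoers entry above
cmd_sync="rsync -a --delete $HOME/ $BACKUP_DIR/" # mirror the home directory into the share
cmd_umount="sudo umount $MOUNTPOINT"             # unmount immediately afterwards

echo "$cmd_mount"
echo "$cmd_sync"
echo "$cmd_umount"
```

The noauto fstab option means the share is only ever mounted while the backup runs.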

07 Oct 2019, 09:23

Connecting to Internode with OpenWRT

Australian ISP Internode does not officially support OpenWRT.

I recently got connected via HFC (hybrid fibre coaxial) with NBN & Internode. To get it working I needed to tag VLAN 2 on the WAN interface. Using the LuCI web interface, navigate to Networks -> Switch and set the dropdown next to VLAN 2 under WAN to Tagged. Then set up the WAN interface as a standard PPPoE connection with <username>@internode.on.net for the username and my account password.

Update 12th Feb 2021: if you’re not using the LuCI web interface or the option is missing, you can achieve the same by editing /etc/config/network and setting the ‘wan’ interface ifname option to e.g. eth1.2, where eth1 corresponds to the WAN physical interface and .2 signifies the VLAN. Then reload the config with service network reload.
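Put together, the relevant section of /etc/config/network ends up looking something like this (interface name and credentials are placeholders):

```
config interface 'wan'
	option ifname 'eth1.2'
	option proto 'pppoe'
	option username '<username>@internode.on.net'
	option password '<password>'
```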

02 Jul 2017, 14:17

Be your own tunnel broker: 6in4

This article describes how to configure a 6in4 service using your own VPS host. Tunnelling is done using IP protocol 41, which encapsulates IPv6 inside IPv4.

Unfortunately my broadband provider does not offer IPv6. To work around that I tunnel to my VPS host over IPv4 and use IPv6 that way. I could use a tunnel broker such as Hurricane Electric, however their closest endpoint is far enough away that the additional latency makes it a pretty unattractive option. My VPS provider is close enough that latency over the tunnel is actually not much different to native IPv4!

Tunnel Configuration

For this example, the VPS host public IP is x.x.x.x and the home broadband public IP is y.y.y.y

VPS host

My VPS provider has allocated a /56 prefix to my host - aaaa:bbbb:cccc:5b00::/56. From that I’m going to sub-allocate aaaa:bbbb:cccc:5b10::/60 to the tunnel, as follows:

  • aaaa:bbbb:cccc:5b10::a/127, aaaa:bbbb:cccc:5b10::b/127 - each end of the tunnel
  • aaaa:bbbb:cccc:5b11::/64 - subnet for use on the home network
# Create sit interface 'sittun'
ip tunnel add sittun mode sit local x.x.x.x remote y.y.y.y ttl 64 dev eth0
# Allocate an IPv6 address to the local end (remote end will be ::b)
ip addr add dev sittun aaaa:bbbb:cccc:5b10::a/127
# Route a /64 prefix down the tunnel for use on the home network
ip -6 route add aaaa:bbbb:cccc:5b11::/64 via aaaa:bbbb:cccc:5b10::b
# Bring the interface up
ip link set dev sittun up

Home router

ip tunnel add sittun mode sit local y.y.y.y remote x.x.x.x ttl 64 dev enp1s0
ip addr add dev sittun aaaa:bbbb:cccc:5b10::b/127
# VPS host IP is the default route for all IPv6 traffic
ip -6 route add default via aaaa:bbbb:cccc:5b10::a
ip link set dev sittun up

If the router does not have a public IP (i.e. it is behind a NAT device), then it is necessary to specify the router’s private IP for the local end rather than the public IP e.g. ip tunnel add sittun mode sit local <private IP> remote x.x.x.x ttl 64 dev enp1s0. The NAT device will then need to forward protocol 41 traffic to <private IP>.

Firewalling / Routing

VPS Host

The VPS host needs to have routing enabled for IPv6:

sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.eth0.accept_ra=2

The second command is required if eth0 has a SLAAC assigned IP (most likely).

The VPS host needs to allow protocol 41 packets from the client IP. The following iptables command will do:

iptables -I INPUT -p 41 -s y.y.y.y -j ACCEPT

The following rules are required in the ip6tables FORWARD chain to permit connectivity between the home network and the Internet:

ip6tables -I FORWARD -i sittun -j ACCEPT
ip6tables -I FORWARD -o sittun -j ACCEPT

Home router

We need v6 ip forwarding:

sysctl -w net.ipv6.conf.all.forwarding=1

Allow protocol 41 from our VPS host:

iptables -I INPUT -p 41 -s x.x.x.x -j ACCEPT

The home network needs some basic firewall rules to protect it from unrestricted access from the IPv6 Internet. The following is a suggested minimal ruleset:

# Allow return traffic from the internet
ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# ICMPv6 is required for IPv6 to operate properly
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow all from your LAN interface
ip6tables -A FORWARD -i <lan interface> -j ACCEPT
# Reject all else
ip6tables -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited

11 May 2015, 19:18

OpenWRT and IPv6

I just configured my home network to use IPv6. My router runs OpenWRT ‘Barrier Breaker’ which supports IPv6, so it was just a matter of switching on and configuring the functionality.

Unfortunately, my ISP does not provide native IPv6 so I’m using an IPv6 tunnel courtesy of Hurricane Electric’s Tunnelbroker service.

Configuring my router

The 6in4 tunnel

Hurricane Electric provide a handy auto-generated config snippet specifically for OpenWRT, so it was a simple matter of:

  • installing the 6in4 package - opkg install 6in4
  • updating my /etc/config/network file with the supplied config
  • restarting the network with /etc/init.d/network restart

For reference, my network config looks something like this:

config interface 'wan6'
	option proto 6in4
	option peeraddr  ''
	option ip6addr   '2001:470:aaaa:467::2/64'
	option ip6prefix '2001:470:bbbb:467::/64'
	option tunnelid  '12341234'
	option username  'aaaabbbb'
	option updatekey 'xxxxxxxxxxxx'
	option dns '2001:470:20::2'

LAN interface

The next important step is to decide how you want IP addressing to work on your LAN. IPv6 address assignment can be done in 3 ways:

RA Only

In this mode clients get all their address info using NDP (neighbour discovery protocol). Thanks to RFC 6106, RAs can also contain DNS resolver information so, if that’s all you need, a DHCP server may not be required.

RA with DHCPv6 (default mode for OpenWRT)

In this mode clients get their primary address info via the RA, but are told to try DHCP for additional config.

NOTE: If you use this mode, then you need to ensure you have a working DHCP server as well. Clients will attempt to solicit a DHCP address, and if the server is not running or not configured correctly then the client won’t configure properly. It seems obvious now, but this did cause me some confusion at first when my client was failing to configure due to my DHCP server being disabled.

DHCPv6 only

In this mode clients are told to get all their address config from the DHCP server.

OpenWRT ‘Barrier Breaker’ uses the odhcpd process to manage both RA (router advertisements) and DHCPv6. It takes its config from /etc/config/dhcp. By default, my ’lan’ config looked like this:

config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'server'
	option ra 'server'
	option ra_management '1'

The address assignment mode is specified by the ra_management setting:

  • 0: RA only
  • 1: RA with DHCP
  • 2: DHCP only

I have no need for a DHCPv6 server on my LAN so I set option ra_management '0' and disabled the DHCPv6 server with option dhcpv6 'disabled'.
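With those two changes, my ’lan’ section ends up as:

```
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'disabled'
	option ra 'server'
	option ra_management '0'
```

Restart odhcpd (/etc/init.d/odhcpd restart) to apply.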

Configuring my client

I run Fedora 21 Linux on my desktop which supports IPv6 out of the box. NetworkManager can be configured in ‘Automatic’ or ‘Automatic, DHCP only’ modes. I just had to ensure that it was set to ‘Automatic’ and everything just worked.

Something to keep in mind with Linux clients is that, by default, router advertisements will be ignored on any interface that is forwarding traffic (routing). If you’re running Docker, then this is relevant to you! See this post for more information.
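If you do need a forwarding Linux client to keep accepting RAs, accept_ra can be set to 2 (the same setting used on the VPS host in an earlier post). A persistent sketch, with the interface name as an example:

```
# /etc/sysctl.d/90-ipv6-ra.conf
# 2 = accept router advertisements even when forwarding is enabled
net.ipv6.conf.eth0.accept_ra = 2
```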

11 Aug 2014, 05:28

AWS: Custom CentOS Image

I recently had a need to deploy a t2.micro instance on EC2 running CentOS. Unfortunately, there are no official CentOS AMIs available that will run on the newer HVM instance types.

The AWS marketplace has several 3rd party CentOS AMIs that support HVM. I used one of these as the basis for the new install.

CentOS has the ability to boot into a VNC server from which a network install can be done. Using this facility, I was able to create the custom install as follows:

  1. Launch an HVM CentOS instance (using an AMI from the marketplace)
  2. Insert into grub a new entry which will boot the VNC server
  3. Install the new grub and reboot
  4. Connect via VNC to TCP port 5901 and proceed with a normal CentOS install
  5. ssh to the new install and comment out the MAC address line (HWADDR=) in /etc/sysconfig/network-scripts/ifcfg-eth0
  6. Create an image from the instance

The grub entry I used is as follows:

title CentOS Install (PXE)
root (hd0,0)
kernel /boot/vmlinuz.cent.pxe vnc vncpassword=xxxxxxxxxxxx headless ip=dhcp ksdevice=eth0 method=http://mirror.centos.org/centos-6/6.5/os/x86_64/ lang=en_US keymap=us xen_blkfront.sda_is_xvda=1
initrd /boot/initrd.img.cent.pxe

This needs to go before your existing boot entry. Replace the vncpassword value with your own. Note the xen_blkfront.sda_is_xvda=1 - this is required so the CentOS installer can map the correct device name for your block device.

To apply the new config, run the following commands:

# grub
grub> device (hd0) /dev/xvda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Thanks to this post on Sentris.com for the PXE boot idea.

22 Jun 2014, 09:29

CoreOS install to a VPS

I’ve just spun up my first install of CoreOS. I found the process a little confusing at times as the doco isn’t terribly clear in places. CoreOS is a work in progress, so doco will improve I’m sure. In the meantime, hopefully this post will be of some help to others.

The host machine I used was a standard VPS from my hosting provider running on top of KVM. My hosting provider provides a console facility using NoVNC and the ability to attach bootable ISO media.

ISO Boot

Using the supplied ISO from CoreOS, boot the machine. You will end up at a shell prompt, logged in as user core. At this point, you’re simply running the LiveCD and nothing has been installed to disk yet (something the doco does not make clear!)

In my case the network had not yet been configured, so I needed to do that manually as follows:

sudo ifconfig <network port> <ip address> netmask <netmask>
sudo route add default gw <default gateway IP>

Add your nameserver IP to /etc/resolv.conf. I used Google’s e.g. nameserver 8.8.8.8

Config file

Once the network is configured, the next thing to do is grab a config file which will be used each time your new CoreOS installation boots from disk. On another host, reachable via the network, I created the following file named cloud-config.yml:


#cloud-config

hostname: myhostname

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: static.network
      content: |
        [Network]
        Address=<static IP>/<prefix length>
        Gateway=<gateway IP>
        DNS=<DNS IP>

users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa AAAA<rest of ssh key goes here>
    groups:
      - sudo
      - docker

Update the hostname, discovery:, [Network] and ssh-rsa sections to suit yourself.

IMPORTANT: be sure to run your config file through a YAML parser to check for any silly errors. For example, I accidentally left off a - in front of one of the keys which caused the entire config to fail to load!


  1. Copy the config file to the CoreOS host e.g. wget http://externalhost/cloud-config.yml
  2. Now install CoreOS to the local disk with the following command:
coreos-install -d /dev/vda -c cloud-config.yml

Replace /dev/vda with your device name and cloud-config.yml with your config file name. The install only takes about 30 seconds. Once finished, unmount the ISO media and reboot your machine.

Once booted you’ll arrive at a login prompt. If your config was loaded successfully, you should see the IP address and hostname (you specified in the config) listed just above the login prompt. You should also be able to SSH in (using the SSH key supplied in the config) e.g. ssh core@x.x.x.x


Firewall

By default, CoreOS boots up with a completely open firewall policy. In most cases this is fine as your host’s management interface would be isolated from the wider network. In my case, using a public VPS, I needed to configure some basic iptables rules.

This was done by adding the following additional unit to cloud-config.yml:

    - name: iptables.service
      command: start
      content: |
        [Unit]
        Description=Packet Filtering Framework
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecReload=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
        ExecStop=/usr/sbin/iptables --flush ; /usr/sbin/ip6tables --flush

I then created files /etc/iptables.rules and /etc/ip6tables.rules containing appropriate rulesets. These are applied every time the host boots.
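For reference, a minimal /etc/iptables.rules in iptables-restore format might look like the sketch below (SSH on port 22 is the only inbound service I assume here; adjust to taste):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Loopback and return traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
# Inbound SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```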

(Thanks to this Github gist for the idea)


Troubleshooting

If, for some reason, your config doesn’t load:

  1. reboot using the ISO media
  2. mount the ninth partition on the disk e.g. sudo mount /dev/vda9 /mnt. (to view all partitions on the disk you can use sudo parted /dev/vda print)
  3. use journalctl to view the boot messages, looking for any errors associated with the config file created earlier e.g. journalctl -D /mnt/var/log/journal | grep cloud
  4. Edit the file /mnt/var/lib/coreos-install/user_data and make any modifications required
  5. Unmount ISO media and reboot