I recently installed TrueNAS and attempted to set up NFS. Here are some tips for others looking to do the same.
- Enable NFSv4. Because why not? NFSv4 is better.
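As an illustration, an NFSv4 mount from a Linux client can be made persistent with an /etc/fstab entry along these lines (the server name and paths are placeholders, not from my actual setup):

```
# /etc/fstab - mount a TrueNAS export over NFSv4 (names are examples)
truenas.local:/mnt/tank/share  /mnt/share  nfs4  defaults,_netdev  0  0
```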
Australian ISP Internode does not officially support OpenWRT.
I recently connected HFC (hybrid fibre coaxial) with NBN & Internode. To get it working I needed to tag VLAN 2 on the WAN interface. Using the LuCI web interface, navigate to Network -> Switch and set the dropdown to 'Tagged' next to VLAN 2 under the WAN port. Then set up the WAN interface like a standard PPPoE connection, with <username>@internode.on.net for the username and my account password.
Update 12th Feb 2021: if you're not using the LuCI web interface or the option is missing, you can achieve the same by editing /etc/config/network and setting the 'wan' interface ifname option to be e.g. eth0.2, where eth0 corresponds with the WAN physical interface and .2 signifies the VLAN. Then reload the config with service network reload.
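For reference, the resulting 'wan' section of /etc/config/network would look something like this (eth0 is an assumed physical interface name; substitute your own):

```
config interface 'wan'
	option ifname 'eth0.2'
	option proto 'pppoe'
	option username '<username>@internode.on.net'
	option password '<your account password>'
```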
This article describes how to configure a 6in4 service using your own VPS host. Tunnelling is done using protocol 41, which encapsulates IPv6 inside IPv4.
Unfortunately my broadband provider does not offer IPv6. To work around that I tunnel to my VPS host over IPv4 and use IPv6 that way. I could use a tunnel broker such as Hurricane Electric, however their closest endpoint is far enough away that the additional latency makes it a pretty unattractive option. My VPS provider is close enough that latency over the tunnel is actually not much different to native IPv4!
For this example, the VPS host public IP is x.x.x.x and the home broadband public IP is y.y.y.y.
My VPS provider has allocated a /56 prefix to my host - aaaa:bbbb:cccc:5b00::/56. From that I'm going to sub-allocate aaaa:bbbb:cccc:5b10::/60 to the tunnel, as follows:
- aaaa:bbbb:cccc:5b10::b/127 - each end of the tunnel (::a on the VPS, ::b on the home router)
- aaaa:bbbb:cccc:5b11::/64 - subnet for use on the home network
On the VPS host:

```shell
# Create sit interface 'sittun'
ip tunnel add sittun mode sit local x.x.x.x remote y.y.y.y ttl 64 dev eth0
# Allocate an IPv6 address to the local end (remote end will be ::b)
ip addr add dev sittun aaaa:bbbb:cccc:5b10::a/127
# Route a /64 prefix down the tunnel for use on the home network
ip -6 route add aaaa:bbbb:cccc:5b11::/64 via aaaa:bbbb:cccc:5b10::b
# Bring the interface up
ip link set dev sittun up
```
On the home router:

```shell
# Create the matching tunnel endpoint
ip tunnel add sittun mode sit local y.y.y.y remote x.x.x.x ttl 64 dev enp1s0
ip addr add dev sittun aaaa:bbbb:cccc:5b10::b/127
# VPS host IP is the default route for all IPv6 traffic
ip -6 route add default via aaaa:bbbb:cccc:5b10::a
ip link set dev sittun up
```
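These ip commands don't persist across reboots. On a Debian-style home router they could be captured in /etc/network/interfaces with something like this sketch (assumes the ifupdown tooling; adjust names to your setup):

```
# 6in4 tunnel to the VPS host (a sketch, not from the original setup)
auto sittun
iface sittun inet6 v4tunnel
    address aaaa:bbbb:cccc:5b10::b
    netmask 127
    endpoint x.x.x.x
    local y.y.y.y
    ttl 64
    gateway aaaa:bbbb:cccc:5b10::a
```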
If the router does not have a public IP (behind a NAT device), then it is necessary to specify the private IP for the local end rather than the public IP, e.g.

```shell
ip tunnel add sittun mode sit local 192.168.0.8 remote x.x.x.x ttl 64 dev enp1s0
```

The NAT device will then need to forward 6in4 (protocol 41) traffic to 192.168.0.8.
The VPS host needs to have routing enabled for IPv6:
```shell
sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.eth0.accept_ra=2
```
The second command is required if eth0 has a SLAAC assigned IP (most likely).
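To make these settings survive a reboot, they can also be dropped into a sysctl config file, e.g. (the file name is just a convention):

```
# /etc/sysctl.d/60-6in4.conf
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.accept_ra = 2
```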
The VPS host needs to allow protocol 41 packets from the client IP. The following iptables command will do:
iptables -I INPUT -p 41 -s y.y.y.y -j ACCEPT
The following rules are required in the ip6tables FORWARD chain to permit connectivity between the home network and the Internet:

```shell
ip6tables -I FORWARD -i sittun -j ACCEPT
ip6tables -I FORWARD -o sittun -j ACCEPT
```
On the home router side, we need IPv6 forwarding:
sysctl -w net.ipv6.conf.all.forwarding=1
Allow protocol 41 from our VPS host:
iptables -I INPUT -p 41 -s x.x.x.x -j ACCEPT
The home network needs some basic firewall rules to protect it from unrestricted access from the IPv6 Internet. The following is a suggested minimal ruleset:
```shell
# Allow return traffic from the internet
ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# ICMPv6 is required for IPv6 to operate properly
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow all from your LAN interface
ip6tables -A FORWARD -i <lan interface> -j ACCEPT
# Reject all else
ip6tables -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
```
I just configured my home network to use IPv6. My router runs OpenWRT ‘Barrier Breaker’ which supports IPv6, so it was just a matter of switching on and configuring the functionality.
Unfortunately, my ISP does not provide native IPv6 so I’m using an IPv6 tunnel courtesy of Hurricane Electric Tunnelbroker service.
Hurricane Electric provide a handy auto-generated config snippet specifically for OpenWRT, so it was a simple matter of:
- Installing the 6in4 package: opkg install 6in4
- Updating the /etc/config/network file with the supplied config
For reference, my network config looks something like this:
```
config interface 'wan6'
	option proto 6in4
	option peeraddr '220.127.116.11'
	option ip6addr '2001:470:aaaa:467::2/64'
	option ip6prefix '2001:470:bbbb:467::/64'
	option tunnelid '12341234'
	option username 'aaaabbbb'
	option updatekey 'xxxxxxxxxxxx'
	option dns '2001:470:20::2'
```
The next important step is to decide how you want IP addressing to work on your LAN. IPv6 address assignment can be done in 3 ways:

1. Router advertisements (SLAAC) only. In this mode clients get all their address info using NDP (neighbour discovery protocol). Thanks to RFC 6106, RAs can also contain DNS resolver information so, if that's all you need, then a DHCP server may not be required.

2. SLAAC plus DHCPv6. In this mode clients get their primary address info via the RA, but are told to try DHCP for additional config.

NOTE: If you use this mode, then you need to ensure you have a working DHCP server as well. Clients will attempt to solicit a DHCP address, and if the server is not running or not configured correctly then the client won't configure properly. It seems obvious now, but this did cause me some confusion at first when my client was failing to configure due to my DHCP server being disabled.

3. DHCPv6 only (stateful). In this mode clients are told to get all their address config from the DHCP server.
OpenWRT 'Barrier Breaker' uses the odhcpd process to manage both RA (router advertisements) and DHCPv6. It takes its config from /etc/config/dhcp. By default, my 'lan' config looked like this:
```
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'server'
	option ra 'server'
	option ra_management '1'
```
The address assignment mode is specified by the ra_management setting. I have no need for a DHCPv6 server on my LAN, so I set option ra_management '0' and disabled the DHCPv6 server with option dhcpv6 'disabled'.
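So the resulting 'lan' section would look something like this (other options unchanged from the default):

```
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'
	option dhcpv6 'disabled'
	option ra 'server'
	option ra_management '0'
```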
I run Fedora 21 Linux on my desktop which supports IPv6 out of the box. NetworkManager can be configured in ‘Automatic’ or ‘Automatic, DHCP only’ modes. I just had to ensure that it was set to ‘Automatic’ and everything just worked.
Something to keep in mind with Linux clients is that, by default, router advertisements will be ignored on any interface that is forwarding traffic (routing). If you’re running Docker, then this is relevant to you! See this post for more information.
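If that applies to you, the fix is to set accept_ra to 2, which honours RAs even while forwarding is enabled. A sketch (eth0 is an assumed interface name):

```
# /etc/sysctl.d/70-accept-ra.conf - honour RAs even when forwarding (eth0 assumed)
net.ipv6.conf.eth0.accept_ra = 2
```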
I recently had a need to deploy a t2.micro instance on EC2 running CentOS. Unfortunately, there are no official CentOS AMIs available that will run on the newer HVM instance types.
The AWS marketplace has several 3rd-party CentOS AMIs that support HVM. I used one of these as a basis for the new install.
CentOS has the ability to boot up into a VNC server from which a network install can be done. Using this facility, I was able to create the custom install as follows:
- Connect a VNC client to the instance on port 5901 and proceed with a normal CentOS install
The grub entry I used is as follows:
```
title Centos Install (PXE)
    root (hd0,0)
    kernel /boot/vmlinuz.cent.pxe vnc vncpassword=xxxxxxxxxxxx headless ip=dhcp ksdevice=eth0 method=http://mirror.centos.org/centos-6/6.5/os/x86_64/ lang=en_US keymap=us xen_blkfront.sda_is_xvda=1
    initrd /boot/initrd.img.cent.pxe
```
This needs to go before your existing boot entry. Replace the vncpassword value with your own. Note the xen_blkfront.sda_is_xvda=1 option - this is required so the CentOS installer can map the correct device name for your block device.
To apply the new config, run the following commands:
```
# grub
grub> device (hd0) /dev/xvda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```
Thanks to this post on Sentris.com for the PXE boot idea.
I’ve just spun up my first install of CoreOS. I found the process a little confusing at times as the doco isn’t terribly clear in places. CoreOS is a work in progress, so doco will improve I’m sure. In the meantime, hopefully this post will be of some help to others.
Using the supplied ISO from CoreOS, boot the machine. You will end up at a shell prompt, logged in as user
core. At this point, you’re simply running the LiveCD and nothing has been installed to disk yet (something the doco does not make clear!)
In my case the network had not yet been configured, so I needed to do that manually as follows:
```shell
sudo ifconfig <network port> <ip address> netmask <netmask>
sudo route add default gw <default gateway IP>
```
Then add to /etc/resolv.conf your nameserver IP. I used Google's, e.g. nameserver 8.8.8.8.
Once network is configured, the next thing to do is grab a config file which will be used each time your new CoreOS installation boots from disk. On another host, reachable via the network, I created the following file, named cloud-config.yml:
```yaml
#cloud-config
hostname: myhostname
coreos:
  etcd:
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
    - name: static.network
      content: |
        [Match]
        Name=ens3

        [Network]
        Address=x.x.x.109/24
        Gateway=x.x.x.1
        DNS=x.x.x.10
        DNS=x.x.x.11
        DNS=18.104.22.168
users:
  - name: core
    ssh-authorized-keys:
      - ssh-rsa AAAA<rest of ssh key goes here>
    groups:
      - sudo
      - docker
```
Adjust the addressing and ssh-rsa sections to suit yourself.
IMPORTANT: be sure to run your config file through a YAML parser to check for any silly errors. For example, I accidentally left off a - in front of one of the keys, which caused the entire config to fail to load!
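For example, assuming PyYAML is installed on the host where you created the file, a one-liner like this will catch syntax errors:

```shell
# Exits with a parse error if the YAML is malformed
python3 -c 'import yaml, sys; yaml.safe_load(open(sys.argv[1]))' cloud-config.yml && echo "YAML OK"
```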
To install to disk, run:

```shell
coreos-install -d /dev/vda -c cloud-config.yml
```

Replace /dev/vda with your device name and cloud-config.yml with your config file name. The install only takes about 30 seconds. Once finished, unmount the ISO media and reboot your machine.
Once booted you'll arrive at a login prompt. If your config was loaded successfully, you should see the IP address and hostname (you specified in the config) listed just above the login prompt. You should also be able to SSH in (using the SSH key supplied in the config), e.g. ssh core@x.x.x.109.
By default, CoreOS boots up with a completely open firewall policy. In most cases this is fine as your host’s management interface would be isolated from the wider network. In my case, using a public VPS, I needed to configure some basic iptables rules.
This was done by adding the following additional unit to the units: section of the cloud-config:
```yaml
- name: iptables.service
  command: start
  content: |
    [Unit]
    Description=Packet Filtering Framework
    DefaultDependencies=no
    After=systemd-sysctl.service
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
    ExecReload=/usr/sbin/iptables-restore /etc/iptables.rules ; /usr/sbin/ip6tables-restore /etc/ip6tables.rules
    ExecStop=/usr/sbin/iptables --flush;/usr/sbin/ip6tables --flush
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
```
I then created files /etc/iptables.rules and /etc/ip6tables.rules containing appropriate rulesets. These are applied every time the host boots.
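For reference, a minimal /etc/iptables.rules in iptables-restore format might look something like this (a sketch only - which ports you open, such as SSH here, is an assumption about your setup):

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Loopback and established traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# SSH management access
-A INPUT -p tcp --dport 22 -j ACCEPT
# ICMP for diagnostics
-A INPUT -p icmp -j ACCEPT
COMMIT
```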
(Thanks to this Github gist for the idea)
If, for some reason, your config doesn't load:

- Mount the root partition, e.g. sudo mount /dev/vda9 /mnt (to view all partitions on the disk you can use sudo parted /dev/vda print)
- Use journalctl to view the boot messages, looking for any errors associated with the config file created earlier, e.g. journalctl -D /mnt/var/log/journal | grep cloud
- Edit /mnt/var/lib/coreos-install/user_data and make any modifications required