22 Nov 2017, 21:40

Mac Mini + Centos7

I recently had a need to install Linux on a 2014 Mac Mini. Naturally I chose Centos! 😄 I had some trouble finding a straightforward HOWTO to follow, so for the benefit of others wanting to do the same thing, here are the steps I took:

  • download the latest Centos 7 minimal ISO, and transfer that to a USB stick using dd e.g.
# double-check the target device name with lsblk first
sudo dd if=CentOS-7-x86_64-Minimal-1708.iso of=/dev/sdb bs=8M
  • insert the USB stick into the Mac Mini
  • (re)boot the Mac Mini and hold the Alt/Option key down immediately after power on - this will take you to a boot disk selection screen (holding C instead boots straight from the removable media)
  • select the USB stick (labelled ‘efi’) and proceed to boot from it
  • from here is a standard Centos install routine, with the exception of disk partitions:
  • perform a manual disk partition setup.
  • You should see 3 existing logical partitions: the EFI partition, the partition holding the existing MacOS install, and a recovery partition of around 600MB.
  • wipe the MacOS partition and add your Centos mountpoints there as required (keep the MacOS recovery partition in case you want to revert in future)
  • ensure that the existing EFI partition gets mounted to /boot/efi
  • proceed with Centos install

Something odd I noticed was that the onboard NIC did not show a link light when plugged in; the network worked regardless once I ran dhclient to get an IP.
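
To make the network configuration persistent rather than running dhclient by hand, something like the following should work with the NetworkManager tooling included in the Centos 7 minimal install (the interface name enp2s0f0 is a hypothetical example - check ip link for yours):

# create an auto-connecting DHCP connection profile for the onboard NIC
nmcli con add type ethernet ifname enp2s0f0 con-name wired
nmcli con up wired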

02 Jul 2017, 14:17

Be your own tunnel broker: 6in4

This article describes how to configure a 6in4 service using your own VPS host. Tunnelling is done using IP protocol 41, which encapsulates IPv6 inside IPv4.

Unfortunately my broadband provider does not offer IPv6. To work around that I tunnel to my VPS host over IPv4 and use IPv6 that way. I could use a tunnel broker such as Hurricane Electric, however their closest endpoint is far enough away that the additional latency makes it a pretty unattractive option. My VPS provider is close enough that latency over the tunnel is actually not much different to native IPv4!

Tunnel Configuration

For this example, the VPS host public IP is x.x.x.x and the home broadband public IP is y.y.y.y

VPS host

My VPS provider has allocated a /56 prefix to my host - aaaa:bbbb:cccc:5b00::/56. From that I’m going to sub-allocate aaaa:bbbb:cccc:5b10::/60 to the tunnel, as follows:

  • aaaa:bbbb:cccc:5b10::a/127, aaaa:bbbb:cccc:5b10::b/127 - each end of the tunnel
  • aaaa:bbbb:cccc:5b11::/64 - subnet for use on the home network
# Create sit interface 'sittun'
ip tunnel add sittun mode sit local x.x.x.x remote y.y.y.y ttl 64 dev eth0
# Allocate an IPv6 address to the local end (remote end will be ::b)
ip addr add dev sittun aaaa:bbbb:cccc:5b10::a/127
# Route a /64 prefix down the tunnel for use on the home network
ip -6 route add aaaa:bbbb:cccc:5b11::/64 via aaaa:bbbb:cccc:5b10::b
# Bring the interface up
ip link set dev sittun up

Home router

# Create sit interface 'sittun' (mirror of the VPS end)
ip tunnel add sittun mode sit local y.y.y.y remote x.x.x.x ttl 64 dev enp1s0
ip addr add dev sittun aaaa:bbbb:cccc:5b10::b/127
# VPS host IP is the default route for all IPv6 traffic
ip -6 route add default via aaaa:bbbb:cccc:5b10::a
ip link set dev sittun up

If the router does not have a public IP (i.e. it sits behind a NAT device), specify its private IP as the local end rather than the public IP, e.g. ip tunnel add sittun mode sit local 192.168.0.8 remote x.x.x.x ttl 64 dev enp1s0. The NAT device will then need to forward 6in4 (protocol 41) traffic to 192.168.0.8.
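
One gap worth noting: nothing above advertises the routed /64 to hosts on the home LAN. A minimal radvd sketch to do that (the LAN interface name enp2s0 is a hypothetical example):

# /etc/radvd.conf
interface enp2s0
{
    AdvSendAdvert on;
    prefix aaaa:bbbb:cccc:5b11::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};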

Firewalling / Routing

VPS Host

The VPS host needs to have routing enabled for IPv6:

sysctl -w net.ipv6.conf.all.forwarding=1
sysctl -w net.ipv6.conf.eth0.accept_ra=2

The second command is required if eth0 has a SLAAC-assigned IP (most likely).

The VPS host needs to allow protocol 41 packets from the client IP. The following iptables command will do:

iptables -I INPUT -p 41 -s y.y.y.y -j ACCEPT

The following rules are required in the ip6tables FORWARD chain to permit connectivity between the home network and the Internet:

ip6tables -I FORWARD -i sittun -j ACCEPT
ip6tables -I FORWARD -o sittun -j ACCEPT

Home router

We need v6 ip forwarding:

sysctl -w net.ipv6.conf.all.forwarding=1

Allow protocol 41 from our VPS host:

iptables -I INPUT -p 41 -s x.x.x.x -j ACCEPT

The home network needs some basic firewall rules to protect it from unrestricted access from the IPv6 Internet. The following is a suggested minimal ruleset:

# Allow return traffic from the internet
ip6tables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
# ICMPv6 is required for IPv6 to operate properly
ip6tables -A FORWARD -p ipv6-icmp -j ACCEPT
# Allow all from your LAN interface
ip6tables -A FORWARD -i <lan interface> -j ACCEPT
# Reject all else
ip6tables -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
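
These rules won’t survive a reboot by themselves. On a Centos-style router with the iptables-services package installed (an assumption - adjust for your distro), they can be persisted with:

ip6tables-save > /etc/sysconfig/ip6tables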

25 Jun 2017, 13:38

HAProxy: rate limits + TLS SNI

At work we have been using AWS elastic load balancers for some time now and found them simple to use and reliable. Unfortunately, the tradeoff for that simplicity is a lack of features & control. The main issue we’re facing is the need to implement basic rate-limiting controls on our web frontend to reduce the impact of abuse. I’m actually a little surprised that AWS do not offer some basic rate-limit functionality in ELB (maybe it’s coming?). The other annoyance is having to provision a separate ELB instance for each of our SSL certificates due to lack of SNI support.

So I’m investigating the possibility of replacing our multiple ELB instances with a pair of HAProxy instances running on EC2. This post is a place to dump notes & thoughts on the process.

Goals

What I’m aiming for will have the following properties:

  • high availability - both in terms of frontend (client facing) and backend (application server facing)
  • frontend: auto-failover between public IP addresses assigned to HAProxy instances
  • backend: auto-failover between application servers being proxied to
  • rate limiting capability - layer 4 minimum, ideally up to layer 7
  • TLS SNI - allow multiple SSL certificates to be served from the same IP address

Frontend high-availability

The plan is to use AWS route53 DNS health checks to provide a DNS based failover mechanism. This can be done one of two ways:

  • active/standby configuration: route53 will return the primary IP (if healthy), otherwise will return the secondary IP (see the sketch after this list)
  • multi-value configuration: route53 will return whichever IPs are considered healthy
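
As a rough sketch of the active/standby flavour, the primary record might be created with the AWS CLI like so (the zone ID, health check ID, hostname and IP are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "SetIdentifier": "primary",
      "Failover": "PRIMARY",
      "TTL": 60,
      "ResourceRecords": [{"Value": "x.x.x.x"}],
      "HealthCheckId": "aaaa-bbbb-cccc"
    }
  }]
}'

A matching record with "Failover": "SECONDARY" (no health check required) then points at the standby IP.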

Ratelimiting

If all we needed was basic rate limiting based on client IP address, then the simplest solution would be a firewall out in front running iptables with its built-in rate-limiting functionality. That is not suitable here because we need a) more intelligent rate-limiting capability and b) the ability to share rate-limit state between multiple frontend peers. HAProxy provides a solution to both of these needs.

On a), HAProxy allows a rate limit to be applied to almost any aspect of a TCP or HTTP transaction. On b), sharing of rate-limit counters between HAProxy peers was added in HAProxy 1.6 with the caveat that it ‘must be used for safe reload and server failover only’. For a pair of HAProxy nodes in a low-traffic scenario, I’m betting this will be ‘good enough’ for my HA needs.

Config

The following are the relevant parts of haproxy.cfg. This isn’t supposed to be any kind of ‘production’ config - it was pieced together from various HAProxy blog posts and docs, and is used here for testing purposes only:

peers hapeers
    peer haproxy1 192.168.1.1:1024
    peer haproxy2 192.168.2.1:1024

frontend https
    # *.pem files read from directory '/etc/haproxy/ssl'. 
    # Certificate will be matched against SNI, otherwise first certificate will be used
    bind *:443 ssl crt /etc/haproxy/ssl/
    default_backend bk_one

    tcp-request inspect-delay 5s

    stick-table type ip size 200k expire 30s peers hapeers store gpc0

    # backends increment the frontend gpc0 counter (sc0_inc_gpc0) on abuse, which we're checking here
    acl source_is_abuser src_get_gpc0 gt 0

    # don't track abuser while it's getting redirected to rate-limit
    tcp-request connection track-sc0 src if !source_is_abuser
    tcp-request content accept if { req_ssl_hello_type 1 }
   
    # tell backend client is using https
    http-request set-header X-Forwarded-Proto https if { ssl_fc }

    # redirect abuser to rate-limit backend until their entry expires (30s above)
    use_backend rate-limit if source_is_abuser

    use_backend bk_one if { ssl_fc_sni -i demo1.example.com }
    use_backend bk_two if { ssl_fc_sni -i demo2.example.com }


# mostly the same as 'https' frontend, minus SSL bits
frontend http
    bind *:80
    default_backend             bk_one

    tcp-request inspect-delay 5s

    stick-table type ip size 200k expire 30s peers hapeers store gpc0

    # backends increment the frontend gpc0 counter (sc0_inc_gpc0) on abuse, which we're checking here
    acl source_is_abuser src_get_gpc0 gt 0

    # don't track abuser while it's getting redirected to rate-limit
    tcp-request connection track-sc0 src if !source_is_abuser

    # redirect abuser to rate-limit backend until their entry expires (30s above)
    use_backend rate-limit if source_is_abuser

    use_backend bk_one if { hdr(Host) -i demo1.example.com }
    use_backend bk_two if { hdr(Host) -i demo2.example.com }

backend bk_one
    balance     roundrobin
    server  app1 web.a:80 check
    server  app2 web.b:80 check

    stick-table type ip   size 200k   expire 5m  peers hapeers  store conn_rate(30s),bytes_out_rate(60s)

    tcp-request content  track-sc1 src
    # 10 connections is approximately 1 page load! Increase to suit
    acl conn_rate_abuse  sc1_conn_rate gt 10
    acl data_rate_abuse  sc1_bytes_out_rate  gt 20000000

    # abuse is marked in the frontend so that it's shared between all sites
    acl mark_as_abuser   sc0_inc_gpc0 gt 0
    tcp-request content  reject if conn_rate_abuse mark_as_abuser
    tcp-request content  reject if data_rate_abuse mark_as_abuser

backend bk_two

    [... same as bk_one, just using different backend servers ...]

backend rate-limit
    # custom .http file displaying a 'rate limited' message
    errorfile 503 /usr/share/haproxy/503-ratelimit.http
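
For reference, an errorfile must contain a complete raw HTTP response. A minimal sketch of what /usr/share/haproxy/503-ratelimit.http might look like:

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>Rate limited</h1>
<p>Too many requests from your address - please try again shortly.</p>
</body></html>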

12 Jun 2017, 21:40

Geotrust SSL chain + Zimbra

I recently ordered a RapidSSL SHA-256 CA cert for one of my Zimbra servers. I had all sorts of trouble getting OpenSSL to verify the complete SSL chain - intermediates plus CA certs.

The RapidSSL docs provide a link to an SSL bundle here: https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&actp=CROSSLINK&id=SO28836. However, that alone is not sufficient to allow OpenSSL to completely verify the chain. I downloaded the bundle, put it into the file ca_chain.crt, then ran openssl verify but got this error:

# openssl verify -verbose -CAfile ca_chain.crt cert.pem 
cert.pem: C = US, O = GeoTrust Inc., CN = GeoTrust Global CA
error 2 at 2 depth lookup:unable to get issuer certificate

It turns out the bundle supplied by RapidSSL contains only the intermediates, and does not include the top-level cert. I didn’t realise this at first, which caused a bit of confusion. I ended up stepping through each certificate to figure out where the missing link was: I split each cert out into a separate file, ran openssl x509 -in <certfile> -text -noout, and looked at the Issuer: line to see which cert comes next in the chain, then checked that one in turn.
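
A quick way to do that walk, assuming GNU csplit is available (the chain- file prefix is arbitrary):

# split the bundle into one file per certificate
csplit -z -f chain- ca_chain.crt '/-----BEGIN CERTIFICATE-----/' '{*}'
# print subject/issuer pairs; the chain breaks where an issuer has no matching subject
for f in chain-*; do openssl x509 -in "$f" -noout -subject -issuer; done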

After that exercise I realised I was missing the top level certificate - ‘Equifax Secure Certificate Authority’:

# openssl x509 -in ca.crt -text -noout
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1227750 (0x12bbe6)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
[...]

I found that here: https://knowledge.rapidssl.com/support/ssl-certificate-support/index?page=content&actp=CROSSLINK&id=SO28589

Once I appended that cert to my bundle file, verify then returned OK:

# openssl verify -CAfile ca_chain.crt cert.pem
cert.pem: OK

10 May 2017, 21:40

Building LEDE / Openwrt for x86

EDIT: 2018-03-12, LEDE and Openwrt have merged. References to LEDE here can be substituted for Openwrt.

I had a need to run LEDE on x86 hardware. Building a custom LEDE image seemed a bit daunting at first, but turned out to be quite straightforward. The build described here is tailored for the Qotom J1900 mini PC.

Building the custom image

I chose to build the LEDE x86_64 image within a Docker container like so:

$ docker pull centos
$ docker run -it centos /bin/bash
<container>$ cd root/
<container>$ yum install wget make gcc openssl which xz perl zlib-static ncurses-devel perl-Thread-Queue.noarch gcc-c++ git file unzip bzip2
<container>$ wget https://downloads.lede-project.org/releases/17.01.1/targets/x86/64/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ tar -xvJf lede-imagebuilder-17.01.1-x86-64.Linux-x86_64.tar.xz
<container>$ cd lede-imagebuilder-17.01.1-x86-64.Linux-x86_64

Build the image. I want USB keyboard support, and don’t need the e1000 or Realtek drivers:

 make image packages="-kmod-e1000e -kmod-e1000 -kmod-r8169 kmod-usb-hid kmod-usb3 kmod-usb2"

The new images are located under ./bin/targets/x86/64 inside the build environment:

# ls -l bin/targets/x86/64
total 35852
-rw-r--r--. 1 root root  5587318 May  9 20:36 lede-17.01.1-x86-64-combined-ext4.img.gz
-rw-r--r--. 1 root root 19466174 May  9 20:36 lede-17.01.1-x86-64-combined-squashfs.img
-rw-r--r--. 1 root root  2439806 May  9 20:36 lede-17.01.1-x86-64-generic-rootfs.tar.gz
-rw-r--r--. 1 root root     1968 May  9 20:36 lede-17.01.1-x86-64-generic.manifest
-rw-r--r--. 1 root root  2711691 May  9 20:36 lede-17.01.1-x86-64-rootfs-ext4.img.gz
-rw-r--r--. 1 root root  2164670 May  9 20:36 lede-17.01.1-x86-64-rootfs-squashfs.img
-rw-r--r--. 1  106  111  2620880 Apr 17 17:53 lede-17.01.1-x86-64-vmlinuz
-rw-r--r--. 1 root root      731 May  9 20:36 sha256sums

Just need the combined-ext4 image. Copy that out of the Docker container (here to /mnt, where my USB flash drive is mounted):

$ docker cp <container id>:/root/lede-imagebuilder-17.01.1-x86-64.Linux-x86_64/bin/targets/x86/64/lede-17.01.1-x86-64-combined-ext4.img.gz /mnt

Installing the custom image

  • boot the mini PC using any Linux rescue disk. (I used SystemRescueCD)
  • insert a second USB flash disk containing the image created above
  • write the image to the mini PC internal drive:
$ mount /dev/sdc1 /mnt; cd /mnt
$ gunzip lede-17.01.1-x86-64-combined-ext4.img.gz
$ dd if=lede-17.01.1-x86-64-combined-ext4.img of=/dev/sda
  • (optionally) resize the LEDE data partition to fill the entire size of the internal storage (see the sketch after this list)
  • use fdisk/parted to remove second partition (/dev/sda2)
  • re-add second partition with the same starting block as before, but make the end block the last block on the disk
  • save the new partition table
  • run e2fsck -f /dev/sda2 followed by resize2fs /dev/sda2
  • reboot the device
  • access the console via VGA, or telnet to 192.168.1.1 (no root password!)
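
For reference, the resize steps above boil down to something like the following - a sketch assuming the internal drive is /dev/sda; parted’s resizepart can replace the manual fdisk delete/re-add dance:

parted /dev/sda resizepart 2 100%
e2fsck -f /dev/sda2
resize2fs /dev/sda2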

06 May 2017, 21:40

Open source router

I recently went through the exercise of setting up a gateway router for one of my customers. The choices I had to make were two-fold: hardware & software.

Hardware

I wanted to find the sweet spot between affordability, processing power, and reliability. I could pick up an old desktop PC for next to nothing, which would be more than adequate in terms of performance, however I wasn’t confident it would last the distance running 24x7 in a non air-conditioned storage room!

A low-power ARM chip in a consumer router (one that would support OpenWRT) was my next thought, however these tend to be a little underpowered for what I needed, not to mention very limited in terms of RAM and persistent storage.

I ended up getting a ‘mini pc’ with the following properties:

  • fan-less (heat dissipation via heat sink & aluminium chassis)
  • low power consumption quad-core Celeron J1900 x86-64 CPU
  • 2 GB RAM, 16GB SSD flash (expandable)
  • 4x gigabit ethernet ports

AUD$250 including delivery from AliExpress. One thing this box lacks, which others may want, is hardware crypto offload (AES-NI).

Software

This was a harder choice in a lot of ways - there are so many options!! While the hardware I have is capable of running pretty much any Linux or BSD distro, I decided at the outset that I really wanted a purpose-built firewall distro that includes a web GUI. I reviewed the following:

pfSense

https://www.pfsense.org/ - FreeBSD based

Being possibly the best-known open source firewall distro available, I felt obliged to check it out. It is certainly very slick, and years of constant refinement shine through.

At the end of the day, I feel a certain unease about the future direction of pfSense. The open-source community does seem to be taking a back seat as the public face becomes more corporate friendly.

OPNSense

https://opnsense.org/ - FreeBSD based

OPNSense is a fork of pfSense and as such is very similar in many ways. Something that really impressed me about the project is the enthusiasm and effort being put in by the core developers. I submitted a couple of bug reports to their Github repo and both were fixed very quickly. The UI has been completely reworked, and is just as slick and easy to use as pfSense, while possibly lacking some of its bells and whistles.

Definitely one to keep an eye on.

IPFire

http://www.ipfire.org/ - Linux based

I’m afraid I couldn’t spare much time for this distro. The web UI is looking very dated. I’m sure it does the job, but without a nicer UI experience, I may as well just stick to the command line.

OpenWRT

https://openwrt.org/ - Linux based

OpenWRT is designed for low-end, embedded hardware, and what they’ve managed to achieve with such limited hardware resources is astonishing! Sadly, x86 support is lacking - the prebuilt image I used didn’t detect all CPU cores or all of the available RAM!? - so it was crossed off the list pretty quickly.

If you’re after a distro for your wifi/modem/router device, then OpenWRT fits the bill nicely. A word of warning however: the documentation is atrocious! But hey, I’ll take what I can get.

LEDE Project

https://lede-project.org/ - Linux based

LEDE is a fork of OpenWRT. As such, it’s a younger project, but one which seems to have a more vibrant community than its parent. I had originally passed it over, assuming it would be more or less identical to OpenWRT given how recently it forked. Somebody pointed me back to it citing better x86 support, so I thought I’d give it a spin. I’m glad I did, as this is what I’ve ended up using for my install!

UPDATE 2018-01-03: the LEDE and OpenWRT projects have merged under the OpenWRT name. Great news!

Conclusion

I ended up going with LEDE for these reasons:

  • runs Linux. I’m simply more comfortable with Linux on the command line, which gives me more confidence when things go wrong.
  • is an extremely lightweight distro out of the box that offers advanced functionality via an easy-to-use packaging system
  • has a GUI that strikes a good balance between usability, feature set and simplicity
  • supports my x86 hardware (unlike OpenWRT)

Update December 2017

I’ve been using LEDE for 6 months and am overall very happy with it. There are a couple of issues I’ve encountered that are worth mentioning:

  • I found the firewall configuration confusing where it talks about ‘zone forwardings’ vs the iptables ‘forward’ chain. I wrote this Stack Exchange post to clarify (and to remind myself how it works!)
  • upgrading between LEDE releases is far from foolproof. The upgrade process requires you to upgrade the running system in place; upon reboot, you’re left holding your breath wondering if it’s actually going to boot! Not something I’d ever want to attempt remotely. Better approaches I’ve seen let you load the new software version into a secondary partition that you then flag as the next place to boot from (Ubiquiti works this way).

22 Apr 2017, 21:40

Docker and IPTables on a public host

NOTE: This post applies to Docker < 17.06

By default Docker leaves network ports wide open to the world. It is up to you as the sysadmin to lock these down. Ideally you would have a firewall somewhere upstream, between your host and the Internet, where you can lock down access. However, in a lot of cases you have to do the firewalling on the same host that runs Docker. Unfortunately, Docker makes it tricky to create custom iptables rules that take precedence over the allow-all ruleset that Docker introduces. There is a pull request that promises to help in this regard.

Until the fix is available, [EDIT: fixed in 17.06] the way I work around this problem is as follows:

Create a systemd service that runs my custom rules after the Docker service starts/restarts - /etc/systemd/system/docker-firewall.service:

[Unit]
Description=Supplementary Docker Firewall Rules
After=docker.service
Requires=docker.service
PartOf=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/docker-firewall.sh

[Install]
WantedBy=multi-user.target

The file /usr/local/bin/docker-firewall.sh is a shell script which simply inserts IPTables rules at the top of the ‘DOCKER’ chain:

#!/bin/bash
#
# called by docker-firewall.service
#
# Work around for controlling access to docker ports until PR#1675 is merged
# https://github.com/docker/libnetwork/pull/1675
#

IPTABLES=/usr/sbin/iptables
CHAIN="DOCKER"

$IPTABLES -I $CHAIN -i eth0 -j RETURN
$IPTABLES -I $CHAIN -i eth0 -s <trusted IP> -j ACCEPT
$IPTABLES -I $CHAIN -i eth0 -p tcp -m multiport --dport <port list> -j ACCEPT

NOTES:

  • you should modify these rules to suit. Put only those ports that you want open to the world in the <port list>, and any trusted IPs that should have access to all ports in the <trusted IP> field.
  • the rules are specified in reverse order, as they are being inserted at the top of the chain. You could instead specify the insert index (e.g. 1, 2, 3).
  • make sure the shell script is executable (chmod +x).
  • I’ve chosen to RETURN Internet traffic that doesn’t match the first 2 rules, but you may choose to simply DROP that traffic there. Either way, the last rule must take final action on Internet traffic to ensure that subsequent Docker rules don’t allow it in!!

Once those files have been created, you can enable the service with systemctl enable docker-firewall. Now when you restart Docker (or upon reboot), this service will run afterwards and you’ll see your custom rules appear at the top of the DOCKER chain in iptables.
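
To confirm the ordering, something like the following should show your custom rules at the top of the chain after a Docker restart:

iptables -L DOCKER -n -v --line-numbers | head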

19 Feb 2017, 21:40

Git over HTTP with Apache

I had a requirement at work recently for our Git repo to be reachable via HTTP. I looked at GitLab, however I came to the conclusion that it was probably overkill for our situation. I went with the following setup instead, which can be run on the most minimal of VMs, e.g. an AWS EC2 Nano instance.

The instructions were written for Centos7, however they should work without much modification on other distros.

  • Install required packages:
yum install httpd git mod_ssl
  • Ensure that ‘mod_cgid’ + ‘mod_alias’ are loaded in your Apache config (along with ‘mod_env’ for the SetEnv directives below)
  • Append the following config to /etc/httpd/conf/httpd.conf to force SSL by redirecting non-SSL to SSL:
<VirtualHost *:80>
	ServerName gitweb.example.com
	Redirect permanent / https://gitweb.example.com/
</VirtualHost>
  • Modify /etc/httpd/conf.d/ssl.conf and add this to the default vhost:
SetEnv GIT_PROJECT_ROOT /data/git
SetEnv GIT_HTTP_EXPORT_ALL
ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

<LocationMatch "^/git/">
        Options +ExecCGI
        AuthType Basic
        AuthName "git repository"
        AuthUserFile /data/git/htpasswd.git
        Require valid-user
</LocationMatch>
  • Create git repo and give Apache rw permissions:
mkdir /data/git
cd /data/git; git init --bare myrepo
mv myrepo/hooks/post-update.sample myrepo/hooks/post-update
chown apache:apache /data/git -R

File post-update should now contain:

#!/bin/sh
exec git update-server-info
  • Create htpasswd file to protect repo:
htpasswd -c /data/git/htpasswd.git dev
  • Update SELinux to allow Apache rw access to repos:
semanage fcontext -a -t httpd_sys_rw_content_t "/data/git(/.*)?"
restorecon -v /data -R
  • Start Apache:
systemctl start httpd
systemctl enable httpd
  • Push to the repo from your client as follows:
git push https://gitweb.example.com/git/myrepo -f --mirror
  • Pull from repo to your client as follows:
git pull https://dev@gitweb.example.com/git/myrepo master
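
A quick way to confirm that authentication and the smart HTTP endpoint are working (it will prompt for the ‘dev’ password):

curl -u dev "https://gitweb.example.com/git/myrepo/info/refs?service=git-upload-pack"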

04 Jan 2017, 01:17

Australia and Asylum Seekers

Australia’s Immigration Detention Situation

Australia receives a lot of criticism from all sides, including from within, for its stance on illegal immigration. AFAIK, the current policy for anyone arriving illegally by sea is automatic detention in an offshore facility. The reason for this stance is deterrence - to discourage others from making the same journey. The automatic detention part is pretty obvious, but the offshore component is less so. From what I understand, the reason is legal: if Australia were to detain immigrants onshore, those immigrants would have many more options for pursuing legal challenges in Australian courts. When kept offshore, those avenues are denied to them.

Why is it that we have around 1000 people locked up in detention several years after the main influx arrived? As far as I know, the majority of these people have been assessed and found to be genuine refugees (as per the UN convention). So why are they still detained? I’ve found it incredibly hard to get any information to answer this question, so this post really doesn’t amount to more than speculation.

From what I understand, Australia is not obliged to settle these people even if they have been assessed as genuine refugees, and in fact the Australian government has said its policy is to never settle them under any circumstances! Instead, various deals have been made with other countries to take them, but for various reasons these deals have either fallen through or the asylum seekers have refused the offer of resettlement.

We are constantly told by the mainstream media that the detention centres are hell on earth, inhumane, etc. The whole issue is so heavily politicized that I find it very hard to trust either side’s narrative. The obvious question remains, however: why would an asylum seeker choose to stay in detention rather than be resettled somewhere?

It has also been reported that asylum seekers have been offered large sums of money to return to their home countries, but almost all have refused. If someone genuinely feared for their life on return, this would make perfect sense; however, I find it hard to believe that this is the reality for most of these people. So what other explanation can there be?

The answer to me seems clear: they believe they still have a chance of being settled in Australia, despite all the government’s proclamations to the contrary. And to get into Australia (or any comparable Western democracy) is by far and away the best possible outcome in their minds - worth staying in detention for, and enduring whatever hardships exist there. We live in a very connected world, and these people are no exception. They are in touch, via the Internet, with people who have been settled in various places around the world and can tell them first-hand what conditions are like on the ground. Australia scores near the top of any desirability list.

Where is the balance between rule-of-law and compassion?

Clearly Australia must control its borders. It must have the option of denying entry to people who do not qualify as genuine refugees.

But what is a ‘genuine’ refugee? There must be a definition that countries can use which holds up to legal scrutiny. The UN refugee convention defines a refugee as:

A person who owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group or political opinion, is outside the country of his nationality and is unable or, owing to such fear, is unwilling to avail himself of the protection of that country; or who, not having a nationality and being outside the country of his former habitual residence as a result of such events, is unable or, owing to such fear, is unwilling to return to it. (Refugee Council)

I have no idea what process immigration officials use to score someone against that definition. What I do know is that the issue of illegal immigration has been with us for a long time, so this isn’t a new problem. I can only hope the experts have ways and means that don’t involve bribes or coercion.

The Australian Government is in an almost impossible position. How do you assess an asylum claim accurately with little or no documentation, and where the person in question, in a lot of cases, is actively seeking to derail the process for fear that a true assessment might result in an unfavourable outcome for them? If a true assessment can’t be made, what then? The person is detained until such time as:

  • new information comes to light relevant to the asylum application
  • the person voluntarily chooses to return home
  • Australia errs on the side of leniency and grants refugee status regardless.

We have a classic game of brinkmanship, with both sides waiting for the other to blink: the government waiting for the asylum seeker to give up and go home; the asylum seeker hoping for the politicians to cave in to political pressure.

09 Jul 2016, 13:38

White Australia, Blessing in Disguise

For approximately 100 years (c.1850-c.1950) Australia had a policy of preferring immigrants from Britain and other European countries. The origins of the policy are rooted in the gold rushes of the 19th century, and in tensions between the majority white miners (both local and immigrant) and Chinese immigrant miners. In many cases the Chinese miners were more successful than their white counterparts due to their hard work ethic and ability to work cooperatively amongst themselves - traits that hold true today. This success, combined with the social barrier that a different culture and language present, caused much resentment among whites, leading to protests and riots.

The subsequent restrictions on non-white immigration were later referred to collectively as the ‘White Australia Policy’ (WAP), although this was never the official name. It should also be noted that immigrants of non-white ethnicities were never expelled from the country on the basis of their ethnicity during this period. Many Chinese Australians can trace their ancestry back to the miners of the 19th century.

Today the common narrative in academia and the media is that this policy was a bad thing, a stain on Australia’s history, and that we’ve progressed beyond such primitive and parochial ideas. To the contrary, I think the WAP has been a net positive, and modern Australians owe a debt of gratitude to the political leaders of that era for their foresight and resolve. The WAP allowed Australia to pass through its crucial adolescent years as a nation with one overarching culture - British - with other highly compatible Western European cultures mixed in. This provided a solid foundation on which to build a national identity - something that modern Australians (of all ethnic backgrounds) can unite around.

This is not to say that people with white skin or European ancestry should get any preferential treatment or privileges - not at all!! It is to say, however, that someone from another cultural background living in Australia should be expected to learn to speak English and adopt the values of Western civilisation: respect for individual rights, respect for the sanctity of life, the rule of law, and treating people fairly irrespective of gender, age, or sexuality.

If Australia had not implemented the WAP, it is unlikely Australia would have developed a national identity spanning the entire continent. As such, it’s unlikely that we could have formed the Commonwealth of Australia, and we may instead have ended up with smaller nation states, divided by culture and language. In such a scenario, it’s hard to see how we would have withstood military interference from the likes of China or Indonesia, as it’s unlikely we would enjoy the strong military and economic ties we currently have with America.