Would InternetWatch Actually WORK?


The Australian Government’s plans to introduce mandatory ISP filtering have caused something of a media frenzy.

Most of the debate has been conducted across an imaginary line: advocates of free speech and critics of censorship on one side, and organisations which seek to protect children and adults from indecency and harm on the other. This debate has at times descended into a shouting match over statistics and objectives, confusing Australians who are unsure how they feel about the issue.

Supporters argue that the clean feed proposal would merely bring the internet into line with existing regulation on traditional media such as print publications, television and radio. Meanwhile, opposition to the plan is based on a number of concerns, and of these, possibly the most confusing for many people is the claim that the filter simply won’t work.

So what are the technical problems that the opponents are claiming? Are they necessarily deal breakers which make the filtering plan unworkable, or are these arguments just what filter supporters claim they are: an excuse to try and maintain the free availability of pornography?

Nobody knows precisely what screening method would be used if the filter were to go ahead. The Government trials considered several commercial products, referred to by code names, and each of them could use any one of several methods, or a set of the available methods in combination.

One particular filtering method, "DNS poisoning", works by modifying the usual order of business when a user at home wants to access a particular website: in effect, their web browser asks a computer at, say, www.newmatilda.com to send them the site’s front page.

Normally when this process occurs, a computer called a Domain Name Server looks at the address a user has asked for and translates it into an IP address, for example 208.43.129.135. It uses this number to find the website and return the page that the user has requested. With DNS poisoning, the Domain Name Server (DNS server) holds a list of websites for which it will deliberately return an incorrect IP address, misdirecting the request to a Government website so that an "Access Denied" page is returned. (A civil liberties question is raised here, as nobody yet knows whether attempted access to denied content will be logged, how long records would be kept if it is, or whether individuals will be tracked down for questioning.)
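For the technically curious, the difference between an honest lookup and a poisoned one can be sketched in a few lines of Python. Everything here is invented for illustration: the blocklist, the banned hostname and the block-page address simply stand in for whatever a real product would use.

```python
import socket

# Normal resolution: ask a DNS server to translate a hostname into an
# IP address, then contact that address.
print(socket.gethostbyname("www.newmatilda.com"))  # e.g. 208.43.129.135

# A poisoned resolver, sketched as a dictionary check. The blocklist,
# the hostname and BLOCK_PAGE_IP are all invented for illustration.
BLOCK_PAGE_IP = "192.0.2.1"              # stand-in "Access Denied" server
blocklist = {"banned-site.example"}

def poisoned_lookup(hostname: str) -> str:
    if hostname in blocklist:
        return BLOCK_PAGE_IP               # misdirect the request
    return socket.gethostbyname(hostname)  # honest answer otherwise
```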

This blocking technique is essentially like modifying your mobile phone’s address book so that criminals’ contact details are replaced with the phone number of the authorities. Were you to attempt to talk to one of those prohibited people, you would look up their (supposed) phone number in the address book and find yourself shortly thereafter on the phone to the police, who would inform you that you are prohibited from talking to that person.

While DNS poisoning is only one of the methods the Government is choosing from, each of the available techniques, alone or in combination, is subject to its own significant limitations, including, as we will see shortly, creating further technical problems and impairing speed.

Another of the problems with filtering systems concerns the way the software and its designers choose which sites to block. Filtering software typically groups websites into categories such as "news and media", "adult", "violent" and so on. Companies that produce the software maintain lists of sites to block within these categories, either returning a spurious IP address or simply abandoning the process of fetching the page. Some categorise and block automatically, setting software to trawl through the internet looking for keywords that identify a site’s type, while others employ staff to do the categorising based on their own browsing or on complaints. Usually it’s a combination of the two.

The first technical problem occurs when automatic categorisation is faulty. Computers can’t really objectively look at a website and make a decision about its intent. A site about herpes, for example, will almost certainly contain words that an automatic categorising program will consider suitable only for adults. Similarly, a site about breast health will undoubtedly contain words and images that could trigger a filter.
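To see why, consider a deliberately naive keyword categoriser, sketched below in Python. The keyword list and function are invented, and real products are far more elaborate, but they share the same blind spot: a word alone says nothing about a page’s intent.

```python
# A toy keyword categoriser illustrating why automatic classification
# overblocks. The keyword list is invented for illustration.
ADULT_KEYWORDS = {"breast", "sex", "nude"}

def looks_adult(page_text: str) -> bool:
    # Flag the page if any keyword appears anywhere in its text.
    words = set(page_text.lower().split())
    return bool(words & ADULT_KEYWORDS)

print(looks_adult("Regular breast screening saves lives"))    # True: overblocked
print(looks_adult("Weekend footy scores and match reports"))  # False
```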

When a site is miscategorised and blocked, it is referred to as "overblocking", and in the recent trials in Tasmania the software tested wrongly blocked between 1 and 6 per cent of the sites it should have allowed. This might seem small, but if you visit 100 websites in a given month, between one and six will be inaccessible. It’s inevitable that sooner or later material needed for work purposes or a school report will suffer this fate.

Then there is what’s called "underblocking", which is when a website is not blocked even though it should have been according to the filter’s criteria. This happens often and for a range of reasons, including the failure of the software to screen websites that have very little text (and may use pictures instead), or when a single website has multiple names. The Tasmanian trials showed between 88 and 97 per cent effectiveness, or if you prefer, between 3 and 12 per cent ineffectiveness. So roughly between one in eight and one in 30 of the websites that the filter is designed to block can actually be accessed.
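The multiple-names problem in particular is easy to picture: a blocklist keyed on exact hostnames has no way of knowing that a second, unlisted name points at the same content. A toy sketch, with both domains hypothetical:

```python
# Underblocking sketch: an exact-hostname blocklist misses the same
# content published under a second, unlisted name.
blocklist = {"prohibited-site.example"}

def is_blocked(hostname: str) -> bool:
    return hostname in blocklist

print(is_blocked("prohibited-site.example"))         # True: blocked
print(is_blocked("mirror.prohibited-site.example"))  # False: slips through
```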

Filter advocates such as Bernadette McMenamin from Childwise maintain that this is all irrelevant, because the aim of the filter is to make children safe, and if it makes them any safer at all, it’s still worth doing. In response, critics have pointed out that the filter is easily bypassed by people who intend to view prohibited material, and that’s quite true.

If we refer back to the analogy of the address book that protects us from speaking to criminals, we can ask: how would this be circumvented? You’d have several options, including using someone else’s address book, not using an address book at all, obtaining a phone line in another country, or ringing an accomplice who is not under your restrictions and asking them to forward your call. All of these have equivalents that can be used to get around the filter.

Several websites exist that, when visited, ask for the name of another website, then fetch the target site’s content and display it within their own page. The filter ignores this, because the only website you asked for was an approved one; it was the approved site that then requested the prohibited one on your behalf. This method is called proxying.
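In rough Python terms, the trick looks something like the sketch below: the only address the filter ever sees belongs to the proxy. The proxy URL here is a placeholder, not a real service.

```python
import urllib.parse
import urllib.request

# Proxying sketch: the browser only ever contacts the proxy site, so a
# name-based filter sees an approved address. The proxy URL below is a
# hypothetical placeholder.
def fetch_via_web_proxy(target_url: str) -> bytes:
    proxy_page = ("https://proxy-service.example/fetch?url="
                  + urllib.parse.quote(target_url, safe=""))
    with urllib.request.urlopen(proxy_page) as response:
        return response.read()  # the target's content, served by the proxy
```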

Then there is the method that works a bit like not using your mobile phone address book at all: you simply know the number and key it in yourself. If you type "http://208.43.129.135/" into your browser’s address bar you’ll be taken to www.newmatilda.com without going through the normally invisible step of looking up the number. You’re not relying on your directory being accurate, because you already know the number. Many filters can be bypassed in this fashion because they watch for the name www.newmatilda.com, which is never looked up, rather than the number itself.
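A rough sketch of the same idea, using only Python’s standard library. One caveat: because a single IP address can host several sites, the request includes a Host header naming the site we want, and whether a particular site answers on its bare IP varies in practice.

```python
import urllib.request

# Direct-by-IP sketch: requesting the numeric address never triggers a
# DNS lookup, so a DNS-based filter has nothing to intercept. The Host
# header names the site, since one IP address can serve several sites.
request = urllib.request.Request(
    "http://208.43.129.135/",
    headers={"Host": "www.newmatilda.com"},
)
with urllib.request.urlopen(request) as response:
    print(response.status, len(response.read()))
```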

There’s also the option of making a change to your home internet connection so that you don’t use the DNS servers your internet company provides, using others instead. This is like using someone else’s address book: if the numbers in it are correct, you’ll speak to whoever you wanted to.
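Programmatically, this amounts to pointing your queries at a different server. The sketch below uses the third-party dnspython library (pip install dnspython), with the public resolver 9.9.9.9 standing in for any DNS server outside your ISP’s control.

```python
# Querying a non-ISP resolver with dnspython. The 9.9.9.9 server is
# just one example of a public resolver; any trustworthy server
# outside the filter would do.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore the ISP's settings
resolver.nameservers = ["9.9.9.9"]

for record in resolver.resolve("www.newmatilda.com", "A"):
    print(record.address)
```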

There is another method that gets around any filter. It is more complicated, but increasingly accessible: using a Virtual Private Network (VPN). This is a technique used frequently by everyday people who work from the road or at home to access resources at their workplace, and it essentially creates a secure "tunnel" to somewhere else on the internet. All browsing is then done via this secure connection and bypasses the filters at your internet company entirely, regardless of how they work.
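As a sketch of what browsing through such a tunnel can look like, the snippet below assumes a VPN or SSH tunnel is already exposing a SOCKS proxy on the local machine at port 1080 (an assumption, not a given), and uses the third-party requests library with its optional SOCKS support (pip install requests[socks]).

```python
import requests  # third-party; pip install requests[socks]

# Tunnelling sketch: assumes a SOCKS proxy is already listening on
# localhost:1080, provided by a VPN or SSH tunnel. The "socks5h"
# scheme sends DNS lookups through the tunnel too, so neither the
# ISP's filter nor its poisoned DNS ever sees the name.
proxies = {
    "http": "socks5h://127.0.0.1:1080",
    "https": "socks5h://127.0.0.1:1080",
}
response = requests.get("http://www.newmatilda.com/", proxies=proxies)
print(response.status_code)
```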

So filtering is inherently ineffective: it blocks too much, it blocks too little, and it is easy to bypass entirely. From a technical point of view, what else is there to be concerned about?

Let’s think for a moment of the internet as a telephone exchange where we call an operator for each phone call and ask them to connect our call. In this situation we can easily envisage that operator refusing to connect unauthorised calls. This would be fine at a business of even 100 staff — in a system of that size we’d perhaps only wait for a few minutes for the operator to become available to place a call. But what happens if the entire country goes through that operator?

Any software filtering the internet connections of all Australians would need to consult its lists every time any internet user makes a request, to decide whether access can be permitted. That’s thousands upon thousands of requests each second, a truly staggering number per day. I’d feel sorry for any operator in this position.
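A back-of-envelope sketch gives a feel for the arithmetic. Even with the blocklist held in a fast in-memory structure, the check must run for every single request; the list size and request count below are invented numbers, purely for illustration.

```python
import time

# Back-of-envelope sketch: even with the blocklist held in a fast
# in-memory set, the membership check has to run for every request.
blocklist = {f"site-{i}.example" for i in range(1_000_000)}

requests_to_check = 1_000_000
start = time.perf_counter()
for i in range(requests_to_check):
    _ = f"site-{i}.example" in blocklist   # one check per request
elapsed = time.perf_counter() - start

print(f"{requests_to_check:,} checks took {elapsed:.2f} seconds")
```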

It’s easy to feel sorry, too, for staff at internet service providers, who will need to deal with systems that don’t clearly explain what has been blocked and why. Call centres would be swamped with calls from customers asking why they can’t read a particular news site, only for staff to find it has been accidentally blocked and to have to explain that there is nothing wrong with the internet service itself.

The Tasmanian filtering trial, in simulated situations using groups of 30 users, showed that imposing this process, whereby software has to vet internet requests as they’re made, slowed internet connections by over 75 per cent in two cases. The other four products varied, with the one that had the least impact slowing connections by only 2 per cent.

The problem is that the filters which were more effective were also slower. Worse still, it’s important to remember that the trial only simulated 30 users under ideal conditions. We can only wonder how much worse such a system would perform filtering tens of thousands of connections, making mistakes, and generating support calls and longer hold times. No other country in the Western world has a mandatory ISP-level filter. The few that have optional ones (such as the United Kingdom) are designed only to protect against somebody accidentally stumbling across a mere few hundred listed sites. The mandatory filter in China slows and destabilises the country’s connections significantly.

So there it is. Setting the substantial issue of free speech aside, we would still need to be satisfied that the scheme is technically realistic, and we now know that it is not: mandatory ISP filtering cannot achieve its stated aim of protecting children, nor can it prevent access to prohibited material by determined users.

Proceeding with the filter idea will only lead to higher costs, greater internet unreliability, and lower speeds.

