Comments on Mandatory Filtering
and Blocking by ISPs

18 March 2003

"Blocking access to certain Internet material by Internet service providers or 'backbone' providers will be largely ineffective."

CSIRO (Commonwealth Scientific and Industrial
Research Organisation) 1999


On 3 March 2003, The Australia Institute commenced campaigning for mandatory filtering of all Internet access by Australian Internet Service Providers (ISPs). The proposed filtering scheme has been widely misreported in the media as being "opt-out" for adults. This is not correct. The Institute's proposal, as documented in its report, is that all adults' Internet access be restricted by monitoring, filtering and blocking.

On a number of occasions during the past twelve months, Clive Hamilton, the Executive Director of The Australia Institute and an environmental economist, has expressed his opinion that:

"For all of the hype, the information superhighway is principally a conduit for pornography." (Hamilton, 2002)

Given such a perception, it is not surprising that the Institute proposes a system that, to be effective, would have to block innocuous and educational information and infringe Internet users' privacy.

Executive Summary

  • Media reports stating that The Australia Institute's proposed mandatory filtering scheme would be opt-out for adults are incorrect.
  • The proposed scheme would facilitate monitoring and tracking of Internet use and infringe Internet users' privacy.
  • The scheme requires the Australian government to hand over censorship decision making responsibilities to artificial intelligence and commercial enterprise.
  • Figures quoted in The Australia Institute's report indicate an error rate for filtering and blocking software lower than that reported in recent studies.
  • The report makes assertions about the effectiveness, or otherwise, of filtering products based on statistics published by the ABA that provide no grounds for such assertions.
  • In proposing the blocking of only "pornographic material" on the World Wide Web, the report suggests a limited understanding of the nature and extent of the problem of protecting children online.
  • The report fails to recognise that for the blocking scheme to be effective, ISPs would have to block access to a vast amount of innocuous information and useful online resources, such as language translators, privacy and anonymity tools, document archives such as the WayBack Machine and Google's cache, etc.
  • The report contains no information to support the authors' opinions that the costs to the industry and Internet users, in terms of financial outlays and slower Internet access, have been exaggerated.
  • The report provides no indication that The Australia Institute investigated the technical issues and obstacles involved, nor any information to suggest that the findings of the CSIRO, as reported in three government commissioned reports, are incorrect.
  • Claims, as reported in the media, that ISPs benefit financially from downloading of pornographic material indicate failure to consider or understand how ISPs pay for, and charge for, Internet access.
  • Implementation of the mandatory filtering scheme proposed by The Australia Institute would result in significantly greater restrictions on adults' freedom to choose to read what they want. It is difficult to see how this can be perceived to be a 'liberalisation of Internet censorship laws' as claimed by the Executive Director of The Australia Institute (Hamilton, 2003a).


Background: Two Studies of Youth Exposure to Sexual Material on the Internet

The Australia Institute commenced its campaign for mandatory filtering by ISPs with a media release announcing the results of a Newspoll telephone survey of 200 16-17 year olds. The Institute made a summary of its report 'Youth and Pornography in Australia: Evidence on the extent of exposure and likely effects' available on its website. However, the summary does not contain details of the methodology of the survey, nor a bibliography identifying various other studies referred to therein. According to a representative of the Institute during a telephone inquiry, the full report is only available after payment of AUD$21 and is then sent from the Institute's Canberra office by postal mail. The Institute issued three media releases and summaries of three reports over three consecutive days.

According to The Australia Institute's summary report:

"Respondents to the survey were first asked, 'When using the Internet yourself, have you ever seen sex sites accidentally or when you didn't mean to?' ... Eighty-four per cent of boys and 60 per cent of girls say they have been exposed accidentally to sex sites on the Internet. It is fair to conclude that anyone who uses the Internet extensively has a high probability of coming across sex sites when searching for something else or being sent pornographic links or images via e-mail." (Flood & Hamilton, 2003a)

The Australia Institute's conclusions and the responses to the questions posed in its survey are particularly interesting when compared with a report released on 25 February 2003 on the findings of a survey undertaken by the Crimes against Children Research Center ("CCRC"), University of New Hampshire, titled "The Exposure of Youth to Unwanted Sexual Material on the Internet: A National Survey of Risk, Impact, and Prevention" (Mitchell, Finkelhor & Wolak, 2003). The funding for the CCRC study was provided by the (USA) National Center for Missing and Exploited Children. As remarked in Net Family News on 7 March 2003, "It's apples and oranges to compare the two reports' findings".

The Crimes against Children Research Center interviewed 1507 youth aged 10-17 years. The CCRC study found that:

  • "Twenty-five percent of youth had unwanted exposure to sexual pictures on the Internet in the past year". This 25% figure stands in stark contrast to the 84% figure reported by The Australia Institute, however, the difference may be due to the Institute asking 200 16-17 year olds if they had ever seen sex sites accidentally, while the US study asked 1507 10-17 year olds if they had had "one or more unwanted exposures to sexual pictures while online in the past year" (emphasis added).
  • "Older youth had more exposure than younger youth. In fact, more than 60% of the unwanted exposures occurred to youth age 15 or older. Only 7% of the unwanted exposures were to 11- and 12- year-old youth, and none of the 10-year-olds reported unwanted exposures."
  • "In 13% of incidents (surfing and e-mail combined), the youth said they did know the site was x-rated before entering. (These were all encounters they had described earlier as unwanted or unexpected.)...It is not clear to what extent it was some curiosity or just navigational naivete that resulted in the opening of the sites in spite of the prior knowledge."
  • "Most of the imagery was simply of naked persons, but 32% showed people having sex, and 7% involved violence in addition to the nudity and sex."

Considerably more detail regarding the findings of the Crimes against Children Research Center's study is available in the CCRC's report. It presents the findings of a broader range of survey questions than does the report made available online by The Australia Institute. It also includes information about high-risk activities by some teenagers that apparently contribute to the risk of unwanted exposure and could be reduced by educating teenagers about such activities.

Apparently on the basis of its survey of 16-17 year olds and a number of assumptions related thereto (some of which may be questionable in view of the findings of the Crimes against Children Research Center's study), The Australia Institute decided that all Australians should be forced to use filtering software.


The Australia Institute's Mandatory Filtering Proposal

The Australia Institute's report provides no evidence to suggest that mandatory filtering would be effective in protecting youth, and yet it seeks to over-ride parental rights to decide on the best means of protecting their children.

Research conducted by the Australian Broadcasting Authority ("ABA") during 2000-2001 found that "eighty four per cent of parents supervised their children's Internet sessions and felt they could trust their children to respect family values in their use of the Internet" and "challenges the popular belief that parents lag behind their children in their interest and proficiency with online technology. Most often the household Internet 'expert' is an adult" (ABA, 2001).

Parents who are Internet 'experts' are very likely to be well aware of the problems inherent in using filtering software. This may account for the reportedly low use of filters in Australian homes notwithstanding that ISPs have been required to make these available to customers since January 2000 and at cost price since May 2002. According to a Telstra spokesperson, the number of customers using a filtering product is as low as 1 per cent of its customer base (The Age, 2003).

An article in the Sydney Morning Herald on 4 March 2003 by Adele Horin misleadingly reported that:

"Under the [Australia Institute's] scheme, Australian internet service providers would be required by law to filter all internet content for pornography. In a reversal of the current system, adults would be able to opt out of - rather than opt for - a filter system." (Horin, 2003)

Similar misreporting of the proposal subsequently occurred in a variety of other media reports.

The Australia Institute's report makes clear that adults would not be permitted to opt out from having their Internet access monitored and filtered at all, and would only be allowed to opt out of resultant blocking of access to a very limited extent. It states:

"...a much more effective method of restricting access of children to Internet sex sites would be to require all Australian ISPs to apply filters to all content. However, end users would have the option of requesting that content to their home or office computer not be filtered -in other words, adults could 'opt out' of filtering. In this case, adult users would be permitted access to websites that have been classified as X-rated.
...
Mandatory ISP-filtering
- All Australian ISPs required to filter all material for prohibited content.
- Adult users may opt out of filtering and receive X-rated content.
- Website owners may apply to have their sites classified and thereby exempted from filtering." (Flood & Hamilton, 2003b)

As stated in an article by Michael Flood, one of the authors of the Australia Institute report, in The Age on 5 March 2003:

"...adult computer users could opt out of filtering, to gain access to websites classified as X-rated using the same system now used for videos and magazines." (Flood, 2003)

In summary, adults would have their access to all content on the Internet filtered, and would only be permitted to opt out of blocking to the extent of being permitted to access a minuscule proportion of content on the world-wide Internet that had been pre-classified "X" by the Australian Office of Film and Literature Classification ("OFLC"). As discussed later herein, there will never be more than a minuscule proportion of classified content due to the size of the Internet and the costs of classification.

The proposal, on the surface, may sound practical and reasonable. However, while The Australia Institute's report contains vast detail about pornography on the Internet, it contains no suggestions whatsoever for overcoming the significant technical and practical difficulties of implementing effective mandatory ISP filtering. It also fails to mention that, for the scheme to be effective, adults' and children's access to a vast amount of innocuous information and online services such as language translators would have to be blocked.


Who would decide what is blocked?

The Australia Institute report recommends that "All Australian ISPs [be] required to filter all material for prohibited content". However, the report does not address the fact that ISPs could not even attempt to do so unless they are first told which particular content is "prohibited content" as defined in Australian legislation. In almost 3 years (1 Jan 2000 to 31 Oct 2002), the Australian Broadcasting Authority had only identified and notified ISPs and/or filter vendors of 1287 items of content (284 items of "prohibited content" and 1003 items of "potentially prohibited content"). Blocking only this number of items would not effectively protect children.

The Australia Institute report states:

"It may not be necessary for a government authority to screen content and constantly update black lists [if ISPs are required to install only filters that have been approved]." (Flood & Hamilton, 2003b)

The above appears to indicate that a government authority could screen content and constantly update blacklists. However, such a process would certainly not be effective in preventing access. Given the millions of web pages that could contain prohibited content, it would be physically impossible for a group of humans, even a very large group, to find and classify all potentially prohibited content, or even a large proportion of it. The popular search engine Google provides Internet users with immediate access to over 3 billion Web documents. As well, Google provides access to a 20-year Usenet newsgroup archive consisting of more than 700 million messages.
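
The physical impossibility can be illustrated with a rough back-of-the-envelope calculation. In the Python sketch below, the review rate and working-days figures are assumptions for illustration only; the 3 billion page figure comes from the text above.

    # Back-of-the-envelope estimate of the human effort needed to review
    # the indexed Web once. Review rate and working days are assumptions.
    WEB_PAGES = 3_000_000_000   # pages indexed by Google (per text above)
    PAGES_PER_DAY = 500         # assumed pages one classifier reviews daily
    WORK_DAYS_PER_YEAR = 240    # assumed working days per year

    pages_per_classifier_year = PAGES_PER_DAY * WORK_DAYS_PER_YEAR
    person_years = WEB_PAGES / pages_per_classifier_year
    print(f"Person-years to review the Web once: {person_years:,.0f}")
    # Prints 25,000 -- and that is before any page changes or is added.

On these assumptions, even a staff of one thousand full-time classifiers would need a quarter of a century to make a single pass over the existing Web.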

While Dr Clive Hamilton of The Australia Institute remarked on ABC Radio National on 6 March 2003 that it is easy to find pornographic material and one only needs to type 'sex pictures' into a search engine (Hamilton, 2003b), it should be noted that blocking all 3,200,000 pages returned by such a search on, for example, Google (as at 17 March 2003) would also block, among other things, newspaper articles reporting Dr Hamilton's remarks.

Mandating effective filtering by ISPs would leave ISPs with no alternative other than to purchase and install blocking software, most of which is made and sold by American filtering companies. These vendors, like ISPs, cannot know what content is "prohibited content" under Australian law unless the content has been classified by an Australian government authority and the filter vendors notified accordingly.

Hence if ISPs were required to install such filters, the decision as to what is blocked would be left principally to artificial intelligence (software "guessing" engines) and the opinions of employees of filter vendors. Thus, in practice, The Australia Institute's proposal requires the Australian government to hand over censorship decision making responsibilities to American business (or give a monopoly to the one Australian filter vendor currently on the approved list).

Irrespective of the location of filter vendors, Australian government classifiers often disagree over what should, or should not, be censored, and their decisions are from time to time the subject of considerable public controversy. It is ludicrous to propose that Australians' freedom to read should be restricted by decisions made by artificial intelligence and the opinions of commercial filter vendors. Existing Internet censorship legislation requires classification decisions to be made by Australian government censorship agencies. This role should not be handed over to businesses, whether ISPs or filter vendors.
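
To see why decisions made by "guessing engines" are so unreliable, consider a minimal sketch of a keyword-based filter. This is an invented toy example, not any vendor's actual algorithm:

    # Toy keyword filter: an invented example of a "guessing engine",
    # not any vendor's actual algorithm.
    BLOCKED_KEYWORDS = {"sex", "porn", "x-rated"}

    def is_blocked(page_text: str) -> bool:
        # Block a page if any word on it matches the keyword list.
        words = {word.strip(".,;:'\"").lower() for word in page_text.split()}
        return not BLOCKED_KEYWORDS.isdisjoint(words)

    news_report = ("Dr Hamilton remarked that one only needs to type "
                   "sex pictures into a search engine.")
    print(is_blocked(news_report))  # True: the news report itself is blocked

The same rule would block safe-sex information, court judgments and, indeed, this very document.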


Effectiveness of Filters

The Australia Institute report states:

"The success or failure of the proposed system depends critically on the effectiveness of filtering." (Flood & Hamilton, 2003b)

and

"If ISPs are required to install only filters that have been approved, and approval of a filtering device is contingent on its having a high degree of effectiveness, then in order to retain official approval the filter makers would have a strong incentive to ensure that prohibited content is included on their lists as soon as possible." (Flood & Hamilton, 2003b)

It is difficult to see how mandatory filtering by ISPs would provide any greater incentive for filter makers to ensure prohibited content is included on their lists than they currently have when competing with other vendors of approved filters for sales in the open market. It seems equally likely they would have less incentive after their filter had been purchased and installed by ISPs on government demand, given potential lengthy delays in the disendorsement of an ineffective approved filter.

For example, although a CSIRO study commissioned by the ABA prior to November 2002 found several 'approved' filters failed to block a significant proportion of content the ABA had notified the filter vendors about, those filters remain approved as at March 2003. This type of problem would need to be resolved. If it were dealt with in relation to the existing scheme, parents might have more confidence in choosing and using an 'approved' filter, without any need for mandatory ISP filtering, which would leave consumers with less, if any, choice as to the most suitable filter for their family's needs.

Filter Overblocking and Underblocking

The Australia Institute report asserts:

"Filters vary in their effectiveness. The ABA provides data on the effectiveness of scheduled filters, measured by their rates of failure to block content identified by the ABA as prohibited. The results are reproduced in Table 1. While some filters are very effective others, notably Cyber Sentinel, Interscan Web Manager and NetNanny 4.0, are so ineffective as to be almost useless as devices for filtering out pornography on the Internet. The ABA has indicated that it will recommend their removal from the schedule (ABA 2002, p. 7)." (Flood & Hamilton, 2003b)

There is no evidence in the Australia Institute's report to support the claim that the above-mentioned filters are "so ineffective as to be almost useless as devices for filtering out pornography on the Internet", nor did the ABA make such a claim in the ABA report referenced by The Australia Institute.

The figures in Table 1 provide no indication whatsoever about the filters' overall effectiveness or otherwise, nor does the ABA's Table 1 purport to do so. The ABA

"commissioned the CSIRO to test that scheduled filters block content that has been the subject of notifications by the ABA. Each scheduled filter was tested using some 200 URLs that were notified by the ABA in the period 1 June to 30 September 2002. The results of the tests are summarised in Table 1." (ABA 2002)

Hence, the figures only show that some approved filter vendors had not added all items of content notified to them by the ABA during a four month period to their blacklists. While this is a serious problem in relation to products remaining approved, the filters that failed to block some ABA notified content may nevertheless block many thousands of other items of pornographic material. As mentioned earlier herein, in almost three years to 31 October 2002, the ABA had only notified filter vendors of approximately 1300 items of content. Furthermore, filters that did block most of the content notified to them by the ABA may have a low success rate in blocking the many thousands of pornographic pages that have never been identified by the ABA and/or may also incorrectly block a large amount of material that is not pornographic.

For example, the ABA's Table 1 referred to in The Australia Institute's report stated the Internet Sheriff product failed to block 8% of items notified to them by the ABA, one of the lowest rates of failure in that regard. However, a report on tests undertaken by the CSIRO indicated that Internet Sheriff has a high rate of incorrect blocking. According to the CSIRO report, "TelnetMedia [developer of Internet Sheriff] set up a proxy-server with the following categories blocked: Drugs illegal, Vilification and intolerance, Antisocial and offensive, Adult sex (including Adult sex related, and Sex non-explicit). Other adult content categories, such as Adult Nudity, were allowed" (CSIRO, 2001). The CSIRO's report shows, however, that while Internet Sheriff blocked approximately 91% of "pornography/erotica", it also blocked a significant percentage of material that apparently should not have been blocked with the particular settings, for example:

  • Filtering Information 15%
  • Drug Education 18%
  • Medical/Health 26%
  • Sexual Health 33%
  • Sex Education 25%
  • Contraception 20%
  • Abortion 9%
  • Anti-racism/hate 8%
  • Politics 2%
  • Gay Rights/Politics 50%
  • Free Speech 15%
  • Art/Photography 43%
  • Nudism 27%
  • Swimsuit models 77%
  • Glamour/Lingerie models 80%

(Note: The percentages above are approximate within 1-2% as the CSIRO report shows the test results in bar charts.)

The Australia Institute report states:

"It must be acknowledged that filters can never be perfectly effective. All filtering technologies make errors of omission and commission - they 'overblock' legitimate materials, and they 'underblock' inappropriate materials. Parameters can be adjusted to reduce the occurrence of one type of error, but this always results in increasing the other type of error." (Flood & Hamilton, 2003b)

and

"A test of six popular blocking products found that at the most restrictive settings, 24 per cent of adolescent health information sites and 91 per cent of pornography sites were blocked, while at the least restrictive settings, 1.4 per cent of health sites and 89 per cent of pornography sites were blocked. This suggests that content filters can be set to less restrictive settings with no significant loss in their ability to block pornography (Larkin 2002)." (Flood & Hamilton, 2003b)

The above comments in the report refer to a study undertaken by the Kaiser Family Foundation (not Larkin as appears to be indicated above) and may provide a mistaken impression about the findings of the research. The findings were released on 10 December 2002 in the Kaiser Family Foundation's report titled See No Evil: How Internet Filters Affect the Search for Online Health Information.

The 1.4% figure concerning health sites incorrectly blocked is an average across a variety of types of health sites including, for example, sites providing information about diabetes, which are less likely to be incorrectly blocked than sites containing information about other topics such as safe sex. As the Kaiser Family Foundation states in the Executive Summary of its report:

"Even when set at their least restrictive blocking configurations, filters block an average of about one in ten non-pornographic health sites resulting from searches on the terms 'condoms,' 'safe sex,' and 'gay.' At the intermediate level of blocking, a substantial proportion of health sites resulting from searches on some sensitive topics are blocked:
Condoms <27% of health sites were blocked>
Ecstasy  <25% of health sites were blocked>
Gay       <24% of health sites were blocked>
Safe sex <20% of health sites were blocked>
...
Organizations providing information on sexual health are even more likely to have their Web sites blocked by some filtering product. For example, one in every three of the safe sex health sites studied (33%) were blocked by at least one filtering product at the least restrictive setting, one in two (49%) were blocked by at least one filter at the intermediate level, and more than nine in ten (91%) were blocked at the most restrictive setting."

The Foundation also reported that filters can reduce but not eliminate accidental exposure and that:

"Across the 24 health-related searches conducted on six different search engines, 1% of the results contained pornographic content. At the minimal blocking configurations, the products blocked an average of 62% of these 'inadvertently' retrieved porn sites. Thus, the products were noticeably worse at identifying pornography resulting from health searches than from deliberate searches for pornography (87% blocking rate)."
In summary, the study found that when the filtering products were set to their least restrictive setting:
  • an average of 10% of health sites resulting from searches on the terms 'condoms', 'safe sex', and 'gay' were incorrectly blocked;
  • an average of 13% of pornographic sites were not blocked during deliberate searches for pornography;
  • an average of 38% of pornographic sites were not blocked when this material was inadvertently retrieved when searching for health sites.

and when filtering products were set to their most restrictive settings the quantity of blocked health sites increased substantially (e.g. condoms 55%, safe sex 50%, gay 60%, pregnancy 32%) with only a minor (4%) improvement in their ability to block pornographic sites.

It should be noted that the Kaiser Family Foundation's study concerned blocking of health sites only, with a focus on those likely to be of interest to teenagers; it did not investigate the incidence of incorrect blocking of other types of sites. Detailed information on the findings and research methodology is available in the Kaiser Family Foundation's report.
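
The overblocking/underblocking trade-off acknowledged in the Institute's report can also be seen in a toy threshold model. All pages and scores below are invented for illustration:

    # Toy model of the overblock/underblock trade-off: the filter scores
    # each page and blocks pages scoring at or above a threshold.
    # All pages and scores are invented for illustration.
    pages = [
        ("porn site A",   0.95, "porn"),
        ("porn site B",   0.70, "porn"),
        ("safe-sex info", 0.65, "health"),
        ("condom info",   0.55, "health"),
        ("diabetes info", 0.10, "health"),
    ]

    for threshold in (0.5, 0.8):
        overblocked = [name for name, score, kind in pages
                       if score >= threshold and kind == "health"]
        underblocked = [name for name, score, kind in pages
                        if score < threshold and kind == "porn"]
        print(threshold, "overblocks:", overblocked,
              "underblocks:", underblocked)
    # At 0.5 both sexual-health sites are blocked; at 0.8 they are
    # allowed but "porn site B" slips through. No threshold fixes both.

Moving the threshold merely trades one type of error for the other, exactly as the Kaiser figures above demonstrate at real scale.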

Information and examples of the many other types of sites that have been found to be incorrectly blocked by filtering products are available in Internet Filters: A Public Policy Report, issued by the Free Expression Policy Project in late 2001. The report summarises more than 70 tests and studies of filtering products and documents massive over- and under-blocking by major filtering products.

Although filter makers claim their products have improved in recent years, reports evidencing incorrect blocking and the problems of using filtering software continue. Recently, for example, in February 2003 a new e-mail filtering system at the UK House of Commons was found to be "stifling debate among MPs over serious parliamentary business such as the Sexual Offences Bill". Sections of the Bill sent to parliamentary e-mail addresses were being blocked (BBC 2003).

Customisation of Filters

The Australia Institute reported that:

"[V]endors usually offer customisation options, for example, the capacity to block only particular categories of inappropriate content, to create local exceptions lists, or to allow degrees of filtering based on a child's age or grade." (Flood & Hamilton, 2003b)

While the above is correct regarding the usual offerings of vendors, the Australia Institute's proposal for mandatory filtering by ISPs would not make these options available to any greater extent than at present. The Australia Institute's proposal is to block "pornographic material" only, and as such it does not involve customisation options based on the age of children.

The Australia Institute report states:

"it should be noted that the filtering system proposed applies only to pornographic material on the worldwide web."
(Flood & Hamilton, 2003b)

The above suggests The Australia Institute has a very limited understanding of the nature of the problem of protecting children online. Pornographic material is available over the Internet from a number of easily accessible sources other than the World Wide Web. These include, for example, file servers (FTP), peer to peer networks, files distributed during Internet Relay Chat (IRC) sessions, and in Usenet newsgroups. In fact, in the 18 month period January 2000 to June 2001, over 50% of the items of "prohibited content" deemed Australian-hosted by the ABA were found in Usenet newsgroups, not on the World Wide Web. These items comprised the majority of those considered sufficiently serious to refer to Australian police. It is pertinent to note that if adults' access to overseas pornographic sites is blocked, persons wishing to access such material may resort to Usenet newsgroups where they would have a considerably higher risk of being exposed to child pornography than when accessing content on the World Wide Web.

The Australia Institute report does not provide a definition of "pornographic material", although it seems clear from the numerous examples of material cited that the Australia Institute intends this to cover only material depicting sex, in particular material that would be classified "X" and material that would be Refused Classification (classified "RC") where the latter concerns sexual activity, such as sexually explicit depictions of sexual violence.

Hence, the filtering system would not be designed to control access to the vast range of non-sex related content on the Internet that parents may wish to prevent their children from accessing, e.g. sites promoting use of illegal drugs, promoting racial hatred, and so on. The proposal also does not address the range of material that would be classified "R18" in Australia, a proportion of which contains depictions of rape and other forms of sexual violence, which are completely banned from the "X18" classification that the Australia Institute specifically seeks to have blocked under its mandatory filtering proposal.

The result of implementation of the Australia Institute's mandatory filtering proposal is likely to be two-fold insofar as its ability to assist parents to protect their children online is concerned:

  • parents who wish to use blocking/filtering software to protect their children from a wider range of content would have to pay twice for filtering software: once by way of increased Internet access fees paid to ISPs (covering the ISP's costs in installing computer equipment, and purchasing and maintaining filtering software that attempts to block "pornographic material") and again in purchasing filtering software for installation on the home computer that attempts to block the much broader range of content that is or may be unsuitable for children in various age groups.
  • parents unfamiliar with the Internet and/or the wide range of content accessible thereon are likely to acquire an even greater false sense of security than the existing Commonwealth legislation provides. They would be likely to be exposed to government claims that the Internet is safe for children because ISPs filter all content, without detail about the limitations of the filtering system.


Circumventing Filters

The Australia Institute report notes that:

"In addition, filters can be circumvented or defeated in various ways: children may disable or uninstall them, go around them (e.g. through a proxy server)..." (Flood & Hamilton, 2003b)

The above may be one of the reasons that the Australia Institute has proposed that filters be installed on ISPs' servers, that is, so that the end user cannot uninstall the filtering software. If so, however, it suggests that the Australia Institute may not be aware of the range of countermeasures, circumvention methods and difficulties associated with effective filtering even when the filter is installed on an ISP's system.

As reported by the CSIRO:

"One of the problems with blocking Internet content is that the filters are easily bypassed unless severe, and probably commercially unacceptable, restrictions are placed on the services offered. The OzEmail service mentioned [in the CSIRO report] is 'safe' but achieves this by severely limiting access to the possible range of Internet services. [For example, it is not possible to run a Java applet that opens a connection back to an application running on its Web server, such as might be used for Internet banking.]

Simpler measures, such as filtering Web pages to disallow access to specific sites and pages, are subject to simple countermeasures. ...

Web access is usually via TCP port 80 but a Web server can run through any port number quite easily. URLs can specify the port number to be used so it is trivial for a Web server to avoid filters that are looking for Web access only on port 80.

Sites that are on black lists can easily avoid their banning by changing their name or IP address, by adopting aliases or by redirecting traffic from other, non-banned, sites. The constant indexing of the Internet makes changing site names more feasible as the new names will soon be picked up through the search engines and then made available to inquisitive users." (CSIRO 1999b)
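
The port-number countermeasure described by the CSIRO can be sketched in a few lines. The filter rule below is an assumed naive example that inspects only default-port Web traffic, not any real product's logic:

    # Sketch of the port-number bypass the CSIRO describes. The filter
    # rule is an assumed naive example, not any real product's logic.
    from urllib.parse import urlsplit

    def filter_examines(url: str) -> bool:
        # Assumed rule: only traffic to the default Web port is inspected.
        port = urlsplit(url).port or 80
        return port == 80

    print(filter_examines("http://banned.example.com/page"))       # True
    print(filter_examines("http://banned.example.com:8081/page"))  # False
    # The same content served from port 8081 passes the filter unseen.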

If the Australia Institute's objective is the implementation of an effective filtering system that cannot be easily circumvented, the filter would have to block access to a vast amount of innocuous information and useful online resources. This includes, for example, language translators, privacy and anonymity tools, archives of documents like the WayBack Machine, Google's cache, search of Google's image archive, etc. These are blocked by, for example, N2H2, which is on the Australian scheduled list, because they are regarded as "loopholes". Of course, this results in massive overblocking.

In other words, for the Australia Institute's idea to be effective, ISPs would have to prevent Australians from undertaking a range of innocuous activities such as translating a page from say French to English, because the same tool can easily be used to get round a filter that does not block the translation service.



Technical Obstacles

The Australia Institute report asserts:

"There are no serious technical obstacles to an ISP-based opt-out filtering system." (Flood & Hamilton, 2003b)

However, the report provides no suggestion that the Australia Institute has investigated the technical issues and obstacles involved, nor does it mention the CSIRO's finding that:

"Blocking access to certain Internet material by Internet service providers or 'backbone' providers will be largely ineffective". (CSIRO 1999c)

Detailed information regarding the above finding is contained in the CSIRO's report Technical Aspects of Blocking Internet Content, April 1999, which included a case study of the performance degradation caused by effective Internet filtering. It followed a previous CSIRO report titled Blocking Content on the Internet: a Technical Perspective, June 1998. Both reports were commissioned by the (government) National Office for the Information Economy.

Although the Australia Institute's report does not provide any information to suggest the authors have investigated or understand the technical issues, an article in the Australian Financial Review on 11 March appears to shed some light on the background to some of The Australia Institute's opinions:

"Hamilton [Executive Director of The Australia Institute] has undertaken his own technical review of filtering software with the assistance of IT software company Telnet Media, the developer of a filtering software package that had one of the lowest failure rates in recent tests by the ABA.

A letter sent to Alston and Prime Minister John Howard by a product manager at Telnet, Fraser Larcombe, challenged the view held by many ISPs that making them responsible for filtering all internet content would slow down the internet for everyone." (Osman, 2003)

Telnet Media are the developers and vendors of the "Internet Sheriff" censorware product. (As mentioned earlier herein, the recent tests by the ABA did not investigate whether or not products such as "Internet Sheriff" had a low failure rate in blocking access to pornographic material on the Internet, as is suggested in the above article).

Telnet Media (previously Clairview Internet) were prominent in the debate concerning mandatory filtering by ISPs in 1999. They appeared before the Senate Select Committee on Information Technologies on 3 May 1999 during the Committee's public inquiry into a Bill involving mandatory filtering by ISPs. During testimony before the Committee, the representatives criticised various aspects of the CSIRO's findings and report and demonstrated Internet Sheriff. The bulk of their testimony appeared designed to give the Committee the impression that Internet Sheriff was an ideal product for mandatory installation by ISPs. However, some Committee members asked questions indicating they were not taken in by claims made by filter vendors. For example:

"Senator Calvert [Liberal Party] - You have told us about how your technology works basically...[b]ut one of the things that has been put to us in evidence is that blocking techniques might have the effect of blocking out wanted material rather than unwanted material. With your technology, is that possible?"

"Mr Jones [Chief Executive Officer, Clairview/Telnet Media] - My oath it is. You can make two mistakes. You can make mistakes whereby you accidentally block material out versus mistakes whereby you let material through. Will any blocking philosophy make mistakes? The answer is yes." (Senate Hansard, 1999)

Subsequent testing of the Internet Sheriff product demonstrated that if such technology were to be mandated by any government for use by adults, either large portions of the Internet would be frequently and incorrectly blocked, or pornographic material would often be let through, as detailed in EFA's Report Clairview Internet Sheriff: An Independent Review, May 1999.

According to information on Telnet Media's web site as at March 2003, it appears the artificial intelligence technology of Internet Sheriff has not changed substantially, if at all. It may have since been "fine-tuned" to avoid unintelligent blocking of every page on a web server such as OzEmail's, but there is nothing to suggest the technology would not still block a significant amount of non-pornographic content. Indications are to the contrary, according to the CSIRO's findings referred to earlier herein and a Sydney Morning Herald report dated 4 March 2003:

"Advertising agency George Patterson Bates has used Internet Sheriff to filter unwanted content from its 500 employees for almost two years.
Network manager Chris Robinson said the system has been almost 100 per cent successful in filtering out unwanted content, but there have been some problems with over-filtering - blocking desired content." (Lowe, 2003)

Given the way in which the Internet Sheriff technology operates, it is unlikely the full extent of overblocking would become apparent in a business environment of 500 employees, as distinct from use by the thousands of adults and children in Australian homes with widely varied interests.

Information regarding other technical issues is below.


Slowdown in Internet access speed

The Australia Institute says:

"In our view...the costs to the industry and Internet users, in terms of financial outlays and slower Internet access, have been exaggerated. (Flood & Hamilton, 2003b)

and

"Transmission of data through ISPs may slow down a little initially but it is to be expected that ISPs and filter makers would soon find ways of minimizing any disruption to information access. In fact, some of the biggest ISPs, such as AOL, already filter content for some customers (Thornburgh & Lin 2002, p. 272)." (Flood & Hamilton, 2003b)

The Australia Institute report presents no basis for the expectation that filter vendors would soon find ways of minimising disruption to information access merely because a government had mandated installation of their products by ISPs. Moreover, it appears the Australia Institute is not aware of the significant difference between ISPs filtering content for "some customers" and filtering it for all customers, as detailed in the CSIRO's reports.

The CSIRO also addressed this matter in their 2001 report, Effectiveness of Internet Filtering Software Products. Extracts from the report are below.

"The most common server-based filtering technology is based on proxy servers. Proxy servers sit in the path between the user and the Internet and can examine all requests and returned content on their way through. ... All clients must go through this proxy server to be able to access the Internet 'proper'.

Proxy servers have to handle large numbers of requests every second and will not normally have enough processor time available to run slow, content-based, filtering tools."

"The primary disadvantages of server-based filtering come from the scale of the task they face. Home filtering products are only working for one user and can afford to spend considerable (in a technical sense) time on checking requests and Web content. User response times will not be adversely impacted by a filtering product taking 0.1 second to examine content as it passes through. The same 0.1 second of processor usage would be unacceptable to an ISP serving hundreds or thousands of concurrent users."

"ISPs often provide their clients with both filtered and unfiltered service to overcome these problems. The optional filtered service provides a 'safe' but restricted environment to some of their customers, and the unfiltered service provide fast, low-cost access to everyone else. Making the filtered service optional will reduce costs because it will only ever be used by a subset of customers and so can make use of smaller filtering computers."

In other words, filtering software on ISPs' systems results in a traffic jam, much like a car accident on a busy road. Internet traffic banks up waiting to get past the choke point caused by the time the filtering software takes to check each item of content passing through the ISP's system. It should also be noted that a Web page is rarely one item of content. Pages often contain many images, and every image is a separate piece of traffic that the filter must screen.
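
The scale of that choke point can be put in rough figures. In the sketch below, the 0.1 second inspection time is the CSIRO's own example; the number of users, objects per page and page-view rate are assumptions for illustration:

    # Rough model of the filtering choke point at an ISP's proxy server.
    # The 0.1 s inspection time is the CSIRO's example; other figures
    # are assumptions for illustration.
    INSPECT_SECONDS = 0.1      # CPU time to examine one item of content
    CONCURRENT_USERS = 2000    # assumed concurrent users at one ISP
    ITEMS_PER_PAGE = 20        # assumed: one page plus embedded images
    SECONDS_PER_PAGEVIEW = 10  # assumed: each user views a page every 10 s

    items_per_second = CONCURRENT_USERS * ITEMS_PER_PAGE / SECONDS_PER_PAGEVIEW
    cpu_seconds_needed = items_per_second * INSPECT_SECONDS
    print(f"{items_per_second:,.0f} items/s needs "
          f"{cpu_seconds_needed:,.0f} CPUs' worth of filtering capacity")
    # Prints: 4,000 items/s needs 400 CPUs' worth of filtering capacity

On these assumptions a single mid-size ISP would need hundreds of processors dedicated to content inspection merely to avoid queueing delays.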

The Australia Institute's expectation of minimal disruption to access speed is also contrary to recent experience in China. As reported in The Australian (China censorship slows net) on 7 March 2003:

"China's internet users are suffering sharp slowdowns in access because of the communist government's heightened efforts to police online content, industry experts say. ...
China is trying to reap the internet's benefits while also controlling what its people read and hear. Authorities have invested both in spreading internet access and in installing technology to scan websites and email for content deemed subversive or obscene..."

Increase in Internet access costs

The Australia Institute report provides no grounds for the claim that "the increase in the cost of an Internet account is likely to be small" as a result of ISPs installing and operating filters.

While the report claims that ISP installation of blocking systems known as "white-lists" entails fewer financial costs than the use of server-side filters, whether or not that is correct, the proposal would require the use of server-side filters, not a "white-list" system. The Australia Institute report states:

"Some types of ISP-based filtering have proven very effective. 'Because they confine the user only to material explicitly considered appropriate, child-oriented content-limited ISPs provide the greatest degree of protection for children' (Thornburgh and Lin 2002, p. 285). ... As Thornburgh and Lin (2002, p. 293) note, at least in the case of child filters:
    'Use of content-limited ISPs appears to entail fewer financial costs than the use of server-side or client-side filters. ...the costs of filtered ISPs for this class of users will be relatively small.'" (Flood & Hamilton, 2003b)

The reason "the costs of filtered ISPs for this class of users will be relatively small" is because, as discussed earlier herein, the number of users would be relatively small. A "content-limited ISP" is an ISP who offers a service that provides access only to a small range of pre-approved content on the Internet. These services are commonly referred to as "walled gardens" or "white lists" and are normally designed for limiting the access of young children. These systems block, by default, access to all web pages that have not been reviewed by a human and determined to be suitable. Several white-list services are listed among the Australian 'approved filters', for example, Kidz.net, Too C.O.O.L. and AOL Kids Only (12 and under).

As stated by the CSIRO "More effective blocking might be carried out using a 'white list' of approved material, but creating and maintaining such a list is expensive and takes a lot of time and effort." (CSIRO 1999c)

White-lists are not designed for the type of filtering system proposed by the Australia Institute. Their report does not provide any information concerning the cost of the server-side filtering systems that would be necessary to implement their proposal. Moreover, it does not mention, nor provide any information contrary to, the finding in the CSIRO's report Technical Aspects of Blocking Internet Content that a "very considerable investment in computing and network equipment" would be necessary by ISPs "if user response times are not to be severely degraded".


Privacy - tracking, logging, reporting Internet use

Implementation of ISP based filtering would provide ISPs with a simple means of tracking, logging and issuing automated reports about their customers' use of the Internet. As noted by the CSIRO in their report Effectiveness of Internet Filtering Software Products:

"Auditing and tracking access is an important feature of many Internet filtering products. Rather than just blocking access to 'unacceptable' content or sites, they also securely record the attempted access for later review by parents or managers." (CSIRO 2001)

For example, in the case of Internet Sheriff, according to Telnet Media:

"Over thirty separate detailed and summary report format templates are defined. ...
Reports are available to analyse blocked access attempts and allowed accesses by users and user groups...
The administrator is easily able to drill-down to identify patterns of usage department by department or even to examine the surfing activity of individual users..."

This privacy-invasive 'feature' of filtering products would be readily available to ISPs. While most ISPs are unlikely to be interested in their customers' activities, the potential for misuse or inadvertent disclosure of details of customers' Internet usage presents a significant threat to Internet users' privacy.

According to The Australia Institute:

"Under the proposed 'opt-out' system, the civil liberties of computer users would be protected as any adult user could gain access to X-rated material on the Internet by way of a simple communication with their ISP." (Flood & Hamilton, 2003b)

This aspect of the proposal would not protect computer users' civil liberties; it would infringe them. Requiring users to tell an ISP what type of content they wish to access breaches the fundamental human right to privacy. Furthermore, it is highly doubtful that most people who wish to access X-rated material in the privacy of their own home would be willing to tell a stranger, i.e. an ISP staff member, that they wish to access controversial content that some people find offensive.


Cost of classification of web pages by the OFLC

The Australia Institute report states:

"Website owners may apply to have their sites classified and thereby exempted from filtering." (Flood & Hamilton, 2003b)

This aspect of the proposal fails to consider the exorbitant cost of classification of web pages. The OFLC charges $510 for classification of a single web page (Alston, 2002). This fee is approximately four times the amount charged for classification of an entire magazine ($130), and is not significantly less than the classification fee for an entire 2-hour video ($650).

It is questionable whether the adult industry in Australia would find it commercially viable to pay $510 for classification of each web page, particularly given the relatively lower fees for classification of the larger quantity of content in magazines and videos.

It seems most unlikely that overseas content providers would be willing to pay $510 per page in order to reach the relatively small Australian market.
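
The arithmetic, using the OFLC fees quoted above and an assumed (hypothetical) site size, makes the point starkly:

    # Classification-fee arithmetic using the OFLC fees quoted above.
    # The site size is an assumption for illustration.
    FEE_PER_WEB_PAGE = 510   # AUD, one web page
    FEE_PER_MAGAZINE = 130   # AUD, one entire magazine
    FEE_PER_VIDEO = 650      # AUD, one entire ~2 hour video

    SITE_PAGES = 200         # assumed size of a modest adult web site
    site_cost = SITE_PAGES * FEE_PER_WEB_PAGE
    print(f"Classifying a {SITE_PAGES}-page site: AUD ${site_cost:,}")
    # Prints AUD $102,000 -- against $130 for a whole magazine or $650
    # for a two-hour video.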

Hence although The Australia Institute proposes that adults be permitted to access online content classified X by the OFLC, it is doubtful there would be any such content.

The high cost of classification of web pages has been drawn to the attention of the Minister for Communications and the OFLC on a number of occasions during the past three years, for example, in Questions on Notice to the Minister for Communications in the Senate on 5 April 2001. However, to date, the government has declined to reduce the fees.


ISPs' source of profit

In a media release dated 4 March 2003, Senator Brian Harradine claimed:

"Telstra and the ISPs benefit financially from the current system where they continue to earn substantial money from the bandwidth charges for the pornographic images downloaded." (Harradine, 2003).

ZDNet reported on 4 March that:

"Hamilton [Executive Director of The Australia Institute] also believes ISPs are resisting their call for the implementation of industry-wide filtering because it would eat into their profits.
'The Internet industry has got to be brought back to the real world; they're just like any other industry but they've got away with murder because they've convinced politicians that they've saved the world,' he said." (Gray, 2003)

Similarly, on the same day, the Sydney Morning Herald reported:

"Fraser Larcombe, product manager for the Brisbane-based developer of Internet Sheriff software, says ISPs do not have [an incentive to block access to content], and that it was against their interest.
'ISPs don't want people using very effective filters,' he said. 'They want people to be downloading as much information as possible - that's how they make their money.'" (Lowe, 2003)

Filter makers who make such claims have a commercial incentive in promulgating them, since they would profit from sales of their products to ISPs if the government forced ISPs to purchase them.

Moreover, people who make claims like those above appear not to have considered how ISPs pay for, and charge for, Internet access.

ISPs pay other service providers for the amount of information/data downloaded by their customers. These costs (bandwidth charges) are passed on to the ISP's own customers. Most ISPs charge their home-user customers a fixed price per month for either an "unlimited" amount of data downloaded, or a quantity up to a pre-defined limit. This means that the ISP receives the same amount of money from a particular customer irrespective of whether that customer downloads a large amount of data, or none. However, the bandwidth charges the ISP pays to its upstream provider depend on actual bandwidth use; that is, the ISP pays less in relation to customers who download very little data, although it receives the same income from those customers as from those who download a large amount of data.

Hence, ISPs who charge fixed prices make less profit from customers who download large amounts of data than from those who do not. Some, probably many, ISPs in fact make a loss on customers who download large amounts of data. This is because they set their fixed prices at an average level, knowing from experience that only a proportion of their customers will download a large amount, instead of charging all customers the higher price that would be applicable if every customer downloaded the maximum allowed for the fixed price.
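
A simple flat-rate pricing model (all figures assumed for illustration) shows why downloads erode rather than create ISP profit:

    # Sketch of flat-rate ISP economics: revenue per customer is fixed
    # while upstream bandwidth cost grows with downloads. All figures
    # are assumptions for illustration.
    MONTHLY_FEE = 30.0   # AUD, assumed flat monthly access charge
    COST_PER_GB = 3.0    # AUD, assumed upstream bandwidth cost per GB

    for gb in (1, 5, 10, 15):
        margin = MONTHLY_FEE - gb * COST_PER_GB
        print(f"{gb:2d} GB downloaded -> margin AUD ${margin:6.2f}")
    # 1 GB -> $27.00, 5 GB -> $15.00, 10 GB -> $0.00, 15 GB -> -$15.00:
    # the heaviest downloaders are the least profitable customers.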

Blocking access to pornographic images would not reduce Internet access fees because there are many types of Internet content that involve even higher bandwidth use than images, such as music files, audio and video streaming, etc.


Conclusion

The issue of contentious material on the Internet is far from novel. Governments, Internet engineers, users, ISPs, civil libertarians and religious zealots have been locked in vigorous and often acrimonious debate on the question around the world for almost a decade.

In Australia the debate has been raging since at least 1994 when the Commonwealth sought submissions to the Computer Bulletin Board Services Task Force. However, the only countries that have chosen the path of mandatory filtering and blocking are totalitarian regimes such as China.

Since at least 1995, the debate has been fueled by sensational media reports accepting at face value the findings of studies containing serious conceptual, logical and methodological flaws and errors. A widely known example is the July 1995 Time Magazine cover story 'On a Screen Near You: Cyberporn' which reported on the subsequently discredited Martin Rimm study. Two weeks later, Time Magazine published a retraction of sorts.

As Professors Donna L. Hoffman & Thomas P. Novak of Vanderbilt University remarked in critiques of the Time report and Rimm study:

"The critically important national debate over...rights and restrictions on the Internet and other emerging media requires facts and informed opinion, not hysteria...Misinformation, when propagated, begets even worse misinformation." (Hoffman & Novak, 1995)



References

Age, The 2003, NetNanny filter fails the test, 4 March 2003

Alston, R. 2002, Answer to Question on Notice, Information Technology: Internet Content (Question No. 223), Senate Hansard, 17 June 2002
Alternate URL: http://www.efa.org.au/Publish/qons-bsa020617.html

Australian Broadcasting Authority 2001, Internet @ home

Australian Broadcasting Authority 2002, Submission to Review of Operation of Schedule 5 to the Broadcasting Services Act 1992, November 2002

Australian, The 2003, China censorship slows net, 7 March 2003

BBC 2003, E-mail vetting blocks MPs' sex debate, 4 Feb 2003

CSIRO (Commonwealth Scientific and Industrial Research Organisation) 2001, Effectiveness of Internet Filtering Software Products, Report prepared for the Australian Broadcasting Authority, September 2001 (released by ABA, March 2002) [PDF 1591 Kb]

CSIRO (Commonwealth Scientific and Industrial Research Organisation) 1999a, Access Prevention Techniques for Internet Content Filtering, Report prepared for the National Office for the Information Economy, December 1999

CSIRO (Commonwealth Scientific and Industrial Research Organisation) 1999b, Technical Aspects of Blocking Internet Content, Report prepared for the National Office for the Information Economy, April 1999

CSIRO (Commonwealth Scientific and Industrial Research Organisation) 1999c, Content Blocking On The Internet, Media Release, 14 April 1999

CSIRO (Commonwealth Scientific and Industrial Research Organisation) 1998, Blocking Content on the Internet: a Technical Perspective, Report prepared for the National Office for the Information Economy, June 1998
Alternate URL: http://www.cmis.csiro.au/projects+sectors/blocking.pdf

Electronic Frontiers Australia Inc. 1999, Clairview Internet Sheriff: An Independent Review, May 1999

Flood, M. 2003, How we can protect our children from internet porn, The Age, 5 March 2003

Flood, M. & Hamilton, C. 2003a, Youth and Pornography in Australia: Evidence on the extent of exposure and likely effects, Summary of Australia Institute Discussion Paper No. 52, February (issued 3 Mar 2003).

Flood, M. & Hamilton, C. 2003b, Regulating Youth Access to Pornography, Australia Institute Discussion Paper No. 53, February.
Note: References in this document to the above report refer to the contents of the full version of The Australia Institute's report. The Institute has not made its full report available online, only the following short summary:

Flood, M. & Hamilton, C. 2003c, Regulating Youth Access to Pornography, Summary of Australia Institute Discussion Paper No. 53, February (issued 4 Mar 2003).

Free Expression Policy Project 2001, Internet Filters: A Public Policy Report, Fall [USA] 2001

Gray, P. 2003, AU ISPs hose down call for regulation, ZDNet Australia, 4 March 2003

Hamilton, C. 2002, Admit it: the left has lost its way, The Age, 14 May 2002 (et al)

Hamilton, C. 2003a, Kids' exposure to porn must be curbed, The Canberra Times, 7 March 2003

Hamilton, C. 2003b, Exposure to Internet Pornography, Interview, Life Matters, ABC Radio National, 6 March 2003

Harradine, B. 2003, Get serious about protecting children from Internet porn predators: Harradine, Media Release, 4 March 2003

Hoffman, D.L. & Novak, T.P. 1995, The Cyberporn Debate, eLab:Research for a Digital World, Vanderbilt University, July 1995

Horin, A. 2003, No sex, please: a blueprint for safer surfing, Sydney Morning Herald, 4 March 2003

Kaiser Family Foundation 2002, See No Evil: How Internet Filters Affect the Search for Online Health Information, 10 December 2002

Lowe, S. 2003, Net Nanny 'a part-time supervisor', Sydney Morning Herald, 4 March 2003

Mitchell, K.J., Finkelhor, D. & Wolak, J. 2003, The Exposure of Youth to Unwanted Sexual Material on the Internet: A National Survey of Risk, Impact, and Prevention, Youth & Society, Vol. 34 No. 3, March 2003

Net Family News 2003, Online kids' exposure to porn: 2 studies in 2 countries, 7 March 2003

Osman, R. 2003, ISP censor vocal from sidelines, Australian Financial Review, 11 March 2003

Senate Select Committee on Information Technologies 1999, Hansard Transcript of Public Hearing, Inquiry into the Broadcasting Services Amendment (Online Services) Bill 1999, 3 May 1999