Digital Rights recognizes that parents have the right to guide their children when accessing the Internet, and that filtering technologies can be one means to do so. At the same time, however, Digital Rights fears that widespread use of filtering technologies carries a serious risk of developing into de facto censorship. Digital Rights finds that the development and promotion of filtering technologies should be considered carefully, on the basis of a good understanding of the nature of the Internet and with a strong view to the possible effects on society as a whole.
At the same time, however, the effective and unfiltered communication offered by the Internet raises concerns. By the nature of the Internet, information intended for a small audience is automatically available to all users on the net - including users who could be offended by the information and, if the user is a child, could even be harmed. The existence of very extreme information on the Internet, such as hard-core pornography, explicit violence and hate speech, has accentuated this problem and given rise to demands for tools that allow users to "protect themselves" and parents to protect their children against offensive and harmful information on the net.
Against this background there is growing support for new filtering technologies that provide means to block access to information on the Internet, based on content rating of websites (by the information provider or a third party), blacklisting of websites, or computerized content analysis. In response to what is perceived as strong concern among Internet users about harmful content, governments, international organizations and industry groups are working actively to develop and promote these technologies.
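To make the mechanisms concrete, the two simplest approaches mentioned above, blacklisting and computerized content analysis, can be sketched in a few lines. This is a hypothetical illustration only, not the behavior of any actual filtering product; the host name and keyword list are invented:

```python
# Illustrative sketch of blacklist-based and keyword-based filtering.
# BLACKLIST and KEYWORDS are invented examples, not real filter data.

BLACKLIST = {"blocked.example.org"}        # hypothetical blocked hosts
KEYWORDS = {"keyword1", "keyword2"}        # hypothetical "harmful" terms

def is_blocked(host: str, page_text: str) -> bool:
    """Return True if a filter of this kind would block the page."""
    if host in BLACKLIST:                  # simple blacklist lookup
        return True
    words = page_text.lower().split()
    # Crude "content analysis": block if any listed word appears.
    return any(w in KEYWORDS for w in words)
```

Even this toy version shows where overblocking comes from: any page that happens to contain a listed word, whatever its purpose, is blocked just as readily as the intended target.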
Filtering technologies, however, are not without problems. Widespread use of filtering technologies can seriously damage the newly gained benefits for freedom of expression and access to information that the Internet has brought. The dissemination of filtering technologies in our society should therefore be considered thoroughly, based on a clear understanding of how filtering technologies work and in light of the way information is accessed on the Internet.
First of all it is important to realize that the information targeted by the various filtering technologies - despite its in some cases very extreme nature - is legal information. Filtering technologies are not designed to block access to illegal information on the Internet, such as child pornography. This lies in the very nature of filtering: in order to block access to information, you need to identify the information - the website - that you wish to block. Clearly the response to a website with illegal information should not be to include it in a list of sites blocked by a commercial software product; the response should be to report the site to a law enforcement agency. And clearly, suppliers of child pornography cannot be expected to rate their content as such before publishing it on the Internet.
Secondly, it is essential to stress that there is no need for tools that allow the (adult) user to protect himself against unwanted information on the Internet. The Internet itself offers this protection. In contrast to television, radio and outdoor advertising, information is not "thrown in the face of the user" on the Internet. The user actively seeks out information on the net by following links, using search engines and catalogs, and typing web addresses into the browser window. Therefore filtering technologies - despite the rhetoric of their supporters - are not designed to allow users to "protect themselves" against harmful information. Nor are they designed to allow users to protect others against information they "accidentally" encounter on the net. They are designed to allow some users to prevent other users from gaining access to information that those users are actively trying to access.
Combined with the fact that the information blocked by filtering systems is legal information, this clearly implies that the use of filtering technologies should be kept to a minimum in a democratic society.
Unfortunately this will not necessarily be the case.
Although the main argument for developing and promoting filtering technologies at present is that parents need a tool to protect their children against harmful information on the Internet, these technologies are already being applied in contexts where not only children are affected by the filtering mechanisms and even in cases where children are not involved at all:
- Libraries apply filtering technologies in response to moral criticism and in order to protect children accessing the Internet from the library.
- Public Internet access points (Internet cafes, hotels, airports...) apply filtering technologies for moral reasons and in order to limit bandwidth use.
- Companies apply filtering systems in order to prevent employees from wasting time surfing for pornography during work hours.
- Schools apply filtering technologies in order to protect small children from harmful content and in response to moral criticism from parents.
- Local communities (apartment blocks, small villages) that share the same Internet connection apply filtering technology for moral reasons and to protect children.
The use of filtering technologies in each of these cases may be difficult to criticize - who would stand up in the local community to argue that access to hard-core pornography should not be blocked? But if filtering technologies gain widespread use and are applied in virtually all contexts where there is public access to the Internet, the effect for society as a whole could easily be that unfiltered access to the Internet becomes very scarce. In effect we will have laid a de facto censorship scheme over the Internet.
This negative effect is compounded by the fact that filtering systems, due to the difficulties of rating content, in many cases block access to content that was never the intended target. A study of the categories of the most widespread rating schemes shows that a great deal of useful information, such as educational information on sexuality and sexually transmitted diseases, art containing nudity, descriptions of religious rituals, and news reports, is very likely to be rated as highly offensive or harmful and thereby blocked.
Furthermore, the rating schemes (in order to be kept simple) will need to be based on the views and moral values of the broad public. Exactly the minorities that have benefited the most from the possibility of making contact and communicating with like-minded people via the Internet are therefore likely to be the first victims of widespread use of filtering technologies on the net.
Digital Rights recognizes the right of parents to guide their children when accessing the Internet and, when necessary, to control what information the children access. And Digital Rights acknowledges that filtering technologies can be one means to do so. At the same time, however, Digital Rights fears that widespread use of filtering technologies carries a serious risk of developing into de facto censorship based on the moral values of the broad public.
Digital Rights therefore finds that the development and promotion of filtering technologies should be considered carefully, on the basis of a good understanding of the nature of the Internet and with a strong view to the possible effects on society as a whole.