
Safeguards against Unpleasant Cyberspace Behaviour:
Targets not Victims, and Self-Help before Criminalisation

Preliminary Draft of 2 February 2016

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2015-16

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/II/ECC.html


Abstract

With each new form of electronic interaction, dysfunctional cyberspace behaviour emerges milliseconds after the service itself. Behaviour whose impact is sufficiently harmful needs to be subject to the criminal law. Although longstanding laws sometimes do not cope with technological change, and law reform may involve delays, community disquiet and activism usually ensure that adaptation occurs.

The highly-charged atmosphere that surrounds some kinds of behaviour, on the other hand, brings with it a significant risk of over-reaction. Activities are given the 'cyber-criminality' tag that are at worst little more than unpleasant, or even barely harmful at all. Phenomena that have given rise to moral panics and the risk of dysfunctional criminal law include sexual harassment, hate speech, sexting, revenge porn and cyber-bullying.

This chapter argues against the cult of victimhood and the tendency to criminalise cyberspace behaviour, and in favour of greater use of social controls and of self-protective cyber-tools. Individuals under threat need to apply technical safeguards in the form of privacy-enhancing technologies (PETs). These enable individuals to obfuscate and falsify their data, their messages, their identities, their locations, and their social networks, in order to deal with such phenomena as trolling, online insults and cyber-bullying.


Contents

1. Introduction
2. Unpleasantness in Cyberspace
   2.1 Crimes, 'Computer Crimes' and 'Cybercrimes'
   2.2 Inappropriate Cyber-Criminalisation
   2.3 Case Studies in Inappropriate Cyber-Criminalisation
3. Victimhood and Its Alternatives
4. Technical Measures
   4.1 Principles
   4.2 Privacy-Enhancing Technologies (PETs)
   4.3 Applications
   4.4 Application by Targets of Unpleasant Cyberspace Behaviour
   4.5 Prospects
5. Conclusions
Appendix 1: Categories of Unpleasant Cyberspace Behaviour
References

1. Introduction

This chapter questions the appropriateness of the conceptualisation of the problem as 'Cybercrime and Its Victims'. The first issue is the presumption that a wide range of categories of unpleasant acts should at least have the tag 'cybercrime' informally applied to them, and perhaps should even be subject to criminal sanctions. In many cases, this is a highly ineffective approach to the problem. It is also seriously counterproductive; and it has negative impacts on other values. Alternative approaches are available, and to be preferred.

The second issue is the submissiveness of the notion of 'victim'. The circumstances under discussion involve aggressive behaviour directed variously at individual and group targets. There has been a strong tendency in recent decades to characterise the 'target' as a 'victim'. This sometimes goes so far as to glorify victimhood, much as people killed during violent incidents are elevated by the tag 'martyr'. Imposing victimhood has the effect of convincing the targeted individuals and groups that they are so weak as to be defenceless, and that therefore resistance is futile and protection can only be provided through outside intervention. The contention in this chapter is that the focus on victimhood is tantamount to contributory negligence, and that instead the emphasis should be on various forms of individual and group self-help.

The chapter begins by examining the categories of unpleasant behaviour, and the appropriate role of criminal law. Consideration is then given to victimhood and to alternative, much more positive approaches to dealing with the problems. The author can contribute little to the important behavioural aspects of self-sufficiency and psychological and social self-empowerment. The final section instead outlines ways in which information technology enables affected individuals and groups to implement safeguards for themselves.


2. Unpleasantness in Cyberspace

Contributions to understanding behaviour in cyberspace have been made from the earliest days of public access to the Internet. The first major work was Rheingold (1993). The author of this chapter published a catalogue of 'dysfunctional human behaviour' (Clarke 1995), and means for understanding cyberculture and achieving reasonable levels of control over cyber-behaviour (Clarke 1997a, 1997b). Two decades later, the list of dysfunctional behaviours has lengthened considerably, but much less progress has been made in designing social control mechanisms into Internet services. In Appendix 1, a working set is proposed of what are referred to collectively in this chapter as 'unpleasant cyberspace behaviour'. The focus is not on commercial abuses such as spam, scams, spear-phishing and 'drive-by downloads', but on anti-social behaviour patterns.

Many cyberspace behaviours deviate from the social norms of some group, or are 'deviant' from the moral perspective of some group, or have unpleasant impacts on some person or people. Where a person exhibits such behaviours aggressively and/or persistently, they may be seen by many people to be psychopathic or sociopathic in nature. Such behaviours are subject to countervailing behaviours and social controls. Relatively soft forms of control include unpleasant countermeasures, vigilantism and police stepping in to 'maintain the peace'.

Behind the softer forms of social control is the exercise of legal authority by individuals empowered to do so by the State. A crime is an act that has been predetermined by a parliament to be sufficiently harmful to individuals, communities, the economy or the State that it is subject to investigation and prosecution by law enforcement agencies and to significant sanctions by the judiciary, commonly including deprivation of liberty. Crimes generally involve both an act or omission (actus reus) and an intention that the action have the kinds of negative results that justify the action being criminalised (mens rea). However, there are qualifications and exceptions to those norms.

2.1 Crimes, 'Computer Crimes' and 'Cybercrimes'

In the networked / virtual world (or the 'shared hallucination' that is cyberspace), as in the real world (or 'meatspace'), there is, among the wide array of forms of unpleasantness, a sub-set that either is already criminalised or is readily argued to have characteristics that bring it within the conventional definition of criminal behaviour.

Examples of cyberspace activities that are subject to provisions of the criminal law are preparations to commit serious crime; incitement to violence; and the distribution of categories of pornography that are illegal in the relevant jurisdiction(s). During the period c. 1980-2000, as networked computers became widespread, a small set of 'computer crimes' were added to the statute books, relating to unauthorised access to computers and to data, and impairment of them. The (Budapest) Convention on Cybercrime (CoE 2001) defined offences in the areas of data access, message interception, data interference, system interference, misuse of devices, computer-related forgery, computer-related fraud and child pornography. The Convention has been a driver of national laws in this area. In Australia, for example, Part 10.7 of the Criminal Code (Cth) at ss. 476-478.4 specifies 'computer offences', and Part 10.6 ss. 473.1-475.2 specifies a wide range of offences relating to telecommunications services. These together contain about 15,000 words - which every member of the public is deemed to know of, and to understand.

During the period since c. 1995, as Internet accessibility and services developed and became mainstream, various laws have been extended and refined to cope with the changes in social patterns. The notion of 'cyber-crime' - a term that is subject to widely varying interpretations - is usefully descriptive of this later generation of acts subject to criminal sanctions.

Grooming of children for sex-related purposes was a criminal offence for decades before computers were invented. During the Internet era, many jurisdictions have amended or re-created laws to address on-line grooming. Other examples include harassment (in some jurisdictions called intimidation) and stalking. The criminal law has been amended to ensure that actions performed partly or wholly in virtual rather than physical spaces fall within the scope of such existing or amended offences. With most such crimes, a relatively high bar is set, such that, for example, heated arguments do not constitute harassment, and minor or few nuisance messages are not treated as cyber-stalking.

The later section on technological safeguards is relevant to individuals and groups who are the targets of the categories of criminal behaviour discussed here. The chapter's primary focus, however, is on the many other forms of unpleasant cyberspace behaviour, many of which are not crimes, and which this author contends should not be crimes.

2.2 Inappropriate Cyber-Criminalisation

Decisions about which activities are 'sufficiently harmful' that they justify criminalisation are the responsibility of parliaments, and jurisdictions evidence both considerable variation and ongoing change. An all-too-common pattern commences with individual cases attracting media attention and being amplified by a shrill chorus from outraged citizens. Governments rush knee-jerk legislation into the House. Politicians on all sides are cowed by public sentiment and the Bill is passed without meaningful consideration of its provisions. Blunders become apparent, sometimes quickly, but sometimes only after one or more individuals have been harshly dealt with for behaviour that was not within the Bill's intended scope. By then, however, the surge of community interest has subsided, and the blunders remain unamended. The laws are perceived by law enforcement agencies to be unworkable, enforcement is sporadic and piecemeal, the law falls into disrepute, and the underlying problem remains unaddressed.

Criminalisation of human activities requires justification. That in turn depends on evaluation criteria, information and calm assessment. This section suggests some criteria, and then considers some current examples of excessive enthusiasm for criminalisation which have both inadequate positive impacts and serious negative consequences.

A first requirement is that a proposal for new or amended criminal law be demonstrably effective in achieving the intended objectives. Frequently the behaviour that is to be subject to sanctions is described in unclear terms, with little guidance as to how behaviour is to be judged to be within-scope and outside-scope of the offence provisions. Terms such as harassment and stalking, for example, are sometimes applied where there is insufficient impact to satisfy the 'sufficiently harmful' test of a criminal offence.

Another requirement is that a proposal must avoid being counterproductive. A common example of over-reach is a law that is framed in such a way that categories of people perceive themselves to be discriminated against, and hence actively challenge the law's moral legitimacy.

A third requirement is that the establishment of a criminal offence not have undue negative impacts on other values. One of the side-effects of undue cyber-criminalisation is that law enforcement agencies' investigative powers become widely usable rather than being restricted to serious crime. There have been many attempts by 'national security' and law enforcement agencies to gain very substantial powers to interfere with information infrastructure, with electronic traffic, and with stored data. Those powers represent serious threats to society and the polity generally, but also more specifically to, for example, persons at physical risk (GFW 2011), and persons at intellectual risk, including artists of all kinds, but also 'deviant' thinkers in business, technology and social policy, whistleblowers, and dissidents.

2.3 Case Studies in Inappropriate Cyber-Criminalisation

A variety of instances exist in which there are already, or may soon be, excessive provisions in relation to unpleasant cyber-behaviour.

(1) Sexual Harassment

Finding appropriate boundaries in relation to propositions and sexually suggestive behaviour is challenging. In Australia, sexual harassment has been an offence since the mid-1980s, and the expression of the law is such that it automatically applies to cyberspace behaviour. The threshold at which behaviour is defined to be sufficient to constitute an offence is extraordinarily low, however, comprising simply "an unwelcome sexual advance" or "other unwelcome conduct of a sexual nature", including "making a statement of a sexual nature to a person" (Sex Discrimination Act s.28). There are some saving clauses, in particular "in circumstances in which a reasonable person, having regard to all the circumstances, would have anticipated the possibility that the person harassed would be offended, humiliated or intimidated". But the soft word 'possibility' greatly circumscribes the protection for ill-judged attempts at flirting and for overly hopeful or vocally clumsy Don Juans.

The offence is civil only, and the sanctions are limited to monetary compensation. Nonetheless, parliaments that create such low thresholds for offences create many small problems in the process of addressing the smaller number of large ones. The Commissioner charged with handling complaints has been forced to develop a jurisprudence that filters out the large volume of trivial matters that fall well short of the 'sufficiently harmful' criterion. Alternative approaches are available, such as the constructive responses to sexual harassment documented in Waerner (2000).

(2) 'Hate Speech'

As in the first case, the acts are not formally criminalised, but are civil offences prosecuted by a government agency. Under the Australian Racial Discrimination Act (Cth) s.18C, it is unlawful, "otherwise than in private", to "offend, insult, humiliate or intimidate ... because of ... race, colour or national or ethnic origin". This is, however, subject to broad saving provisions in s.18D, which spare artistic works, "any genuine purpose in the public interest", fair and accurate reporting of any event or matter of public interest, and 'fair comment expressing a genuine belief'.

Some applications of the section have caused considerable controversy. There is little doubt that 'vilification' is appropriate to criminalise, provided that it means something like 'making statements that have a reasonable likelihood of inciting hatred and hence motivating violence'; but incitement to violence is already a criminal act called 'urging violence', under ss. 80.2A, B of the Criminal Code (Cth). Behaviour that "offends", "insults" or "humiliates" is unpleasant behaviour of a kind that needs to be subject to social controls, not to criminal, nor even civil, offences. The focus of law reform should be on private law, which in common law jurisdictions takes the form of a tort.

(3) Sexting

Some years ago, grown-ups discovered that young people's uses of handhelds extended beyond sexually suggestive text and included sending one another images of themselves in various states of undress. Moral panic ensued. An early review is in Svantesson (2010). As a result, some legislatures quickly passed ill-considered legislation making such behaviour a criminal offence. In addition, many laws relating to child pornography were so broad that they encompassed this form of behaviour.

Prosecutions have followed, under both excessive new sexting laws and intentionally draconian child pornography laws. This has resulted in criminal records for young people who have been bewildered by the strange attitudes taken by adults, somewhat cowed by the legal processes involved, and hence even more likely to reject the social institutions that adults were imposing on them. In some jurisdictions, at least some of the young people found guilty of such behaviour are entered on a Sexual Offenders Register, which associates them with acts regarded during the period as being among the most despicable crimes, imposes reporting obligations on them for long periods, and ensures long-term visibility of their identities, addresses and convictions. It is reprehensible that States are imposing such measures on young people for mainstream behaviour that in itself results in little or no harm to any party.

In Victoria, amendments were necessary in 2014, to ease back the seriously excessive application of child pornography laws. Some laws passed by US State legislatures have stipulated a lower level of offence for young people, in some cases in the form of a misdemeanor rather than a felony (McLaughlin 2010). This misses the point that such behaviour should not have been criminalised at all, and should instead be treated as a social issue, to be managed by means of social controls.

The consequences of the extraordinary over-reaction by irresponsible politicians to experimental behaviour among adolescents have been, and in some countries continue to be, far more harmful to individuals, communities and the State than the behaviour it was intended to punish. The protection of children from sexual predators is appropriately achieved through carefully framed laws that regulate pornography and that include strong criminal sanctions against the publication to adults of depictions of under-age people, supplemented by child abuse, assault and grooming laws. The protection of children from themselves, on the other hand, needs to be addressed through a variety of social not legal controls.

(4) 'Revenge Porn'

A similar risk of heavy-handed legislation exists in the area of 'revenge porn', in which, most commonly, a person disseminates images of an ex-partner. In Victoria, for example, an offence of 'Distribution of an intimate image' was created in 2014, punishable by 2 years' imprisonment. The two features of the offence are the intentional dissemination of "an intimate image" of another person, "contrary to community standards of acceptable conduct". The vague and open-ended nature of the expression is problematic enough. In the UK, the Criminal Justice and Courts Act 2015 ss. 33-35 created an offence of 'disclosing private sexual photographs and films with intent to cause distress', with a maximum sanction of 2 years' imprisonment. The sole exemptions are for criminal investigation, journalism in the public interest, and previous disclosure for reward. The parliament failed to appreciate that a well-designed privacy tort would have been sufficient to address this category of unpleasant cyberspace behaviour.

The mistakes made in Victoria and the UK appear to have inspired the Australian federal Labor Opposition, which has proposed draconian legislation, with some extremely poorly thought through exemptions, for "law enforcement officials, journalists who can prove [??] that distributing the material is of public benefit, people who make pornography specifically for distribution [??], or the creation and dissemination of anatomical imagery intended for medical training and use".

The even larger issue is that very few instances of this kind of unpleasant behaviour are 'sufficiently harmful' to warrant the intervention of the State and the deflection of law enforcement resources away from more serious matters. It is clear that social controls are not enough to deal with such circumstances; but the appropriate approach is through private or civil law. This adds to the long list of reasons why legislatures should have long ago created a privacy right of action. In common law jurisdictions, this would be a tort, whereby a person who has suffered harm of such kinds can seek redress from the miscreant (Collingwood & Broadbent 2015). Manitoba's Intimate Image Protection Act ss. 11-16 came into force in January 2016, creating a very specific privacy tort concerning the non-consensual distribution of intimate images (Clarke PA 2015).

(5) 'Cyber-Bullying'

Despite the complexity and variety of social behaviours and misbehaviours, many discussions are hemmed in by simple slogans. The term 'bullying' is used in varying contexts but particularly in institutionalised settings such as schools and workplaces where the target has difficulties avoiding the attacker. Generally, bullying involves aggression of a verbal or emotional, and perhaps of a physical or sexual, nature. To qualify as bullying, the behaviour has to be repetitive rather than isolated, and frequent rather than occasional. The bully may have greater physical, social or institutional power than the target, and may use that as a weapon. The target may suffer harm of a psychological, and perhaps physical nature, and discussions are commonly framed in such a way that targets of bullying are 'victims' and are expected to suffer.

The meatspace phenomenon naturally migrated into the electronic realm, taking on varying forms. One advocacy group defines 'cyber-bullying' as "when a child, preteen or teen is tormented, threatened, harassed, humiliated, embarrassed or otherwise targeted by another child, preteen or teen using the Internet, interactive and digital technologies or mobile phones". The definition suffers from an excess of emotion, and inverts the phenomenon by building in victimhood. Stripping those away, cyber-bullying can be usefully defined as 'the use of interactive technologies to target a person in such a manner as to cause them psychological stress'.

All US States passed legislation in relation to bullying during the first 15 years of the new century, and many of them expressly included mention of its cyberspace forms. These generally require schools to establish policies to address the problem, with an emphasis on social controls. Some statutes have even been accompanied by research, and the development and publication of information and resources. A positive aspect is that few of these laws create offences, and instead the serious cases are dealt with through existing laws, in particular assault, harassment and stalking. In Australia, use has occasionally been made of the 'cybercrimes' of 'using a carriage service to make a threat' (s.474.15 of the Criminal Code (Cth)) and 'using a carriage service to menace, harass or cause offence' (s.474.17). In some cases, e.g. where there is a racial or sexual orientation element, anti-discrimination laws may be relevant. Civil law remedies may also be available in some circumstances (Ong 2015).

In Australia, a resource-site, the Human Rights Commission Fact Sheet, and the recently-created Office of the Children's eSafety Commissioner all focus on social controls and self-help approaches. This is therefore an area in which the inappropriateness of cyber-criminalisation has been widely recognised.


3. Victimhood and Its Alternatives

There was an explosion in the prevalence of victimhood as a way of life during the later decades of the 20th century (Fassin & Rechtman 2009). Alternative depictions of the phenomenon have used such terms as 'the rights industry' and 'the age of entitlement'. The common feature of these notions that is relevant to the present discussion is the aura of powerlessness that pervades the use of the word 'victim', and the concomitant expectation that the individual will surrender, and will be dependent on external agents to address the problem on their behalf, and achieve redress or restitution.

In cyberspace, there is a great deal of scope to fashion constructive responses to unpleasant behaviour rather than submit to the presumed overweening power of the attacker. It is important that advantage be taken of these opportunities, for two reasons. One is that the responses can be made promptly by the targeted individual or group, rather than having to wait for and depend upon the acquisition and mobilisation of resources from elsewhere. The other benefit is that the individual or group is active rather than passive, and focussed on solutions rather than internalising the problem and blaming themselves.

The proposal that targets of unpleasant cyberspace behaviour should invest energy in helping themselves is, of course, not intended to exclude external activities aimed at the problem as a whole. Society, through its institutions, needs to provide both rational support (e.g. information, checklists and contact-points) and emotional support (e.g. ready access to verbal advice and counselling). Legislatures need to ensure that the more serious forms of attack are subject to rights of action under private law, and that attacks that satisfy the 'sufficiently harmful' criterion are within-scope of the relevant criminal offences. Governments need to ensure adequate resourcing of law enforcement agencies, and recognition by them of the need for enforcement actions.

At the level of groups or electronic communities, a wide range of mechanisms exist whereby at least some degree of control can be achieved over behaviours. Terms used for such approaches include 'self-regulation', 'self-help', 'social control' and 'social enforcement'. An accessible legal analysis is in Gibbons (1997). These approaches use a combination of social processes and technical features. For example, gatekeepers may regulate access by individuals to the forum, and may moderate postings based on content analysis.

Critically, however, individuals need to take action to deal with attacks that target them. A frame is needed within which individuals can avoid the victimhood mentality, and establish organic self-help as the primary mode. One possibility is to apply the risk management notions used in the worlds of business and technology. This distinguishes non-reactive strategies (which are not relevant here) from reactive and proactive strategies.

Reactive measures are implemented after an event. In the context of behaviour such as intentional misinformation, offensive speech or 'cyber-bullying', an appropriate reactive measure might be calm responses to the attack, in the same venue in which it was launched and/or elsewhere. A variant, admittedly with risks associated with it, is to request that participants in the venue avoid making responses, in order to 'deny oxygen' to the attacker. In the case of revenge porn, another possible reactive measure is a request for takedown.

Proactive measures, on the other hand, are taken in advance, in anticipation of unpleasant behaviour, or to counter likely subsequent steps in an ongoing attack. Some proactive strategies may be more appropriately taken at the level of a community rather than by an individual who anticipates being the target of an attack. Deterrence measures include terms of use of interactive services, documented community norms, and reminders about them. Prevention measures include interception and moderation of postings (from repeat-offenders, or even from all participants), black-listing of banned participants, and white-listing of pre-approved participants. These of course have chilling effects on human communication generally, and they may not always be tenable.
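
As a concrete illustration of the deterrence and prevention measures just listed, the sketch below (in Python) shows how a community gatekeeper might combine black-listing, white-listing and crude keyword-based moderation. It is a minimal sketch under stated assumptions: the participant names, trigger terms and policy are illustrative, not a prescription.

    # A minimal sketch of community-level gatekeeping. Posts arrive as
    # (author, text) pairs; all names and trigger terms are illustrative.

    BANNED_AUTHORS = {"known_troll_01"}              # black-list of banned participants
    PRE_APPROVED = {"moderator", "longtime_member"}  # white-list of pre-approved participants
    FLAG_TERMS = {"idiot", "loser"}                  # crude content triggers for moderation

    def gatekeep(author: str, text: str) -> str:
        """Classify a posting as 'publish', 'reject' or 'hold-for-moderation'."""
        if author in BANNED_AUTHORS:
            return "reject"
        if author in PRE_APPROVED:
            return "publish"
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & FLAG_TERMS:
            return "hold-for-moderation"
        return "publish"

    print(gatekeep("known_troll_01", "hello"))         # reject
    print(gatekeep("newcomer", "you are an idiot"))    # hold-for-moderation
    print(gatekeep("moderator", "welcome, everyone"))  # publish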

Individuals can apply a prevention strategy (although perhaps only against the later phases of an attack) by constructing filters in order to deflect messages containing specific identifiers or keywords into separate mailboxes (see the sketch following this paragraph). This enables those messages to be checked only when the individual is ready to see them. Filters of that kind can also be instituted as an avoidance measure, if the contents of the mailbox are ignored, or the messages deleted when they are detected. Other avoidance approaches available to an individual are to simply not respond, to absent themselves from the venue for a time, or to leave the venue, or to give the appearance of having left it. The generic concept of 'insurance' can also be applied, e.g. by communicating privately with influential figures within the particular community, in order to prepare the ground for reactive measures in the event that an attack occurs. Measures of course need to be sculpted to the context, including the particular form of unpleasant behaviour. For example, Waerner (2000) lists eight specific 'empowered action' tactics for dealing with sexual harassment.
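
To make the filtering measure concrete, the following minimal sketch (in Python, with hypothetical sender identifiers and keywords) deflects matching messages into a separate 'quarantine' mailbox, to be checked only when the individual is ready. Ignoring or automatically deleting the quarantined messages turns the same mechanism into an avoidance measure.

    # A minimal sketch of an individual-level message filter. Messages are
    # represented as dicts; the identifiers and keywords are hypothetical.

    QUARANTINE_SENDERS = {"attacker@example.com"}
    QUARANTINE_KEYWORDS = {"worthless", "pathetic"}

    def route(message: dict) -> str:
        """Return the mailbox a message belongs in: 'inbox' or 'quarantine'."""
        if message["sender"].lower() in QUARANTINE_SENDERS:
            return "quarantine"
        body = message["body"].lower()
        if any(keyword in body for keyword in QUARANTINE_KEYWORDS):
            return "quarantine"
        return "inbox"

    mailboxes = {"inbox": [], "quarantine": []}
    for msg in [{"sender": "friend@example.org", "body": "See you at lunch"},
                {"sender": "attacker@example.com", "body": "You are pathetic"}]:
        mailboxes[route(msg)].append(msg)   # quarantined mail is seen only when chosen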

It is not being suggested that such measures necessarily achieve prevention, nor that reactive measures will necessarily result in the attacker backing off due to the target's responses or community opprobrium directed at them. What self-help measures do achieve, however, is to communicate robustness and resilience on the part of the target, and to create a climate in which at least a stand-off may be achieved, if not cessation, retraction and apology.

The argument pursued in this section may sound somewhat libertarian in nature; but it differs from libertarianism in two important respects. It does not assume a utopian context and outcomes. It also does not propose self-help as a substitute for formal regulatory arrangements. Rather it is a complement, with behaviours that are 'sufficiently harmful' being subject to the criminal law, and behaviours that do not reach that threshold relying on social controls, bolstered by private law in such forms as defamation and privacy torts.

This author does not have the expertise necessary to pursue the psychological and social aspects of a self-help agenda to any great depth. The following section presents an examination of the other element of a self-help approach - the use of technological tools by individuals and groups as a means of defending themselves against unpleasant cyberspace behaviour.


4. Technical Measures

Information technologies generally, and Internet protocols in particular, were designed to enable communications, and to exhibit a degree of robustness and resilience in the face of physical threats like lightning and bulldozers. However they were not designed to cope with active attacks by intelligent, informed and resourceful human beings, and they harbour a vast array of exploitable vulnerabilities (Clarke & Maurushat 2007). There are few inherent safeguards for any values, and few controls are available over devices, their users, or traffic between them. Almost all safeguards and controls are afterthoughts, retro-fitted to existing technologies, and subject to circumvention and subversion. IT providers have not only failed to provide reasonable levels of security, but have actively designed insecurity into devices in order to assist marketers in particular, but also law enforcement agencies (Clarke 2015).

There is, however, a range of technical means that, however imperfect, can assist individuals and groups that are the target of unpleasant cyberspace behaviour to achieve some degree of protection. This section commences by outlining relevant principles. It then provides an overview of the various categories of tools, followed by examples of uses of these tools by attackers and other miscreants. It concludes with a discussion of ways to apply the tools to the needs of targets of particular categories of unpleasant cyberspace behaviour.

4.1 Principles

There are circumstances in which the bad and the good alike have a need to obfuscate and to falsify. The term 'obfuscation' encompasses a variety of means for hiding, obscuring, making vague, or aggregating with others of a similar kind into a composite. Similarly, 'falsification' covers various forms of active misrepresentation in order to avoid, deter or even prevent exposure to an attacker, or to mitigate the harm arising from an attack. Sometimes it is data that needs to be hidden or obscured. In other circumstances, it may be the person's identity, or their location, whose exposure would result in harm. Table 1 identifies the various entities that need to be protected.

Table 1: Categories of Obfuscation and Falsification

After Clarke (2016) Appendix 6
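
By way of illustration of these categories, the minimal sketch below (in Python) generalises a location, aggregates an exact age into a band, and derives a pseudonym. All of the values, and the salt, are illustrative assumptions rather than recommendations.

    # A minimal sketch of data obfuscation and identity obfuscation.

    import hashlib

    def generalise_location(lat: float, lon: float, places: int = 1) -> tuple:
        """Coarsen GPS coordinates; one decimal place is roughly 11 km."""
        return (round(lat, places), round(lon, places))

    def age_band(age: int, width: int = 10) -> str:
        """Replace an exact age with a band, e.g. 23 -> '20-29'."""
        low = (age // width) * width
        return f"{low}-{low + width - 1}"

    def pseudonym(real_name: str, salt: str = "per-forum-secret") -> str:
        """Derive a stable per-forum handle that is hard to link to the name."""
        digest = hashlib.sha256((salt + real_name).encode()).hexdigest()
        return "user-" + digest[:8]

    print(generalise_location(-35.30823, 149.12435))  # (-35.3, 149.1)
    print(age_band(23))                               # 20-29
    print(pseudonym("Jane Citizen"))                  # e.g. user-3f2a9c1d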

These abstract Principles have application in a wide variety of circumstances, and their application may depend on the use of particular features of software and hardware tools. The following sub-sections provide some examples of relevance to the targets of unpleasant cyberspace behaviour.

4.2 Privacy-Enhancing Technologies (PETs)

Since the mid-1990s, the conventional term for tools that are designed to protect privacy has been 'privacy-enhancing technologies' (PETs). See IPCR (1995) and Clarke (2001). Table 2 provides examples of the wide array of PETs. Many people are likely to be at least vaguely aware of some of these examples, whereas others may be entirely unfamiliar. Guidance in relation to specific tools is provided by Kissell (2014), EFF (2016), EPIC (2016), BestVPN (2016) and PrismBreak (2016).

Table 2: Examples of PETs

After (Clarke 2014a)

Device-Related Tools
• Operating Systems stored on portable media, such as a USB Stick
• Anti-Malware Tools
• Storage Device Encryption
• Storage-Device Erasure
• Password Vaults to securely store multiple passwords
• Digital Persona management tools

Web-Related Tools
• Browser Security and Privacy Settings
• Browser Add-Ons, to block scripts, ads, cookies, etc., such as Ghostery
• Internet History Cleaners, to delete cookies, caches, etc.
• Search Engines that support encrypted channels and anonymity, such as DuckDuckGo and ixquick
• Privacy-Friendly Social Networking, such as Diaspora

Content-Related Tools
• File-Encryption
• File Erasure
• Steganography to obscure content

Email-Related Tools
• Secure Email Service-Providers
• Email Encryption
• Message Filtering
• Sender Blacklisting and Whitelisting
• Management of multiple email accounts
• Anonymous Remailers

Traffic-Related Tools
• Encrypted Channels, e.g. using TLS / https
• Content Filtering
• Sender Blacklisting and Whitelisting
• Authentication of Remote Devices and Users
• Firewalls
• Intrusion Detection Tools
• Internet Anonymizers and Proxy Servers
• Virtual Private Networks (VPNs)
• Traffic Obfuscation Networks, such as Tor
• Peer-to-Peer (P2P) Networks overlaid over the open Internet
• Mesh Networks, avoiding telecommunications backbones

Other Communications-Related Tools
• Secure 'Dead-Drop' Boxes, such as Strongbox and Wikileaks
• Secure Instant Messaging / Chat
• Secure VoIP/Video Messaging
• Temporary Mobile Phones
• Mobile Privacy Tools

Other Commerce-Related Tools
• Anonymous Digital Currencies
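
To make one entry in Table 2 concrete, the sketch below illustrates file-encryption using the Python 'cryptography' package (an assumed choice of tool; many equivalents exist). The key must itself be kept safe, for example in one of the password vaults listed above.

    # A minimal sketch of the 'File-Encryption' entry, assuming the
    # 'cryptography' package is installed (pip install cryptography).

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # to be stored securely, e.g. in a password vault
    cipher = Fernet(key)

    plaintext = b"private notes the owner does not want exposed"
    token = cipher.encrypt(plaintext)  # safe to hold on a shared or seized device
    assert cipher.decrypt(token) == plaintext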

The need for PETs arose because so many applications of computing and communications by corporations and governments were, and continue to be, reckless with people's privacy, or actively threatening to it. The term 'privacy-invasive technologies' (the PITs) is suitably descriptive of a great deal of software. A further consideration is that the obfuscation of identity is of particular significance. This leads to the classification of PETs outlined in Table 3.

Table 3: Categories of PETs

After (Clarke 2001)

• Counter-PITs, which provide countermeasures against specific privacy-invasive technologies
• Savage PETs, which deliver untraceable anonymity
• Gentle PETs, which deliver protected pseudonymity, balancing nymity against accountability

The effectiveness of pseudonymity tools is entirely dependent on a combination of organisational, legal and technical features. Unfortunately, parliaments have failed to provide a trustworthy framework. Corporations actively oppose trustworthy pseudonymity including by contriving privacy-invasive protocols and service features, by claiming the right to 'provision' identities, and by imposing 'real name' policies. Meanwhile, the widespread cavalier and sometimes simply illegal behaviour of national security agencies, and the cowardice shown by legislatures and by many courts when asked to rein in agencies' misbehaviour, has greatly increased the level of public cynicism about government institutions. As a result, Gentle PETs lack credibility, Savage PETs remain the mainstream, and accountability is undermined.

4.3 Applications

Discussion about the use of the kinds of tools listed in Table 2 and categorised in Table 3 often centres on their use by miscreants, ranging from terrorists and drug cartels, through common criminals, to public nuisances. And, of course, like anything that's useful (the roads, public transport, electricity, air), it's entirely true that these tools are valued by crooks. That applies particularly to 'savage PETs' - hence the name.

During recent years, there have been media flurries about such notions as 'the dark Net' and 'the dark Web'. These terms originally had sensible technical meanings. The 'dark Net' can refer to parts of the Internet that were intended only for a restricted group of users. Many Private Networks exist that carry traffic only between members of a club, such as the networks that support Automated Teller Machines and EFTPOS terminals. It is also possible to create 'Virtual' Private Networks (VPNs) that use public Internet connections, but in such a manner that only club-members have access to the traffic. The 'dark Web', sometimes 'deep Web', originally referred to data that was stored in databases and hence was not discoverable by search-engines. In both cases, the darkness existed merely because the technologies were not designed to enable access, so, metaphorically speaking, the sun didn't shine there. Recent media usage, however, has associated 'darkness' with 'evil'. This was exacerbated by a populist book, Bartlett (2014), which dramatised the topic and misinformed its readers in a self-proclaimed "revelatory examination of the internet [sic] today, and of its most innovative and dangerous subcultures: trolls and pornographers, drug dealers and hackers, political extremists and computer scientists [sic], Bitcoin programmers and self-harmers, libertarians and vigilantes".

There are of course 'dark as in evil' uses, of the Internet, of the Web, and more particularly of many others among the c. 100 kinds of services that run over the Internet; just as there are 'dark as in evil' uses of cars, pens and telephones. Contrary to misleading media reports, much of the traffic in illegal forms of pornography does not use the http protocol that underlies the Web, but more commonly e-mail, instant messaging, bulletin boards, chat rooms, news groups and P2P networks (Wortley & Smallbone 2006). The Silk Road market, which was used primarily for illegal drugs, relied on several technologies (the Tor traffic obfuscation network and Bitcoin, but also the Web) each of which has a great many other, more constructive uses. The several large-scale, international operations to break up child porn rings, and the closure of Silk Road and the identification and prosecution of its principal, attest to the fact that there are many circumstances in which well-informed, well-resourced and well-coordinated investigations can break the anonymity usually provided by 'savage PETs'.

People exhibiting the lesser forms of unpleasant cyberspace behaviour can also use these tools. In order to provide some concrete examples of both negative and positive uses of technological measures, a cluster of behaviours can be considered which have a considerable amount in common. Various forms of online 'attack speech' involve at least one communication, but commonly a succession of them, by an attacker to a target and/or to a group, which may or may not include the target. Examples include intentional misinformation, trolling (messages designed to provoke emotions and cause disruption), flaming, offensive communications, insults, content intended to humiliate the target, and cyber-bullying. An attacker might utilise multiple digital personae, anonymous email proxies and traffic obfuscation, in order to distribute successive messages to a target or a group, even if the target or the group inserts filters with the intention of blocking messages with particular characteristics thought to be associated with the attacks.
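
Sender-based filters are weak against an attacker who rotates digital personae, but content-based filtering can still help. The minimal sketch below (in Python, with illustrative thresholds and sample messages) fingerprints message content so that near-repeats of a known attack are deflected regardless of the sending identity.

    # A minimal sketch of content-based filtering using word n-gram
    # 'shingles' and Jaccard similarity. Thresholds and samples are illustrative.

    def shingles(text: str, n: int = 3) -> set:
        """Word n-grams provide a crude fingerprint of a message's content."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    known_attacks = [shingles("nobody wants you here just leave this forum")]

    def is_repeat_attack(text: str, threshold: float = 0.5) -> bool:
        fingerprint = shingles(text)
        return any(jaccard(fingerprint, k) >= threshold for k in known_attacks)

    # The same abuse from a fresh persona is still caught:
    print(is_repeat_attack("nobody wants you here just leave this forum now"))  # True
    print(is_repeat_attack("agenda for tomorrow's meeting attached"))           # False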

These same tools, however, are also vital for cyberspace behaviours that are widely regarded as being positive. For example, whistleblowers take personal risks in order to disclose information that identifies corruption, hypocrisy and lies perpetrated by governments and corporations and their staff-members. The importance of multiple digital personae and nymity to whistleblowers and dissidents was investigated in Clarke (2008b). In Lee (2013), detailed guidance is provided on how whistleblowers can protect themselves. Clarke (2014a) pursued Lee's analysis further, inferring a risk assessment, and suggesting appropriate PETs to address those risks.

4.4 Application by Targets of Unpleasant Cyberspace Behaviour

Risk assessment techniques are valuable not only to whistleblowers and political dissidents; they can also be applied by, or on behalf of, the targets of particular categories of unpleasant cyberspace behaviour.

The process of risk assessment involves identification of the values that are under assault, and the kinds of harm that can be done to them. This lays the foundation for analysis of the specific threats that the attacker poses, and the vulnerabilities - technical, but also psychological, social and economic - that the threats impinge upon. With that understanding, it becomes feasible to devise or select safeguards that will prevent or at least mitigate the harm. The category of 'attack speech' was outlined above, from the viewpoint of the attacker. See Appendix 1 for activities that are suggested as falling within the term's scope. Table 4 offers an example of how risk assessment can enable an understanding to be developed of how targets of 'attack speech' can utilise PETs to deal with the problem.

Table 4: Indicative Risk Assessment for 'Attack Speech'

Risk Assessment Step and Specification

1. Define the Objectives and Constraints
• Protect an individual target against Harms arising from 'Attack Speech'.

2. Identify the relevant Stakeholders
• The individual.
• A party in loco parentis.

3. Identify the relevant Assets
• The individual's physical self.
• The individual's psychic self.

4. Identify the relevant Values
• The individual's physical safety.
• The individual's emotional state.

5. Identify the relevant Categories of Harm
• Death or injury.
• Anxiety.
• Failure to fulfil guardianship responsibilities.

6. Analyse Threats and Vulnerabilities
• The fact of an attack may cause upset.
• The content of an attack may cause upset.
• Availability to the attacker of the individual's location in physical space may result in physical confrontation, which could escalate into violence.

7. Identify existing Safeguards
• The messaging system may by default not disclose the location of sender and recipient.
• The target may be sufficiently cautious to avoid disclosing their location in content that they post.

8. Identify and Prioritise the Residual Risks
• The target may lack the presence of mind to avoid seeing content likely to be upsetting.
• The messaging system defaults may be over-ridden.
• The target may not understand how their location may be disclosed in content.

Based on the assessment of risk, a risk management plan can be developed and implemented. An indicative set of the measures that might be selected on the basis of such a risk assessment is in Table 5.

Table 5: Indicative List of PETs Relevant to 'Attack Speech'
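
One measure of the kind such a list would contain addresses the residual risk, identified in Table 4, that the target's location may be disclosed in posted content: scrubbing metadata, including GPS coordinates, from images before publication. The sketch below is indicative only and assumes the Python Pillow package; the file names are illustrative.

    # A minimal sketch of image metadata scrubbing, assuming Pillow is
    # installed (pip install Pillow) and an RGB image such as a JPEG.

    from PIL import Image

    def strip_metadata(src: str, dst: str) -> None:
        """Re-save only the pixel data, discarding EXIF (camera, time, GPS location)."""
        with Image.open(src) as img:
            clean = Image.new(img.mode, img.size)
            clean.putdata(list(img.getdata()))
            clean.save(dst)

    strip_metadata("holiday-photo.jpg", "holiday-photo-clean.jpg")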

4.5 Prospects

To what extent are the kinds of features and tools described above readily available, even to targets of 'attack speech', let alone of more aggressive forms of unpleasant behaviour? Many such features and tools do in fact exist. However, there are major shortfalls in a number of areas. Firstly, many providers of mainstream products fail to embody the necessary features, or make them obscure. In some cases, this is because the provider fails to appreciate the features' importance. In other cases, however, the provider perceives the feature to work against the provider's interests, e.g. by reducing the amount of open traffic, and hence reducing the individual's contribution to the common pool of activity and content on which the traffic-volume and revenue of the provider depend.

Another important shortfall is the lack of understanding by individual users, by others whose opinions the individuals respect (e.g. thought leaders, tech-savvy peers), by parties in loco parentis (primarily parents and teachers), by service operators such as schools and ISPs, and even by advocacy groups working on behalf of the categories of individuals who become targets. Even where the understanding exists among some of these participants, it may not be projected through education, training, documentation (including FAQs) and less formal user-group channels, and hence individuals who become targets may not have an easy reference-point from which they can gain advice.

A great deal of the problem lies with the exploitative nature of the business models used by providers of mainstream products - such as web-browsers, email and chat tools - and services - particularly Google and Facebook, but also many other communications, social media and content repository services (Clarke 2008a, 2011). Far greater effort needs to be invested by regulatory agencies in imposing fair conditions on the consumer-manipulative Terms of Service and 'Privacy Policies' imposed by these organisations; and by advocacy and support organisations in communicating the threats and vulnerabilities inherent in the services, and the needs of their users.

There is also an ongoing need for providers of alternative software and services to deliver consumer-friendly features and to promote them effectively to consumers - especially to those consumers most likely to need and to benefit from privacy-enhancing technologies. An analysis of the needs of Consumer-Oriented Social Media is in Clarke (2014b), and a project on a relatively-secure, privacy-friendly web-browser is outlined in Clarke (2016).


5. Conclusions

The physical world features many forms of unpleasant behaviour. The virtual world arising from the burst of technological innovation since c. 1975 and widely available to the public since c. 1995 has created many new potentials for adapted and new forms of unpleasantness. Some physical world behaviours are sufficiently harmful to individuals, communities, the economy or the State that they have long been subject to the criminal law. During the last two decades, adjustments have been made to criminal law to ensure that it extends to similarly harmful behaviour in cyberspace.

A great many instances of unpleasant behaviour fall below the threshold test of 'sufficient harm'. They should not be criminalised, but rather should be the subject of social controls. Individuals who are subject to them also need to take advantage of a range of technical tools whereby they can protect themselves against potentially harmful behaviour. This depends on well-designed tools, education and training on how to use them, and technical competency on the part of users. It also presumes the will to address problems. The chapter has criticised the characterisation of the targets of unpleasant behaviour as 'victims', because of the associated aura of powerlessness and the submissiveness that this engenders. Much more emphasis is needed on self-help by groups and individuals than on criminalisation and other forms of dependence on the State to exercise control over behaviour.

Privacy-enhancing technologies will of course be used for sociopathic and psychopathic purposes, as well as for socially, economically and politically positive reasons. It would be foolish if the use of oxygen and bananas by sinners were to cause saints to shun oxygen and bananas. Similarly, individuals and groups afflicted by unpleasant cyberspace behaviour need to become familiar with PETs, apply them, actively criticise providers of products and services that do not incorporate PETs, and actively encourage consumer-friendly features, software products and services.


Appendix 1: Categories of Unpleasant Cyberspace Behaviour


References

Bartlett J. (2014) 'The Dark Net: Inside the Digital Underworld' William Heinemann, 2014

BestVPN (2016) 'Ultimate Privacy Guide' 4Choice, multiple versions since 2013, at https://www.bestvpn.com/the-ultimate-privacy-guide/

Clarke R. (1995) 'Net-Ethiquette: Mini Case Studies of Dysfunctional Human Behaviour on the Net' Xamax Consultancy Pty Ltd, February 1995, at http://www.rogerclarke.com/II/Netethiquettecases.html

Clarke R. (1997a) 'Encouraging Cyberculture' Invited Address, Proc. CAUSE in Australasia '97, Melbourne, April, 1997, at http://www.rogerclarke.com/II/EncoCyberCulture.html

Clarke R. (1997b) 'Cyberculture: Towards the Analysis That Internet Participants Need' Xamax Consultancy Pty Ltd, September 2011, at http://www.rogerclarke.com/II/CyberCulture.html

Clarke R. (2001) 'Introducing PITs and PETs: Technologies Affecting Privacy' Privacy Law & Policy Reporter 7, 9 (March 2001), PrePrint at http://www.rogerclarke.com/DV/PITsPETs.html

Clarke R. (2008a) 'B2C Distrust Factors in the Prosumer Era' Invited Keynote, Proc. CollECTeR Iberoamerica, Madrid, 25-28 June 2008, pp. 1-12, PrePrint at http://www.rogerclarke.com/EC/Collecter08.html

Clarke R. (2008b) 'Dissidentity: The Political Dimension of Identity and Privacy' Identity in the Information Society 1, 1 (December, 2008) 221-228, PrePrint at http://www.rogerclarke.com/DV/Dissidentity.html

Clarke R. (2011) 'The Cloudy Future of Consumer Computing' Proc. 24th Bled eConference, June 2011, PrePrint at http://www.rogerclarke.com/EC/CCC.html

Clarke R. (2014a) 'Key Factors in the Limited Adoption of End-User PETs' Proc. Politics of Surveillance Workshop, University of Ottawa, May 2014, at http://www.rogerclarke.com/DV/UPETs-1405.html

Clarke R. (2014b) 'The Prospects for Consumer-Oriented Social Media' Proc. Bled eConference, June 2014, at http://www.rogerclarke.com/II/COSM-1402.html

Clarke R. (2015) 'The Prospects of Easier Security for SMEs and Consumers' Computer Law & Security Review 31, 4 (August 2015) 538-552, PrePrint at http://www.rogerclarke.com/EC/SSACS.html

Clarke R. (2016) 'A Relatively-Secure, Privacy-Friendly Browser' Australian National University, 2016, at https://cs.anu.edu.au/research/student-research-projects/relatively-secure-privacy-friendly-browser

Clarke R. & Maurushat A. (2007) 'The Feasibility of Consumer Device Security' J. of Law, Information and Science 18 (2007), PrePrint at http://www.rogerclarke.com/II/ConsDevSecy.html

CoE (2001) 'Convention on Cybercrime' Council of Europe, November 2001, at http://www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185

Collingwood L. & Broadbent G. (2015) 'Offending and being offended online: Vile messages, jokes and the law' Computer Law & Security Review 31, 6 (December 2015) 763-772

EFF (2016) 'Surveillance Self-Defense' Electronic Frontier Foundation, various versions since 2009, at https://ssd.eff.org/

EPIC (2016) 'EPIC Online Guide to Practical Privacy Tools', Electronic Privacy Information Center, Washington DC, various versions since 1997, at http://www.epic.org/privacy/tools.html

Fassin D. & Rechtman R. (2009) 'The Empire of Trauma: An Inquiry Into the Condition of Victimhood' Princeton University Press, 2009

GFW (2011) 'Who is harmed by a "Real Names" policy?' Geek Feminism Wiki, undated, apparently of 2011, at http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F

Gibbons L.J. (1997) 'No Regulation, Government Regulation, or Self-Regulation: Social Enforcement or Social Contracting for Governance in Cyberspace' Cornell Journal of Law and Public Policy 6, 3 (1997), at http://scholarship.law.cornell.edu/cjlpp/vol6/iss3/1

IPCR (1995) 'Privacy-Enhancing Technologies: The Path to Anonymity' Information and Privacy Commissioner (Ontario, Canada) and Registratiekamer (The Netherlands), 2 vols., August 1995, at http://www.ipc.on.ca/web%5Fsite.eng/matters/sum%5Fpap/papers/anon%2De.htm

Kissell J. (2014) 'Take Control of Your Online Privacy' TidBits Publishing, 2014, at https://cache.agilebits.com/dist/Newsletters/2014-12/Take%20Control%20of%20Your%20Online%20Privacy.pdf

Lee M. (2013) 'Encryption Works: How to Protect Your Privacy in the Age of NSA Surveillance' Freedom of the Press Foundation, 2 July 2013, at https://pressfreedomfoundation.org/encryption-works

McLaughlin J.H. (2010) 'Crime and Punishment: Teen Sexting in Context' Penn State Law Rev. 115, 1 (2010) 135-181, at http://www.pennstatelawreview.org/115/1/115%20Penn%20St.%20L.%20Rev.%20135.pdf

Ong R. (2015) 'Cyber-bullying and young people: How Hong Kong keeps the new playground safe' Computer Law & Security Review 31, 5 (October 2015) 668-678

PrismBreak (2016) 'Prism Break' Peng Zhong, multiple versions since 2013, at https://prism-break.org/en/

Rheingold H. (1993) 'The Virtual Community: Homesteading on the Electronic Frontier' Secker and Warburg, 1993, at http://www.well.com/user/hlr/vcbook/

Svantesson F. (2010) ''Sexting' and The Law - How Australia Regulates Electronic Communication of Non-Professional Sexual Content' Bond Law Review 22, 2, Article 3, at http://epublications.bond.edu.au/blr/vol22/iss2/3

Waerner C. (2000) 'Thwarting Sexual Harassment on the Internet' Brian Martin's Bullying and Harassment Resources, Undated but apparently of 2000, at http://www.bmartin.cc/dissent/documents/Waerner1.html

Wortley R. & Smallbone S. (2006) 'Child Pornography on the Internet' Guide No. 41, Center for Problem-Oriented Policing, 2006, at http://www.popcenter.org/problems/child_pornography/print/


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in Computer Science at the Australian National University.


