
Roger Clarke's 'Dataveillance - Thirty Years On'

From Dataveillance to Ueberveillance

Submission Version of 8 May 2013

Interview in Michael K. & Michael M.G. (eds.) 'Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies', IGI Global, 2014, pp. 18-31

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2013

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/DV/DV13.html


The interview was between Michael G. Michael (MGM) and Roger Clarke (RC).

MGM: When and for what reasons did your interest in privacy begin?

RC: In 1971, I was working in the (then) computer industry, and undertaking a 'social issues' unit towards my degree. A couple of chemical engineering students made wild claims about the harm that computers would do to society. After spending time debunking most of what they said, I was left with a couple of points that they'd made about the impact of computers on privacy that were both realistic and serious. I've been involved throughout the four decades since then, as consultant, as researcher and as advocate.

There are various levels at which the privacy need evidences itself (Clarke 1997, 2006b). Many people tend to focus on the psychological aspects. People need private space, not only behind closed doors and drawn curtains, but also in public. At a social level, people need to be free to behave, and to associate with others, subject to broad social mores, but without the continual threat of being observed. Otherwise we reduce ourselves to the inhumanly constraining impositions suffered by people in countries behind the Iron and Bamboo Curtains. There is also an economic dimension, because people need to be free to invent and innovate. International competition is fierce, so countries with high labour-costs need to be clever if they want to sustain their standard-of-living; and cleverness has to be continually reinvented.

My strongest personal motivations, however, have been at the political level. People need to be free to think, and argue, and act. Surveillance chills behaviour, association and speech, and hence threatens democracy. Our political well-being depends not only on the rule of law and freedom of speech, but also on privacy. The term 'dissidentity' draws together seemingly disparate ideas about the sanctity of the ballot box, whistleblower protections, and the political deviants who intellectually challenge established doctrines, policies, institutions or governments (Clarke 2008).

How do you define or understand privacy?

Privacy is a human right, and the decades of attempts by US business interests to reduce it to a mere economic right have failed, and must fail. On the other hand, definitions that are built around 'rights' are very difficult to apply. And of course the definitions in data protection statutes around the world are so narrow and legalistic that they provide no basis for meaningful discussions.

Privacy is most usefully defined as an 'interest' - the interest that individuals have in sustaining a 'personal space', free from interference by other people and organisations (Morison 1973, Clarke 1997, Clarke 2006b).

Privacy has multiple dimensions (Clarke 1997). Data privacy is only one of them, but it attracts almost all of the attention. Particularly when discussing surveillance, it's essential that all of the dimensions be considered, not just the protection of personal data.

Personal communications are under observation as never before. And longstanding protections have been obliterated by a combination of technological accidents and the fervour of national security extremists. The freedom of personal behaviour is greatly undermined by many different forms of monitoring. The intrusiveness of some techniques is now reaching the point that we need to start talking about the privacy of personal experiences (e.g. what you read), and even of attitudes and thoughts. Finally, privacy of the physical person is not only negatively affected by the inroads made in the other dimensions, but is also under direct attack.

How do you define dataveillance? How did its definition come about?

By the middle of the 1980s, I was frustrated with the ongoing superficiality of the conversation. The discussion was still about 'computers and privacy', even though it was clear that both technologies and their applications were developing rapidly, and that deeper analysis was essential.

I coined 'dataveillance' with several purposes in mind:

Hence dataveillance was, and is, "the systematic monitoring of people's actions or communications through the application of information technology" (Clarke 1986, 1988, 2003).

It was clear in the 1980s that the monitoring of people through the data trails that they generate is far more economically efficient than previous forms of surveillance, and that it would be adopted rapidly and widely. As we know all too well, it has been.
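
To make the economics concrete, here is a toy version of 'computer matching', one of the dataveillance techniques catalogued in Clarke (1988). The sketch is mine, not part of the interview; the agencies, identifiers and figures are entirely hypothetical.

```python
# Toy illustration of 'computer matching' (Clarke 1988): two agencies'
# records are joined on a shared identifier and discrepancies flagged.
# All names, identifiers and figures are hypothetical.

welfare_records = {
    "ID-1001": {"name": "A. Citizen", "declared_income": 0},
    "ID-1002": {"name": "B. Resident", "declared_income": 12_000},
}

payroll_records = {
    "ID-1001": {"employer": "Acme Pty Ltd", "annual_salary": 55_000},
    "ID-1003": {"employer": "Widget Co", "annual_salary": 40_000},
}

def match(welfare, payroll):
    """Yield identifiers whose payroll income contradicts their welfare claim."""
    for pid, claim in welfare.items():
        job = payroll.get(pid)
        if job and job["annual_salary"] > claim["declared_income"]:
            yield pid, claim["declared_income"], job["annual_salary"]

for pid, declared, actual in match(welfare_records, payroll_records):
    print(f"{pid}: declared {declared}, payroll shows {actual}")

# The marginal cost of monitoring one more person is a dictionary lookup,
# not an extra human watcher -- hence the economic efficiency noted above.
```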

The basic theory of dataveillance needed to be articulated in a number of areas, in particular identity (Clarke 1994b, 1999b), the digital persona (Clarke 1994a, 2013c) and national identification schemes (Clarke 1987, 1992, 2006a, APF 2006). Because industry initiatives in such areas as digital signatures, identity authentication and identity management have embodied serious misunderstandings about key concepts, it has also proven necessary to express a consolidated model of entity, identity and (id)entity authentication and authorisation (Clarke 2009a).
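
The flavour of that consolidated model can be hinted at in code. What follows is a loose illustration of mine, not Clarke's formalisation: it merely distinguishes an underlying entity from the several identities it may present, and treats identity authentication as testing an assertion about one of them. All class and variable names are assumptions.

```python
# A loose sketch of the entity / identity distinction discussed in
# Clarke (2009a). Names and the credential check are illustrative
# assumptions, not the model's actual formalisation.
from dataclasses import dataclass, field

@dataclass
class Identity:
    """A presentation of an entity in some context, e.g. a persona or role."""
    identifier: str   # e.g. a username or account number
    credential: str   # evidence usable for authentication

@dataclass
class Entity:
    """An underlying real-world person or thing."""
    description: str
    identities: list[Identity] = field(default_factory=list)

def authenticate(identity: Identity, presented: str) -> bool:
    # Identity authentication: testing the assertion that the party
    # presenting the identifier is entitled to use that identity.
    return identity.credential == presented

alice = Entity("a human being")
alice.identities += [Identity("alice@example.com", "s3cret"),
                     Identity("pseudonym-42", "hunter2")]  # one entity, many identities

assert authenticate(alice.identities[0], "s3cret")
assert not authenticate(alice.identities[1], "wrong")
```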

Do you understand 'dataveillance' a little differently today than when you first introduced the idea to the world? What are the principal links between dataveillance and uberveillance?

The definition of 'dataveillance' appears to me to be as appropriate now as it was 25 years ago. On the other hand, the technologies have changed, and the organisations with a vested interest in promoting and conducting dataveillance have grown in number, in size, and in institutional and market power. So the importance of the discussion is far greater even than it was at that time.

In relation to surveillance more generally, I've had to adapt my thinking over time. Back in the mid-1980s, I distinguished:

There have been a number of technological developments that weren't anticipated in my original dataveillance paper, and I've consequently had to broaden my scope (Clarke 2009b, 2010a). Here are some of the important aspects that have emerged during the last 25 years:

As surveillance forms proliferated and were integrated, it became important to develop an over-arching concept. This emerged as 'ueberveillance' (Michael 2006, Michael & Michael 2006). There are several elements within the notion (Clarke 2010a). The apocalyptic - but regrettably, not unrealistic - interpretation is of surveillance that applies across all space and all time (omni-present), and supports some organisation that is all-seeing (omnivident) and even all-knowing (omniscient), at least relative to some person or object. Another sense of 'ueber' is 'exaggerated' or 'disproportionate', perhaps because the justification for the intrusion is inadequate, or the scope is excessive. The third interpretation is as 'meta', 'supra' or 'master', implying that information from multiple surveillance streams is consolidated, coordinated or centralised.

Where do you stand on the "dystopian" genre, e.g. Brave New World or 1984? Are these works a genuine and helpful critique of our political system and technological fixation?

I've drawn heavily on many aspects of dystopian and sci-fi literatures, not only in the surveillance arena, but in other areas as well (Clarke 1993a, 1993b, 2005, 2009c). For example, I'm currently re-reading 'The Diamond Age' (Stephenson 1995), because it's easily the most effective thought-experiment yet conducted on the impact of miniaturised 'aero-stats', and is consequently valuable pre-reading when preparing contributions to the current debates on drones.

The single most important work is 'We' (Zamyatin 1922). Orwell's '1984' was heavily derivative of Zamyatin, but it built on experience of both Stalin's regime and the UK's war-time Ministry of Information. In recent decades, the cyberpunk genre (of which Stephenson is a mature exponent) has been a rich source of ideas.

When reading such literatures, it's very important to keep the art and the hard-headed policy analysis distinct from one another. Art isn't constrained by the laws of physics, nor by the 'laws' of economics. It provides fore-warning, and pre-thinking, and in many cases extrapolation to the consequences of extremist forms of a technology or an application, or exploitation of them by extremist governments, corporations and political movements.

The extrapolation aspect once seemed to me to be of limited value, because I thought I lived in a world in which extremism would be largely kept in check. The last few decades have shown that sentiment to be naïve. Parliaments have retreated from their responsibility to regulate both the business sector and the Executive. And a handful of successful terrorist strikes has caused once relatively free nations to eat themselves, by implementing measures that were previously associated only with repressive regimes. So, unfortunately, even some of the more extreme ideas in the dystopian and sci-fi literatures have relevance.

I know you are not a fan of casting the net too far into the future, but do you see a time when the microchipping of humans, whether it be for medical or commercial or national security purposes, will become routine? And if yes, how would such a state of affairs impact on privacy?

I describe my philosophical stance as 'positively agnostic', by which I mean that there are questions that humans simply cannot even begin to answer. The source of matter, and of the various forms it takes, and the existence of G/god(s), are beyond me, and, I contend, are beyond other mere humans as well. Everyone is welcome to their spiritual framework and beliefs; but policy matters are much better dealt with in closer contact with the world we're living in. Along with metaphysical questions, I see all of us as being incapable of making judgements about distant human futures.

On the other hand, some technologies, and some applications, are readily extrapolated from the present, and some of the simpler disjunctions and shifts can be at least mulled over. In 1992-93, I had to enter into a lengthy discussion with the Editor of Information Technology & People, in order to be permitted to retain in a paper (Clarke 1994a) a passage that she regarded as alarmist and technically unrealistic:

"It has been technically feasible for some time, and is increasingly economically practicable, to implant micro-chips into animals for the purpose of identification and data-gathering. Examples of applications are to pets which may be stolen or lost, breeding stock, valuable stock like cattle, and endangered species like the wandering albatross. As the technology develops, there will doubtless be calls for it to be applied to humans as well. In order to discourage uncooperative subjects from removing or disabling them, it may be necessary for them to be installed in some delicate location, such as beside the heart or inside the gums"

My understanding is that the first chip-implantation in animals was in 1991, and the first in humans in 1998 - at that stage voluntarily, but with some subsequent instances nominally consensual but actually under duress. As short a lag as 5-10 years from prediction to reality doesn't justify even the term 'prescient', let alone 'visionary'; yet I had to argue strongly to get the passage published. It's remarkable how quickly technology salespeople managed to convert the idea from the unimaginable to an element of 21st century slavery.

Chips with moderate computational power, storage and communications capabilities are already embedded in a great many devices. Already some of those devices are embedded in humans. Pacemakers are being joined by replacement limbs and joints, which contain chips that, at the very least, perform functions relating to balance. Many products will carry chips in order to support maintenance (e.g. aircraft components) and product recall (e.g. potentially dangerous consumer goods), and it's highly likely that materials used in operating theatres (e.g. swabs) and endo-prostheses and endo-orthoses (e.g. titanium hips, stents) will carry them as well.

In addition to medical applications, two other contexts stand out. One is fashion, not only for technophiles but also for hedonists, such as night-club patrons. The other is control over the institutionalised. Anklets with embedded chips are applied to felons in gaols, to parolees, and even to people on bail and to people who have served their time but are deemed dangerous or undesirable. In a few short years, the practice leapt from an unjustifiably demeaning imposition to an economically motivated form of retribution and stigma. Chipping of the mentally handicapped, the aged and the young can be justified with no more effort than was needed to apply it to people on bail. The migration from anklet to body will be straightforward, based no doubt on the convenience of the subject, dressed up as though it were a choice that they made.

There are virtually no intrinsic controls over such developments, and virtually no legal controls either. For example, the US Food and Drug Administration's decision that there was no health reason to preclude the insertion of chips in humans was rapidly parlayed by the industry into 'approval' for chip implantation. There are many serious concerns about imposition by corporations and the State on individuals' data, communications, behaviour and bodies; but few are as directly intrusive as the prospect of the normalisation of insertion of computing and communication devices into humans.

My work on cyborgisation was originally predicated on the assumption that, for the reasonably foreseeable future, our successors would be technology-enhanced, but very much humans (Clarke 2005). Regrettably, it's become necessary to recognise a strong tendency for technology to be applied so as to demean, to deny self-determination, and to impose organisational dictates on individuals (Clarke 2011b).

Do you believe those engaged in auto-ID technology distinguish between the locating and tracking of objects and bodies, and the monitoring of the mind?

I've previously used the term 'sleep-walking' for the manner in which people have overlooked the march of surveillance technologies: "the last 50 years of technological development has delivered far superior surveillance tools than Orwell imagined. And we didn't even notice" (Clarke 2001). Others prefer the 'warm frog' or the 'zombie' metaphor.

During the Holocaust of 1940-45, each of the successive impositions was grossly inhuman in itself, and their effects were cumulative. This culminated in vast numbers of people trudging into their place of execution. In this case, on the other hand, the imposition is initially pitched as being exciting, fashionable and convenient. The innately human qualities of respecting apparent authority, and of blindly trusting, result in people becoming numbed, inured, and accepting.

The sceptic, and the analyst, have no trouble recognising that location data becomes tracking data, and enables inferences to be drawn about each person's trajectory through physical space, including their likely destinations (Clarke 1999c, Clarke & Wigan 2011, Michael & Clarke 2013). Similarly, some categories of people understand that rich data-sets enable inferencing about a person's interests, social networks, and even attitudes, intentions and beliefs. Some have even noticed that digital books and newspapers, the aggressive exploitation of intellectual property laws, rampant 'counter-terrorism' laws, and the migration of electronic content from personal possessions to cloud-storage, have together given rise to the contemporary reality of 'experience surveillance'.
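
A minimal sketch (mine, on fabricated data) shows how little is needed for location data to become tracking data: a handful of timestamped observations suffices to infer a likely home, a likely workplace, and a recurring visit that may be highly sensitive.

```python
# Fabricated data: how raw location points support inferences about a life.
from collections import Counter

# (hour_of_day, place) observations, e.g. from cell-tower or GPS logs
observations = [
    (2, "suburb-A"), (3, "suburb-A"), (23, "suburb-A"),      # overnight
    (10, "office-park-B"), (11, "office-park-B"), (15, "office-park-B"),
    (19, "clinic-C"), (19, "clinic-C"),                      # a weekly pattern
]

def usual_place(obs, hours):
    """Most frequent place observed during the given hours of the day."""
    counts = Counter(place for hour, place in obs if hour in hours)
    return counts.most_common(1)[0][0] if counts else None

home = usual_place(observations, hours=range(0, 6))      # likely 'suburb-A'
work = usual_place(observations, hours=range(9, 18))     # likely 'office-park-B'
evening = usual_place(observations, hours=range(18, 22)) # 'clinic-C': sensitive

print(f"likely home: {home}, likely work: {work}, recurring visit: {evening}")
```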

For most people, however, such things are the stuff of novels, not part of their world. The suggestion that something is wrong is unwelcome, because it intrudes into their enjoyment of a world that they assume is here to stay and theirs to exploit. Most social issues are of interest only to an intelligentsia. Moreover, abstract 'mind surveillance' matters are much more difficult to convey in a graphic manner than, say, starving children living in deserts, street-people, indigenous young people living in squalor, and refugee families imprisoned behind razorwire.

Because of my background in technology, I'm uncomfortable using terms such as 'mind-monitoring' to refer to this rapidly developing aspect of surveillance. There is no doubt, however, that we must find ways to convey to a much broader public how much insight organisations and their data-gorging inference engines are gaining into our individual psyches.

Are you an optimist insofar as privacy surviving? There are those who are already speaking of privacy in terms of it being dead. What is your response to this?

There are a number of variants of the 'privacy is dead' meme (Clarke 2003, 2009c): that 'you only need privacy if you have something to hide'; that 'you have zero privacy anyway, get over it'; that ubiquitous transparency is a better protection than privacy; and that the new generation no longer cares about privacy.

There are straightforward responses to all of these schools of thought. Underlying them are the facts that privacy is a human need, and that human needs don't 'survive'; they just are. For any of these arguments to be accepted, a fundamental change in the human psyche would have to have occurred merely because CCTV, search engines and/or social networking services had been invented.

People who believe that 'we're all sinners' would welcome a post-privacy era on the basis that 'you only need privacy if you have something to hide'. But there's a problem with that proposition. Quite simply, everyone has things to hide (APF 2006). Not least, when anti-privacy advocates put their heads over the parapet, they tend to need to hide such things as their bank accounts, their blogs and Twitter accounts, and even their whereabouts.

Similarly, McNealy (previously of Sun) (Sprenger 1999), Schmidt (Google) (Metz 2009) and Zuckerberg (Facebook) (Kirkpatrick 2010), like celebrities everywhere, continue to keep their passwords and PINs to themselves, and for the most part obscure their whereabouts and their travel plans. The fact that they do so successfully suggests that 'you have zero privacy' is something of an over-statement.

David Brin called for ubiquitous transparency, on the grounds that he sees it as a better form of protection of freedoms than secrecy (Brin 1998). His is essentially an argument that sousveillance (Mann et al. 2003, Mann 2005) will solve all ills. It's an attractive thesis, but it's based on some key assumptions. It would require the enforcement of open practices on law enforcement agencies and operators of surveillance apparatus such as CCTV control rooms. In effect, we need the powerful to be convinced, or forced, to do what they have seldom done: not exercise their power. That idea is, quite simply, fanciful. It's one thing to switch the focus from hiding data to providing people with control over their own data; but it's quite another to suggest that the privacy protections that have been achieved should be abandoned in favour of a naive notion that 'the weak shall become powerful'.

As to the fourth form of the 'privacy is dead' meme, the myth that privacy attitudes of the new generation are different has arisen from a very basic misunderstanding. Middle-aged people look at young people, perceive them to be different from middle-aged people, and conclude that therefore the new generation is different. What's needed is a comparison of young people with what middle-aged people were like when they were young, and preferably (although with greater difficulty) with what middle-aged people would have been like when they were young if they had experienced the same conditions as young people do now.

Every cohort, while young, takes risks. Every cohort becomes progressively more risk-averse as it gets older and gains more things to hide. These include assets, such as possessions worth stealing and informational assets from which they extract advantages. Things worth hiding also include liabilities, such as financial and psychic debts, and informational liabilities such as a stigmatised medical condition, a criminal record, a failed study-course, a failed relationship.

Because of the naive use of social media since about 2004, many people are being bitten by the exposure of embarrassing information and images, or are gaining vicarious experience of other people's misfortunes. So the reasonable expectation is that the iGeneration, i.e. those born since the early 1990s, will be more privacy-sensitive than their predecessors, not less (Clarke 2009d).

Privacy isn't 'surviving', and won't 'survive'. It just 'is', and will continue to 'be'.

Turning now to the issue of how we save privacy ... How would most privacy experts understand "ethics" in their work? How important is the question of ethics in this debate, or is this a question that will become increasingly redundant?

From the very outset, privacy protection has been conceived primarily to serve corporate and government interests, rather than human values (Clarke 2000). The agenda was set by the Columbia University political scientist Alan Westin (Westin 1967, 1971, Westin & Baker 1974). He developed the notion of 'fair information practices' (often referred to as FIPS). FIP-based privacy regimes have been described as an 'official response' which was motivated by 'administrative convenience', and which legitimated dataveillance measures in return for some limited procedural protections (Rule 1974, Rule et al. 1980).

The institution that published the first international set of Data Protection Principles was formed to address economic, not social, issues. Its highly influential document (OECD 1980) consequently codified FIPS, and embedded the dominance of institutional over individual interests. The EU subsequently strengthened the protections. The US, on the other hand, cut the OECD Guidelines down, in the form of a 'safe harbor' for American corporations (USDOC 2000), and has been seeking to weaken them further by using Australia and other members of the Asia-Pacific Economic Cooperation to create an APEC Privacy Framework (APEC 2005) that offers consumers and citizens even less than the seriously inadequate 'safe harbor' formulation.

It would therefore be a very welcome new development if privacy protections were to be conceived on the basis of an ethical analysis that puts people's interests before those of governments and corporations. Unfortunately, such a change appears highly unlikely.

In any case, ethics is seen by most people as being primarily confined to abstract judgements about good and evil. Ethical analyses are valuable as a component of ex post facto evaluations of actions that have been already taken, and reviews of institutional structures, processes and decision criteria. But there are doubts as to whether ethics ever have volitional or motivational power, and hence influence the behaviour of organisations (Clarke 1999d).

Is there a place for religious sensitivities in the dataveillance and uberveillance debate? The taboo on making mention, for instance, of the anxiety over the 'branding' or the microchipping of humans has to a large extent been lifted, and writers are engaging with this question not only from a civil libertarian point of view but also from a religious point of view. Is this contribution to the debate to be welcomed?

The Book of Revelation is expressed in mystical style, and the notion of 'the mark of the beast' can be interpreted in a great many ways. Viewed from a secular perspective, the intensity of the expression, and of its interpretation by many Christians, reflects the revulsion felt by many people about physical intrusions into their selves, and the exercise of power over their behaviour by a malevolent force.

Personally, I feel discomfort when people use '666' symbolism. I prefer to focus on evidence from our experience of the physical world, rather than ascientific assertions. I also doubt whether many of the uncommitted are won over by such arguments. But I can't and don't deny the legitimacy of approaches other than my own. The horror of impositions on our physical selves will be evidenced in many ways, and communicated in many ways.

So are we largely dependent on legislation, self-regulation, or some sort of "default" in the technology?

There's no doubt that we need a network of interacting protections - natural, organisational, technical and regulatory - designed so as to be mutually reinforcing.

Many of the natural protections have been undermined by such changes as the digitisation of content, increases in transmission bandwidths, and greatly reduced costs. Nonetheless, some remain, such as the self-interest of organisations, and competition among them. A variety of organisational protections suit the needs of individuals, as well as the companies and government agencies that implement them, such as data integrity and data security safeguards.

A great many technical protections exist, and more are being developed all the time. The problem is that most of the developers are employed by organisations that seek to invade privacy and exploit personal data. So for every consumer-protective safeguard that's produced, there are scores of privacy-invasive features and products, and many countermeasures against the safeguards. W3C designed far more serious privacy-invasive features into HTML5 than it did privacy-protective features. And Mozilla's Firefox is similarly marketer-friendly and consumer-hostile.
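
By way of contrast with the privacy-invasive mainstream, the sketch below illustrates one kind of consumer-protective safeguard; it is mine, not drawn from the interview. It shows keyed pseudonymisation: direct identifiers are replaced with stable pseudonyms before data are analysed. The key and record fields are hypothetical.

```python
# Illustration of a privacy-protective technical safeguard: keyed
# pseudonymisation of direct identifiers prior to analysis.
import hashlib
import hmac

SECRET_KEY = b"held-separately-by-the-data-custodian"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a stable pseudonym.

    HMAC rather than a bare hash, so an outsider who guesses a name
    cannot simply hash it and confirm a match."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "B. Resident", "postcode": "2611", "diagnosis": "X"}
safe_record = {**record, "name": pseudonymise(record["name"])}
print(safe_record)  # analysts see a stable pseudonym, not a name
```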

As an antidote to this malaise, I've argued that privacy advocacy organisations need to publish their own Standards, to compete with the Standards written by industry and government to satisfy their own needs (Clarke 2010b). A document such as the 'Policy Statement re Visual Surveillance' of the Australian Privacy Foundation (APF 2009) could provide a basis for a Civil Society Standard. A generic set of Meta-Principles for Privacy Protection is enunciated in APF (2013).

With market failure so evident, it would be expected that regulation would be imposed. In many countries, however, legislatures have been failing in their duty to protect the public against excesses by corporations and government agencies. The chimera of 'self-regulation' has been invoked, as though it could be effective as well as efficient. It has uniformly failed to satisfy the interests of the public. Quite simply, wolves self-regulate for the benefit of wolves, not sheep. And in any case there are many lone wolves out of the reach of the associations whose 'industry codes' create the pretence of a privacy-protective regime.

Faced with mounting scepticism about self-regulation, partnerships between government and business have invented the term 'co-regulation'. The concept has merit, but only if it is implemented in a meaningful manner, including a legislative framework that stipulates a satisfactory set of privacy principles, delegated codes that bind all parties, and a watchdog agency with enforcement powers and resources (Clarke 1999a).

Some countries, primarily in Europe, have Data Protection Commissioners that have some enforcement powers at least, and that can therefore be regarded as regulators - although they have little or no coverage of privacy of personal communications or behaviour, nor of privacy of the physical person. Some countries, such as Australia, have a 'watch-dog' oversight agency rather than a 'guard-dog' regulator, because the organisation lacks power, and in many cases resources as well. Such agencies typically fail to even fully exercise the influence that they could have, e.g. by failing to operationally define public expectations in even the most straightforward areas such as data security safeguards (APF 2012, Clarke 2013b). Some countries, such as the USA, lack even oversight.

Legislatures will only impose requirements on organisations, and will only empower regulators (and force them to do their jobs) to the extent that the public makes clear that that's what they expect. Consumer and privacy advocacy organisations need to mobilise much more activity, coordinate it, and project it through the media, in order to achieve visibility.

Ultimately, 'who will guard the guards themselves'?

The full version of the aphorism is:

'Quis custodiet ipsos custodes? Ecce, ipsi quos custodiunt custodes'

'Who will guard the guardians themselves? Lo, the very ones whom the guardians guard'

On the one hand, this can be interpreted as an argument for the merits of sousveillance. On the other, it underlines the networked nature of effective democratic systems, whereby all powers are subject to controls, but those controls are a web rather than a simple hierarchy, and 'the public' are part of that web, through such means as periodic elections, petitions, citizen initiatives, recalls, referenda, civil disobedience, demonstrations, and revolutions.

I've long used an aphorism that is a distant relation of the Latin dictum above:

Privacy doesn't matter until it does

By this I mean that most people, most of the time, accept what happens. If what happens is unfriendly, their acceptance is a bit sullen. Whilever their disenchantment remains below some critical threshold, they simply bear a bit of a grudge. When that threshold is exceeded, however, action happens. And when the feeling is held by many people, the action is like a dam-wall breaking - swift, vicious and frequently decisive (Clarke 1996, 2006c). Large numbers of corporate privacy disasters attest to that (Clarke 2013a), as do multiple failed national identification schemes, and hundreds of knee-jerk privacy laws throughout the USA and its constituent States.

What is the role of privacy advocacy groups such as Privacy International and the Australian Privacy Foundation in the debate with political and corporate entities?

A substantial set of privacy advocacy organisations exists around the world (Bennett 2008, PAR 2013), together with powerful voices such as the Washington-based Electronic Privacy Information Center (EPIC) and London-based Privacy International (PI).

Advocacy organisations aim for a future in which all organisations collect, use, retain and disclose personal data only in ways that are transparent, justified and proportionate, and subject to mitigation measures, controls and accountability (APF 2013). In order to achieve that condition, a number of enabling measures are necessary (Clarke 2012). Evaluation of proposals is essential, in accordance with the accumulated knowledge about Privacy Impact Assessments (Clarke 2011a, de Hert & Wright 2012).

To contribute to evaluations and achieve privacy-positive outcomes, advocacy organisations need to conduct research, establish policy statements, write submissions, give evidence, and advocate the public interest verbally in meetings and in the media. That requires all of the accoutrements of a civil society organisation, including an appropriate constitution, governance, business processes, and resources. Where necessary, an advocacy organisation must conduct campaigns for enhanced laws, or (far too often) against projects such as national identification schemes, unjustified cyber-surveillance and visual surveillance, and excessive media uses of surveillance.

It's seriously challenging to attract enough people with sufficient expertise and energy. Experience has shown, however, not only in the USA, the UK, Germany, Canada and Australia, but also in many other countries, that advocacy organisations that are run with professionalism and vigour can have very substantial impacts on policy debates, enthuse the media, mobilise the public, and cause politicians to ask hard questions, and to act.


References

APEC (2005) 'APEC Privacy Framework' Asia-Pacific Economic Cooperation, 2005, at http://www.apec.org/Groups/Committee-on-Trade-and-Investment/~/media/Files/Groups/ECSG/05_ecsg_privacyframewk.ashx

APF (2006) 'If I've got nothing to hide, why should I be afraid?' Australian Privacy Foundation, December 2006, at http://www.privacy.org.au/Resources/PAS-STH.html

APF (2009) 'Policy Statement re Visual Surveillance, incl. CCTV' Australian Privacy Foundation, October 2009, at http://www.privacy.org.au/Papers/CCTV-1001.html

APF (2012) 'Policy Statement on Information Security' Australian Privacy Foundation, December 2012, at http://www.privacy.org.au/Papers/PS-Secy.html

APF (2013) 'Meta-Principles for Privacy Protection' Australian Privacy Foundation, March 2013, at http://www.privacy.org.au/Papers/PS-MetaP.html

Bennett C. (2008) 'The Privacy Advocates: Resisting the Spread of Surveillance' MIT Press, 2008

Brin D. (1998) 'The Transparent Society' Addison-Wesley, 1998

Clarke R. (1986) 'Information Technology and "Dataveillance"' Proc. Symp. on Comp. & Social Responsibility, Macquarie Uni. Dept of Comp. Sci., September 1986

Clarke R. (1987) '"Just Another Piece of Plastic for Your Wallet": The Australia Card' Prometheus 5,1 June 1987. Republished in Computers & Society 18,1 (January 1988), with an Addendum in Computers & Society 18,3 (July 1988), at http://www.rogerclarke.com/DV/OzCard.html

Clarke R. (1988) 'Information Technology and Dataveillance' Comm. ACM 31,5 (May 1988) Re-published in C. Dunlop and R. Kling (Eds.), 'Controversies in Computing', Academic Press, 1991, at http://www.rogerclarke.com/DV/CACM88.html

Clarke R. (1992) 'The Resistible Rise of the National Personal Data System' Software Law Journal 5,1 (January 1992), at http://www.anu.edu.au/people/Roger.Clarke/DV/SLJ.html

Clarke R. (1993a) 'A 'Future Trace' on Dataveillance: Trends in the Anti-Utopia / Science Fiction Genre' Xamax Consultancy Pty Ltd, March 1993, at http://www.rogerclarke.com/DV/NotesAntiUtopia.html

Clarke R. (1993b) 'Asimov's Laws of Robotics: Implications for Information Technology' IEEE Computer 26,12 (December 1993) 53-61 and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html

Clarke R. (1994a) 'The Digital Persona and Its Application to Data Surveillance' The Information Society 10,2 (June 1994), at http://www.rogerclarke.com/DV/DigPersona.html

Clarke R. (1994b) 'Human Identification in Information Systems: Management Challenges and Public Policy Issues' Info. Technology & People 7,4 (December 1994), at http://www.rogerclarke.com/DV/HumanID.html

Clarke R. (1996) 'Privacy, Dataveillance, Organisational Strategy' (the original version was a Keynote Address for the I.S. Audit & Control Association Conf. (EDPAC'96), Perth, 28 May 1996). At http://www.rogerclarke.com/DV/PStrat.html

Clarke R. (1997) 'Introduction to Dataveillance and Information Privacy, and Definitions of Terms' Xamax Consultancy Pty Ltd, August 1997, at http://www.rogerclarke.com/DV/Intro.html

Clarke R. (1999a) 'Internet Privacy Concerns Confirm the Case for Intervention', Communications of the ACM, 42, 2 (February 1999) 60-67, at http://www.rogerclarke.com/DV/CACM99.html

Clarke R. (1999b) 'Anonymous, Pseudonymous and Identified Transactions: The Spectrum of Choice', Proc. IFIP User Identification & Privacy Protection Conference, Stockholm, June 1999, at http://www.rogerclarke.com/DV/UIPP99.html

Clarke R. (1999c) 'Person-Location and Person-Tracking: Technologies, Risks and Policy Implications' Proc. 21st International Conf. Privacy and Personal Data Protection, Hong Kong, September 1999. Revised version published in Info. Techno. & People 14, 1 (2001) 206-231, at http://www.rogerclarke.com/DV/PLT.html

Clarke R. (1999d) 'Ethics and the Internet: The Cyberspace Behaviour of People, Communities and Organisations' Proc. 6th Annual Conf. Aust. Association for Professional and Applied Ethics, Canberra, 2 October 1999. Revised version published in Bus. & Prof'l Ethics J. 18, 3&4 (1999) 153-167, at http://www.rogerclarke.com/II/IEthics99.html

Clarke R. (2000) 'Beyond the OECD Guidelines: Privacy Protection for the 21st Century', Xamax Consultancy Pty Ltd, January 2000, at http://www.rogerclarke.com/DV/PP21C.html

Clarke R. (2001) 'While You Were Sleeping ... Surveillance Technologies Arrived', Australian Quarterly 73, 1 (January-February 2001) 10-14, at http://www.rogerclarke.com/DV/AQ2001.html

Clarke R. (2003) 'Dataveillance - 15 Years On' Invited Presentation at the Privacy Issues Forum run by the New Zealand Privacy Commissioner, Wellington, 28 March 2003, at http://www.rogerclarke.com/DV/DVNZ03.html

Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Prepared for an Invited Presentation to the Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, at http://www.rogerclarke.com/SOS/HAH0505.html

Clarke R. (2006a) 'National Identity Schemes - The Elements' Xamax Consultancy Pty Ltd, 8 February 2006, at http://www.rogerclarke.com/DV/NatIDSchemeElms.html

Clarke R. (2006b) 'What's Privacy?' Prepared for a Workshop at the Australian Law Reform Commission on 28 July 2006, Xamax Consultancy Pty Ltd, July 2006, at http://www.rogerclarke.com/DV/Privacy.html

Clarke R. (2006c) 'Make Privacy a Strategic Factor - The Why and the How' Cutter IT Journal 19, 11 (October 2006), at http://www.rogerclarke.com/DV/APBD-0609.html

Clarke R. (2008) 'Dissidentity: The Political Dimension of Identity and Privacy' Identity in the Information Society 1, 1 (December, 2008) 221-228, Preprint at http://www.rogerclarke.com/DV/Dissidentity.html

Clarke R. (2009a) 'A Sufficiently Rich Model of (Id)entity, Authentication and Authorisation' Proc. IDIS 2009 - The 2nd Multidisciplinary Workshop on Identity in the Information Society, LSE, London, 5 June 2009, at http://www.rogerclarke.com/ID/IdModel-090605.html

Clarke R. (2009b) 'A Framework for Surveillance Analysis' Xamax Consultancy Pty Ltd, August 2009, at http://www.rogerclarke.com/DV/FSA.html

Clarke R. (2009c) 'Surveillance in Speculative Fiction: Have Our Artists Been Sufficiently Imaginative?' Xamax Consultancy Pty Ltd, October 2009, at http://www.rogerclarke.com/DV/SSF-0910.html

Clarke R. (2009d) 'The Privacy Attitudes of the iGeneration' Xamax Consultancy Pty Ltd, November 2009, at http://www.rogerclarke.com/DV/MillGen.html

Clarke R. (2010a) 'What is Überveillance? (And What Should Be Done About It?)' IEEE Technology and Society 29, 2 (Summer 2010) 17-25, at http://www.rogerclarke.com/DV/RNSA07.html

Clarke R. (2010b) 'Civil Society Must Publish Standards Documents' Proc. Human Choice & Computers (HCC'10), IFIP World Congress, Brisbane, September 2010, at http://www.rogerclarke.com/DV/CSSD.html

Clarke R. (2011a) 'An Evaluation of Privacy Impact Assessment Guidance Documents' International Data Privacy Law 1, 2 (March 2011), at http://www.rogerclarke.com/DV/PIAG-Eval.html

Clarke R. (2011b) 'Cyborg Rights' IEEE Technology and Society 30, 3 (Fall 2011) 49-57, at http://www.rogerclarke.com/SOS/CyRts-1102.html

Clarke R. (2012) 'The Challenging World of Privacy Advocacy' IEEE Technology & Society, September 2012, at http://www.rogerclarke.com/DV/PAO-12.html

Clarke R. (2013a) 'Vignettes of Corporate Privacy Disasters' Xamax Consultancy Pty Ltd, January 2013, at http://www.rogerclarke.com/DV/PrivCorp.html

Clarke R. (2013b) 'Information Security for Small and Medium-Sized Organisations' Xamax Consultancy Pty Ltd, January 2013, at http://www.xamax.com.au/EC/ISInfo.pdf

Clarke R. (2013c) 'Persona Missing, Feared Drowned: The Digital Persona Concept, Two Decades Later' Information Technology & People 26, 3 (June 2013), at http://www.rogerclarke.com/ID/DP12.html

Clarke R. & Wigan M. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, at http://www.rogerclarke.com/DV/YAWYB-CWP.html

Kirkpatrick M. (2010) 'Facebook's Zuckerberg Says The Age of Privacy is Over' ReadWriteWeb, 9 January 2010, at http://www.readwriteweb.com/archives/facebooks_zuckerberg_says_the_age_of_privacy_is_ov.php

Mann S. (2005) 'Equiveillance: The equilibrium between Sur-veillance and Sous-veillance' Proc. Computers, Freedom & Privacy, Toronto, May 2005, at http://idtrail.org/files/Mann,%20Equiveillance.pdf

Mann S., Nolan J. & Wellman B. (2003) 'Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments' Surveillance & Society 1, 3 (June 2003) 331-355, at http://www.surveillance-and-society.org/articles1(3)/sousveillance.pdf

Metz C. (2009) 'Google chief: Only miscreants worry about net privacy - "If you don't want anyone to know, don't do it"' The Register, 7 December 2009, at http://www.theregister.co.uk/2009/12/07/schmidt_on_privacy/

Michael M.G. (2006) 'Consequences of Innovation' Unpublished Lecture Notes No. 13 for IACT405/905 - Information Technology and Innovation, School of Information Technology and Computer Science, University of Wollongong, Australia, 2006

Michael K. & Michael M.G. (2006) 'Towards chipification: the multifunctional body art of the net generation' Proc. Conf. Cultural Attitudes Towards Technology and Communication, 28th June - 1st July 2006, Tartu, Estonia, pp. 622-641

Michael K. & Clarke R. (2013) 'Location and Tracking of Mobile Devices: Überveillance Stalks the Streets' Computer Law & Security Review 29, 3 (June 2013), at http://www.rogerclarke.com/DV/LTMD.html

Morison W.L. (1973) 'Report on the Law of Privacy' Govt. Printer, Sydney 1973

OECD (1980) 'Guidelines on the Protection of Privacy and Transborder Flows of Personal Data', Organisation for Economic Cooperation and Development, Paris, 1980, at http://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm

PAR (2013) 'Privacy Advocates Register' PrivacyAdvocates.ca, at http://privacyadvocates.ca/

Rule J.B. (1974) 'Private Lives and Public Surveillance: Social Control in the Computer Age' Schocken Books, 1974

Rule J.B., McAdam D., Stearns L. & Uglow D. (1980) 'The Politics of Privacy' New American Library 1980

Sprenger P. (1999) 'Sun on Privacy: "Get Over It"' Wired Magazine, 26 January 1999, at http://www.wired.com/politics/law/news/1999/01/17538

Stephenson N. (1995) 'The Diamond Age' Bantam Books, 1995

USDOC (2000) 'Safe Harbor' U.S. Department of Commerce, 2000, at http://export.gov/safeharbor/

Westin A.F. (1967) 'Privacy and Freedom' Atheneum 1967

Westin A.F., Ed. (1971) 'Information Technology in a Democracy' Harvard University Press, Cambridge, Mass., 1971

Westin A.F. & Baker M.A. (1974) 'Databanks in a Free Society: Computers, Record-Keeping and Privacy' Quadrangle 1974

De Hert P. & Wright D. (eds) (2012) 'Privacy Impact Assessments: Engaging Stakeholders in Protecting Privacy' Springer, 2012

Zamyatin E. (1922) 'We' Penguin, 1922, 1990


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


