Research Use of Personal Data

Roger Clarke

Principal, Xamax Consultancy Pty Ltd, Canberra

Visiting Fellow, Department of Computer Science, Australian National University

Comments to a Panel Session of the National Scholarly Communications Forum on 'Privacy: Balancing the Needs of Researchers and the Individual's Right to Privacy under the New Privacy Laws', Australian Archives, 9 August 2002

Version of 9 August 2002

© Xamax Consultancy Pty Ltd, 2002


I earn my income as a consultant, primarily in the eBusiness arena. The reasons that I've been invited to comment on this topic are that I spent a decade as a senior academic (and continue in roles as a Visiting Fellow and Visiting Professor, and as a supervisor of postgraduate research), and that I've been a public interest advocate since 1972, particularly in relation to information privacy matters.

I'll make a few comments about researchers' interests, then a few about the privacy interest. That will be followed by some points about privacy laws, and some conclusions.

Researchers' Interests

Researchers commonly make some presumptions, which are often presumptive, and in many cases are downright presumptuous.

The first presumption they make is that the research that they want to conduct is of earth-shattering importance. That's occasionally true, but usually false.

The next presumptions they commonly make are that data ought to be available to enable them to perform their research, and that it should be available over the long term. This needs to be examined a little more closely.

During the twentieth century, organisations became larger and more remote from the people they deal with, and increasingly applied rational forms of managerialism. These tendencies have resulted in far less dependence on personal contact, and much more reliance on data. Far more data trails have been created, they have been made much more intensive, and multiple parts of a person's life have been inter-related much more than was previously the case. Since the early 1950s, information technology has been applied increasingly intensively in support of data management (Clarke 1988).

During the last two decades of the twentieth century, technology was also applied with vigour to data capture. Beyond mere turnaround documents that enable transaction data to be acquired more cheaply, people have been inveigled into bearing the costs of data capture, for example at ATMs, on touch-pad telephones, and more recently in web-browsers. A current explosion is in person location and tracking technologies, such as much more accurate location methods for cellular phones and other mobile devices, and monitoring of motor vehicle movements (Clarke 1999).

Researchers are, without doubt, going to expect these vast collections of very finely detailed information about human behaviour to be available to their microscopes. Moreover, many researchers are not going to be satisfied with unidentified data, but will demand, as of right, the ability to identify the individuals, and to inter-relate multiple streams of data about the same person.

It's agreed that some small percentage of research activity may be demonstrably of such great social benefit that individual interests need to be over-ridden in favour of the public good. But every researcher is likely to make claims of this kind. Are 'ethics committees' really credible mechanisms to ensure that the vast majority of claims are denied, and only the select few enabled; or are they merely auto-approval processes?

Privacy Interests

The privacy interest has been well-documented, and yet it is still amazingly poorly understood. Privacy, the cynics say, is only for people who have something to hide; but that's everybody. People want to deny information to other people, and especially to powerful and untrustworthy corporations and government agencies. They want to deny information in the short term, and the medium term, and even the long term. Some privacy interests even extend beyond the person's death.

It's challenging to establish reasonable guidelines for the expiry of privacy interests. Consider these vignettes:

A previous panellist (the Australian Privacy Foundation's Tim Dixon) referred to the Privacy Commissioner's surveys of privacy attitudes, and the high levels of concern that they indicated. What needs to be borne in mind is that those figures dramatically understate the level of public concern. Refusal rates on telephone surveys generally are extraordinarily high. In the case of that privacy survey, only one-fifth of the people called actually participated. What does an 80% refusal rate do to non-response bias? And if the analyst makes reasonable presumptions about the privacy attitudes of the non-respondents, what does it do to the statistics that describe privacy attitudes?
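The arithmetic behind that point can be sketched quickly. The figures below are hypothetical (the actual survey's sample size and concern rate are not reproduced here); the point is that with 80% refusal, even a modest assumption about non-respondents shifts the population estimate:

```python
# Hypothetical illustration of non-response bias in a phone survey.
# All numbers are assumptions for the sketch, not the actual survey's figures:
# 1000 people called, 20% participation, 90% of respondents express concern.

called = 1000
respondents = 200                   # 20% participation, i.e. 80% refusal
concerned_among_respondents = 0.90

# The naive estimate simply ignores the 800 non-respondents.
naive_estimate = concerned_among_respondents

# If refusing a cold-call survey is itself a privacy-protective act, then
# non-respondents may be at least as concerned as respondents. Bracket the
# population rate by varying the assumed non-respondent concern rate.
def overall_rate(p_nonrespondent):
    return (respondents * concerned_among_respondents
            + (called - respondents) * p_nonrespondent) / called

low = overall_rate(concerned_among_respondents)   # non-respondents equally concerned
high = overall_rate(1.0)                          # non-respondents all concerned

print(f"naive: {naive_estimate:.2f}, bracketed: [{low:.2f}, {high:.2f}]")
```

Under these assumptions the respondent-only figure of 0.90 is only the floor of the plausible range; the analyst's presumption about the silent 80% dominates the result.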

Back in the late 1980s, I warned that the privacy-intrusive practices of government agencies and corporations created enormous risks for data quality. I argued that we were, even then, training people to omit data, falsify data, provide misleading data, and use multiple identities. We're now in an era when every privacy advocate has to recommend to people that they think before they yield up any data to any organisation, and refuse to do so whenever they consider it inappropriate. Responses to the Privacy Commissioner's survey show that the public doesn't need privacy advocates to tell them that. They worked it out all by themselves.

Privacy Laws

But surely, one might ask, hasn't all of this been reflected in the privacy laws that are mushrooming around the country? To answer that requires a quick review of the relevant history.

All of the world's privacy legislation has been enacted within the 'fair information practices' (FIPs) paradigm. This was an approach developed by Alan Westin at Columbia, a servant of big business and big government at the time, and ever since. It was devised to be a minimum response, which would ward off the public and provide the appearance of protecting privacy, but which would have the least practicable impact on business. FIPs was developed in the late 1960s, implemented in the 1970s, and codified in the OECD Guidelines of 1980 (Clarke 1989). That document very explicitly states that its purpose is to facilitate international business, not to protect privacy. The FIPs approach has been refined and extended in Europe, but the United States, through the agency nominally responsible for consumer protection, the Federal Trade Commission (FTC), has actually sought to reduce those limited rights still further.

The FIPs model was seriously deficient when it was established, and, moreover, the OECD Guidelines reflect technology of the 1970s. It is vital to public trust in the organisations that people have to deal with that a new philosophy be adopted, and a new round of legislation begun.

Organisations must be required to justify to the public the systems that they want to implement, and the features those systems are intended to contain. Powerful technologies must be subject to not only environmental impact statements, but also social impact statements, with refusals, conditional licences, enforcement, and gaol terms for abuse. Multiple identities and identifiers, anonymity, and pseudonymity must be sustained as the norm, and new procedures and privacy-enhancing technologies stimulated to protect them. Application of biometrics and DNA to identification and identity authentication must be banned until they are understood and subjected to tight regulation. Privacy protections must be comprehensive, covering privacy of the physical person, of personal behaviour, of personal communications, as well as of personal data. See Clarke (2000), or for a shorter version Clarke (2001).

You're welcome to see these as just plaintive cries, by an aging pinko radical, against dehumanising bureaucracies and technologies. But ponder whether such a change might be necessary, in order to avoid the collapse of the claimed legitimacy of government authority, and of consumers' preparedness to deal honestly with manipulative and downright cheating corporations, whose sole legislated responsibility is to 'maximise shareholder value'. The public has countermeasures at its disposal, and an increasing proportion of the population is using them.

Returning to the workaday world, Australia acceded to the OECD Guidelines in 1984, but failed to implement even the most minimal legislation until 1988. Moreover, that statute was passed only after the Labor Government had failed to gain public acceptance of its Australia Card, and as a condition of the Opposition's acceptance of the enhanced Tax File Number. No State enacted FIPs-style legislation until 1998, and even now only two States have statutes in place.

What's even worse is that the N.S.W. statute of 1998 is the world's worst implementation of the OECD Guidelines, racked with omissions, exceptions and exemptions. Since then, Victoria has enacted two statutes, the first of which establishes orthodox, OECD-style privacy regulation of the State's private sector, but the second of which has as its primary purpose the authorisation of a vast array of extraneous uses of health care data.

The most scurrilous of the legislation is, however, the Commonwealth's Privacy Amendment Act of 2000, which purports to regulate the Australian private sector. It is, quite simply, an anti-privacy statute, devised by the most reactionary and anti-liberties Attorney-General this country has ever endured, one who generally declines to consult with the public, and who, when he does, simply ignores the advice that he is offered. The 2000 amendment legitimises a wide array of corporate behaviours that the public opposes, it is designed to prevent such limited protections as it provides from being enforced, and it was so disastrously badly drafted that it is now 100 pages of confused and confusing legalese that makes no-one but the lawyers happy (Clarke 2000).

Far from moving beyond the 1970s, the two alternative parties of government in Australia are in denial, one enslaved by the rationalist economists' utterly discredited justification of American corporate philosophy, and the other stuck with the dictum of the architect of the Australia Card, Neal Blewett, that 'privacy is a bourgeois right'.

Conclusions


On the one hand, I would argue that researchers need to be seriously constrained in regard to their access to personal data, because the vast majority of research is not important enough to justify the breaches of privacy that are involved.

On the other, privacy protections are so trivial, generally, but especially in Australia, that research seems unlikely to be greatly hamstrung. Universities in N.S.W., Victoria and the two Territories are subject to confused, confusing and toothless legislation. Universities elsewhere are subject to very little. There are in any case so many intentional exceptions and exemptions, so many loop-holes, so many ambiguities, and there is so little prospect of retribution, that 'ethics committees' can, with considerable confidence, continue to approve all but the most grossly embarrassing proposals.

After 30 years as a privacy advocate, and almost as long as a researcher, it would be nice to be able to conclude that our society has reached a level of maturity, and that principles and processes for determining reasonable balances between the interests of research and privacy are well-understood. But unfortunately, it simply isn't so.



