Revision of 10 February 2019
(Insertion of 2 and 3, Expansion of 9,
Re-formatting; Insertion of 1; Insertion of 17; Insertion of 18)
Prepared in support of Guidelines for the Responsible Business Use of AI
© Xamax Consultancy Pty Ltd, 2018-19
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/EC/GAIP.html
IT suppliers, and business and government user organisations, are terrified by the prospect of regulation constraining their activities. Recent claims by purveyors of 'Artificial Intelligence' (AI) notions have been met with widespread revulsion from the public. In an endeavour to calm the public's nerves, a wide variety of organisations have rapidly published 'principles' and 'guidelines' which those organisations claim will mitigate the harm that AI will cause (or would cause, if AI actually delivers on its promises this time around).
On the one hand, collections of 'principles' and 'guidance' will do little or nothing to exercise control over AI research, development and deployment, because they are merely window-dressing. For example:
On the other hand, many of these documents have been developed by well-resourced organisations that have access to researchers, developers and implementors of various AI technologies. Great care must be taken to appreciate sub-texts, to consider why statements are framed as they are, to understand the effects of qualifying words, and to identify aspects that are entirely missing. Provided that appropriate scepticism is brought to the activity, however, there is value to be extracted from these documents.
IBM's CEO, Ginni Rometty, said it's important for people to develop trust in an AI system. For IBM, the purpose of AI will be to aid humans, not replace them. "We say cognitive, not AI, because we are augmenting intelligence," Rometty said. "For most of our businesses and companies, it will not be man or machine... it will be a symbiotic relationship. Our purpose is to augment and really be in service of what humans do."
Those who build AI platforms must be clear about how the platforms are trained and what data was used in training. "The human needs to remain in control of the system," Rometty said. These systems will not have self-awareness or consciousness, she added.
And industry domain matters, Rometty added. With Watson, institutions can combine their decades of knowledge with industry data. "These systems will be most effective when trained with domain knowledge in an industry context," Rometty said.
AI platforms must be built with people in the industry, be they doctors, teachers, or underwriters. And companies must prepare to train human workers on how to use these tools to their advantage.
For example, Watson's oncology advisor is now rolling out in India, China, Thailand, Finland, and the Netherlands. It was trained by the world's best oncologists, IBM claims. "You get this reach when those principles are followed, and that to me is the great promise," Rometty said. "The reason this is worth fighting so strongly to roll out right is you can really solve problems. India has one oncologist for 1,600 patients."
1) Research Goal: The goal of AI research should be to create not undirected intelligence, but beneficial intelligence.
2) Research Funding: Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people's resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?
3) Science-Policy Link: There should be constructive and healthy exchange between AI researchers and policy-makers.
4) Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of AI.
5) Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.
6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.
8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.
13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.
14) Shared Benefit: AI technologies should benefit and empower as many people as possible.
15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.
16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.
19) Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.
20) Importance: Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
(1) Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
(2) A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
(3) For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights; A/IS should always be subordinate to human judgment and control.
A/IS should prioritize human well-being as an outcome in all system designs, using the best available, and widely accepted, well-being metrics as their reference point. [The discussion appears to be primarily concerned with economic wellbeing]
(1) Legislatures/courts should clarify issues of responsibility, culpability, liability, and accountability for A/IS where possible during development and deployment (so that manufacturers and users understand their rights and obligations).
(2) Designers and developers of A/IS should remain aware of, and take into account when relevant, the diversity of existing cultural norms among the groups of users of these A/IS.
(3) Multi-stakeholder ecosystems should be developed to help create norms (which can mature to best practices and laws) where they do not exist ... (including representatives of civil society, law enforcement, insurers, manufacturers, engineers, lawyers, etc.).
(4) Systems for registration and record-keeping should be created so that it is always possible to find out who is legally responsible for a particular A/IS. Manufacturers/operators/owners of A/IS should register key, high-level parameters, including: training data/training environment (if applicable), sensors/real-world data sources, algorithms, process graphs, model features (at various levels), user interfaces, actuators/outputs, and optimization goal/loss function/reward function.
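Such a registration record lends itself to a simple data structure. A minimal sketch in Python follows; the fields track the parameters named in the IEEE recommendation, while the class name, register layout and function names are illustrative assumptions rather than anything the IEEE text prescribes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AISystemRegistration:
    """One record per A/IS, with a field per high-level parameter
    named in the IEEE recommendation."""
    system_id: str                       # key for the public register
    responsible_party: str               # who is legally responsible
    training_data: Optional[str] = None  # training data/environment, if applicable
    data_sources: List[str] = field(default_factory=list)   # sensors / real-world data
    algorithms: List[str] = field(default_factory=list)
    process_graphs: List[str] = field(default_factory=list)
    model_features: List[str] = field(default_factory=list)  # at various levels
    user_interfaces: List[str] = field(default_factory=list)
    actuators_outputs: List[str] = field(default_factory=list)
    optimization_goal: str = ""          # loss function / reward function

# A register keyed by system_id makes "who is legally responsible
# for this A/IS?" a single lookup:
REGISTER: dict = {}

def register(record: AISystemRegistration) -> None:
    REGISTER[record.system_id] = record
```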
Develop new standards that describe measurable, testable levels of transparency, so that systems can be objectively assessed and levels of compliance determined. For designers, such standards will provide a guide for self-assessing transparency during development and suggest mechanisms for improving transparency.
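No such standard existed when this recommendation was made, so the following Python sketch is purely illustrative of what "measurable, testable levels" might look like: a hypothetical ordinal scale and a toy self-assessment. The level names and criteria are assumptions, not drawn from any published standard.

```python
from enum import IntEnum

class TransparencyLevel(IntEnum):
    """A hypothetical ordinal scale; a real standard would define
    its own levels and test procedures."""
    OPAQUE = 0        # no information about how outputs are produced
    DOCUMENTED = 1    # design and training documentation available to auditors
    TRACEABLE = 2     # individual decisions are logged and can be replayed
    EXPLAINABLE = 3   # each decision carries a human-readable rationale

def self_assess(has_documentation: bool, logs_decisions: bool,
                explains_decisions: bool) -> TransparencyLevel:
    """Toy self-assessment: award the highest level all of whose
    criteria, and all lower levels' criteria, are met."""
    level = TransparencyLevel.OPAQUE
    if has_documentation:
        level = TransparencyLevel.DOCUMENTED
        if logs_decisions:
            level = TransparencyLevel.TRACEABLE
            if explains_decisions:
                level = TransparencyLevel.EXPLAINABLE
    return level
```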
Minimize the risks of misuse of A/IS by raising public awareness, providing ethics education, and educating government, lawmakers and enforcement agencies [but with no mention of obligations, sanctions and enforcement]
A WEF document claims that these "core principles" derive from a report by the House of Lords AI Select Committee, which is based on evidence from over 200 industry experts - most of whom presumably have at least a degree of self-interest in the outcome.
The first principle argues that AI should be developed for the common good and benefit of humanity.
The report's authors argue the United Kingdom must actively shape the development and utilisation of AI, and call for "a shared ethical AI framework" that provides clarity as to how this technology can best be used to benefit individuals and society.
They also say the prejudices of the past must not be unwittingly built into automated systems, and urge that such systems "be carefully designed from the beginning, with input from as diverse a group of people as possible".
The second principle demands that AI operates within parameters of intelligibility and fairness, and calls for companies and organisations to improve the intelligibility of their AI systems.
"Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society", the report warns.
Third, the report says artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
It says the ways in which data is gathered and accessed need to be reconsidered. This, the report says, is designed to ensure companies have fair and reasonable access to data, while citizens and consumers can also protect their privacy.
"Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the government ... to review proactively the use and potential monopolisation of data by big technology companies operating in the UK".
The fourth principle stipulates all people should have the right to be educated as well as be enabled to flourish mentally, emotionally and economically alongside artificial intelligence.
For children, this means learning about using and working alongside AI from an early age. For adults, the report calls on government to invest in skills and training to negate the disruption caused by AI in the jobs market.
Fifth, and aligning with concerns around killer robots, the report says the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
"There is a significant risk that well-intended AI research will be misused in ways which harm people," the report says. "AI researchers and developers must consider the ethical implications of their work".
Advances in AI have the potential to improve outcomes, enhance quality, and reduce costs in such safety-critical areas as healthcare and transportation. Effective and careful applications of pattern recognition, automated decision making, and robotic systems show promise for enhancing the quality of life and preventing thousands of needless deaths.
However, where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.
We will pursue studies and best practices around the fielding of AI in safety-critical application areas.
AI has the potential to provide societal value by recognizing patterns and drawing inferences from large amounts of data. Data can be harnessed to develop useful diagnostic systems and recommendation engines, and to support people in making breakthroughs in such areas as biomedicine, public health, safety, criminal justice, education, and sustainability.
While such results promise to provide real benefits, we need to be sensitive to the possibility that there are hidden assumptions and biases in data, and therefore in the systems built from that data - in addition to a wide range of other system choices which can be impacted by biases, assumptions, and limits. This can lead to actions and recommendations that replicate those biases, and have serious blind spots.
Researchers, officials, and the public should be sensitive to these possibilities and we should seek to develop methods that detect and correct those errors and biases, not replicate them. We also need to work to develop systems that can explain the rationale for inferences.
We will pursue opportunities to develop best practices around the development and fielding of fair, explainable, and accountable AI systems.
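One common starting point for methods that detect such biases is a demographic-parity check on outcome rates across groups. The sketch below, in Python with synthetic data, is illustrative only; the Partnership's text does not prescribe this (or any) metric, and the group labels and threshold semantics are assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, favourable: bool) pairs.
    Returns the favourable-outcome rate per group."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest gap in favourable-outcome rate between any two groups;
    0.0 is parity, larger values flag a disparity to investigate."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Synthetic audit: group A is favoured 2/3 of the time, group B only
# 1/3, so the gap is 1/3.
audit = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
assert abs(demographic_parity_gap(audit) - 1/3) < 1e-9
```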
AI advances will undoubtedly have multiple influences on the distribution of jobs and nature of work. While advances promise to inject great value into the economy, they can also be the source of disruptions as new kinds of work are created and other types of work become less needed due to automation.
Discussions are rising on the best approaches to minimizing potential disruptions, making sure that the fruits of AI advances are widely shared and competition and innovation are encouraged and not stifled. We seek to study and understand best paths forward, and play a role in this discussion.
A promising area of AI is the design of systems that augment the perception, cognition, and problem-solving abilities of people. Examples include the use of AI technologies to help physicians make more timely and accurate diagnoses and assistance provided to drivers of cars to help them to avoid dangerous situations and crashes.
Opportunities for R&D and for the development of best practices on AI-human collaboration include methods that provide people with clarity about the understandings and confidence that AI systems have about situations, means for coordinating human and AI contributions to problem solving, and enabling AI systems to work with people to resolve uncertainties about human goals.
AI advances will touch people and society in numerous ways, including potential influences on privacy, democracy, criminal justice, and human rights. For example, while technologies that personalize information and that assist people with recommendations can provide people with valuable assistance, they could also inadvertently or deliberately manipulate people and influence opinions.
We seek to promote thoughtful collaboration and open dialogue about the potential subtle and salient influences of AI on people and society.
AI offers great potential for promoting the public good, for example in the realms of education, housing, public health, and sustainability. We see great value in collaborating with public and private organizations, including academia, scientific societies, NGOs, social entrepreneurs, and interested private citizens to promote discussions and catalyze efforts to address society's most pressing challenges.
Some of these projects may address deep societal challenges and will be moonshots - ambitious big bets that could have far-reaching impacts. Others may be creative ideas that could quickly produce positive results by harnessing AI advances.
We will assess AI applications in view of the following objectives. We believe that AI should:
1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.
AI also enhances our ability to understand the meaning of content at scale. We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.
2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.
4. Be accountable to people.

We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control. [A sketch of such a feedback-and-appeal gate follows this section.]
5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.
6. Uphold high standards of scientific excellence.

Technological innovation is rooted in the scientific method and a commitment to open inquiry, intellectual rigor, integrity, and collaboration. AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences. We aspire to high standards of scientific excellence as we work to progress AI development.
We will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.
7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:
In addition to the above objectives, we will not design or deploy AI in the following application areas:
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
o--o--o--o--o--o--o--o
Google's announcement was met with immediate scepticism (Newcomer 2018): "[With the exception of not working on 'technologies whose principal purpose or implementation is to cause or directly facilitate injury to people'], the rest of the company's 'principles' are peppered with lawyerly hedging and vague commitments ... Without promising independent oversight, Google is just putting a new, less persuasive, spin on an old principle it's tried to bury: 'Don't be evil'."
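Scepticism notwithstanding, the fourth objective above ('accountable to people') is one of the few that maps directly onto a concrete mechanism: gating low-confidence automated decisions behind human review, with an appeal path that always reaches a human. A minimal sketch in Python follows; the threshold, the names, and the record format are illustrative assumptions, not anything Google has published.

```python
from typing import Callable

def decide_with_oversight(model_decision: bool, confidence: float,
                          human_review: Callable[[], bool],
                          threshold: float = 0.9) -> dict:
    """Accept the automated decision only above a confidence threshold;
    otherwise defer to a human reviewer. Every outcome records who
    decided, so it can be explained and appealed later."""
    if confidence >= threshold:
        return {"decision": model_decision, "decided_by": "model",
                "confidence": confidence}
    return {"decision": human_review(), "decided_by": "human",
            "confidence": confidence}

def appeal(original: dict, human_review: Callable[[], bool]) -> dict:
    """An appeal always escalates to a human, whoever decided first."""
    return {"decision": human_review(), "decided_by": "human-on-appeal",
            "appealed_from": original}

# e.g. a low-confidence automated refusal is routed to a reviewer:
outcome = decide_with_oversight(False, 0.55, human_review=lambda: True)
assert outcome["decided_by"] == "human"
```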
We're excited about the opportunities that AI brings to people and its ability to help us achieve more. But it's also important to us that we build upon an ethical foundation. We believe that AI technology should embody the following four principles:
New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them.
We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.
Ethicality of Purpose is driven by the EU Charter of Fundamental Rights:
E1 Beneficence: Do Good
E2 Non-maleficence: Do no Harm
E3 Autonomy: Preserve Human Agency
E4 Justice: Be Fair
E5 Explicability: Operate transparently
Achieving Trustworthy AI means that the general and abstract principles need to be mapped into concrete requirements for AI systems and applications. The ten requirements listed below have been derived from the rights, principles and values of Chapter I. While all are equally important, the specific context of each application domain and industry needs to be taken into account in applying them. (A self-assessment sketch follows the list.)
P0 Perform impact assessment (p.28)
P1 Accountability
P2 Data Governance
P3 Design for all
P4 Governance of AI Autonomy (Human oversight)
P5 Non-Discrimination
P6 Respect for (& Enhancement of) Human Autonomy [in final section, no. 7]
P7 Respect for Privacy [in final section, no. 6]
P8 Robustness
P9 Safety
P10 Transparency
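Because the draft treats these as requirements to be checked against each system in its context, they reduce naturally to a checklist. The following Python sketch is illustrative only; the EC draft prescribes the requirements themselves, not any encoding or tooling, and the function names are assumptions.

```python
# The P0 impact assessment plus the ten requirements, as listed above.
REQUIREMENTS = {
    "P0":  "Impact assessment performed",
    "P1":  "Accountability",
    "P2":  "Data governance",
    "P3":  "Design for all",
    "P4":  "Governance of AI autonomy (human oversight)",
    "P5":  "Non-discrimination",
    "P6":  "Respect for (and enhancement of) human autonomy",
    "P7":  "Respect for privacy",
    "P8":  "Robustness",
    "P9":  "Safety",
    "P10": "Transparency",
}

def outstanding(satisfied: set) -> list:
    """List the requirements a system has not yet addressed."""
    return [f"{key}: {name}" for key, name in REQUIREMENTS.items()
            if key not in satisfied]

# e.g. a system that has only done an impact assessment and a privacy
# review still has nine requirements outstanding:
print(outstanding({"P0", "P7"}))
```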
ACM (2017) 'Statement on Algorithmic Transparency and Accountability' Association for Computing Machinery, January 2017, at https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
Asimov I. (1942) 'Runaround' (originally published in 1942), reprinted in Asimov I. 'I, Robot' Grafton Books, London, 1968, pp. 33-51
Bolter J.D. (1986) 'Turing's Man: Western Culture in the Computer Age' University of North Carolina Press, 1984; Pelican, 1986
BS (2016) 'Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems' British Standards Institute, 2016
Clarke R. (1989) 'Knowledge-Based Expert Systems: Risk Factors and Potentially Profitable Application Area', Xamax Consultancy Pty Ltd, January 1989, at http://www.rogerclarke.com/SOS/KBTE.html
Clarke R. (1993-94) 'Asimov's Laws of Robotics: Implications for Information Technology' In two parts, in IEEE Computer 26,12 (December 1993) 53-61, and 27,1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html
Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Proc. Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, PrePrint at http://www.rogerclarke.com/SOS/HAH0505.html
CLA-EP (2016) 'Recommendations on Civil Law Rules on Robotics' Committee on Legal Affairs of the European Parliament, 31 May 2016, at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN
Devlin H. (2016) 'Do no harm, don't discriminate: official guidance issued on robot ethics' The Guardian, 18 September 2016, at https://www.theguardian.com/technology/2016/sep/18/official-guidance-robot-ethics-british-standards-institute
EC (2018) 'Draft Ethics guidelines for trustworthy AI' European Commission, 18 December 2018, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=57112
FLI (2017) 'Asilomar AI Principles' Future of Life Institute, January 2017, at https://futureoflife.org/ai-principles/?cn-reloaded=1
GEFA (2016) 'Position on Robotics and AI' The Greens / European Free Alliance Digital Working Group, November 2016, at https://juliareda.eu/wp-content/uploads/2017/02/Green-Digital-Working-Group-Position-on-Robotics-and-Artificial-Intelligence-2016-11-22.pdf
Google (2018) 'Objectives for AI applications' Google, June 2018, at https://www.blog.google/technology/ai/ai-principles/
Hirano (2017) 'AI R&D guidelines' Proc. OECD Conf. on AI developments and applications, October 2017, at http://www.oecd.org/going-digital/ai-intelligent-machines-smart-policies/conference-agenda/ai-intelligent-machines-smart-policies-hirano.pdf
HOL (2018) 'AI in the UK: ready, willing and able?' Select Committee on Artificial Intelligence, House of Lords, April 2018, at https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
ICO (2017) 'Big data, artificial intelligence, machine learning and data protection' UK Information Commissioner's Office, Discussion Paper v.2.2, September 2017, at https://ico.org.uk/for-organisations/guide-to-data-protection/big-data/
IEEE (2017) 'Ethically Aligned Design' Version 2, IEEE, December 2017, at http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
ISOC (2017) 'Artificial Intelligence and Machine Learning: Policy Paper' Internet Society, April 2017, at https://www.internetsociety.org/resources/doc/2017/artificial-intelligence-and-machine-learning-policy-paper/
ITIC (2017) 'AI Policy Principles' Information Technology Industry Council, undated but apparently of October 2017, at https://www.itic.org/resources/AI-Policy-Principles-FullReport2.pdf
MS (2018) 'Microsoft AI principles' Microsoft, August 2018, at https://www.microsoft.com/en-us/ai/our-approach-to-ai
Newcomer E. (2018) 'What Google's AI Principles Left Out: We're in a golden age for hollow corporate statements sold as high-minded ethical treatises' Bloomberg, 8 June 2018, at https://www.bloomberg.com/news/articles/2018-06-08/what-google-s-ai-principles-left-out
Pichai S. (2018) 'AI at Google: our principles' Google Blog, 7 Jun 2018, at https://www.blog.google/technology/ai/ai-principles/
PoAI (2018) 'Our Work (Thematic Pillars)' Partnership on AI, April 2018, at https://www.partnershiponai.org/about/#pillar-1
Rayome A.D. (2017) 'Guiding principles for ethical AI, from IBM CEO Ginni Rometty' TechRepublic, 17 January 2017, at https://www.techrepublic.com/article/3-guiding-principles-for-ethical-ai-from-ibm-ceo-ginni-rometty/
Smith R. (2018) '5 core principles to keep AI ethical' World Economic Forum, 19 April 2018, at https://www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/
TPV (2018) 'Universal Guidelines for Artificial Intelligence' The Public Voice, October 2018, at https://thepublicvoice.org/ai-universal-guidelines/
UGU (2017) 'Top 10 Principles for Ethical AI' UNI Global Union, December 2017, at http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
Wyndham J. (1932) 'The Lost Machine' (originally published in 1932), reprinted in A. Wells (Ed.) 'The Best of John Wyndham' Sphere Books, London, 1973, pp. 13-36, and in Asimov I., Warrick P.S. & Greenberg M.H. (Eds.) 'Machines That Think' Holt, Rinehart and Winston, 1983, pp. 29-49
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University. He has also spent many years on the Board of the Australian Privacy Foundation, and is Company Secretary of the Internet Society of Australia.