Technology Regulation: Risk-based approaches to Artificial Intelligence governance, Part 1


Post authored by Prateek Sibal

In the five years between 2015 and 2020, 117 initiatives published AI ethics principles worldwide. Despite a skewed geographical scope, with 91 of these initiatives emerging in Europe and North America, the proliferation of such initiatives paves the way for building global consensus on AI governance. Notably, the 37 OECD Member States have adopted the OECD AI Recommendation, the G20 has endorsed these principles, and the Global Partnership on AI is operationalising them. In the UN system, the United Nations Educational, Scientific and Cultural Organization (UNESCO) is developing a Recommendation on the Ethics of AI that 193 countries may adopt in 2021.

An analysis of different principles reveals a high-level consensus around eight themes: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. At the same time, ethical principles are criticised for lacking enforcement mechanisms. Companies often commit to AI ethics principles to improve their public image with little follow-up on implementing them, an exercise termed "ethics washing". Evidence also suggests that knowledge of ethical tenets has little or no effect on whether software engineers factor ethical principles into the development of products or services.

Defining principles is essential, but it is only the first step for ethical AI governance. There is a need for mid-level norms, standards and guidelines at the international level that may inform regional or national regulation to translate principles into practice. This two-part blog will discuss the need for AI governance to evolve past the ‘ethics formation stage’ into concrete and tangible steps such as developing technical benchmarks and adopting risk-based regulation for AI systems.

Part one of the blog has three sections. The first section discusses some of the technical advances in AI technologies in recent years. These advances have led to new commercial applications with some potentially adverse social implications. Section two discusses the challenges of AI governance and presents a framework for mitigating the adverse implications of technology on society. Finally, section three discusses the role of technical benchmarks for evaluating AI systems. Part two of the blog will contain further discussion on risk assessment approaches to help identify the AI applications and contexts that need to be regulated. It will also discuss the next steps for national initiatives for AI governance.

The blog follows the definition of an AI system proposed by the OECD’s AI Experts Group. They describe an AI system as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. It uses machine or human-based inputs to perceive real or virtual environments, abstract such perceptions into models (in an automated manner, e.g. with ML or manually), and use model inference to formulate options for information or action. AI systems are designed to operate with varying levels of autonomy.”

Recent Advances in AI Technologies

Artificial Intelligence is developing rapidly. It is important to lay down a broad overview of AI developments, which may have profound and potentially adverse impacts on individuals and society. The 2021 AI Index report notes four crucial technical advances that hastened the commercialisation of AI technologies:

  • AI-Generated Content: AI systems can generate high-quality text, audio and visual content to a level at which it is difficult for humans to distinguish between synthetic and non-synthetic content.
  • Image Processing: Computer vision, a branch of computer science that “works on enabling computers to see, identify and process images in the same way that human vision does, and then provide appropriate output”, has seen immense progress in the past decade and is fast industrialising in applications that include autonomous vehicles.
  • Language Processing: Natural Language Processing (NLP) is a branch of computer science “concerned with giving computers the ability to understand the text and spoken words in much the same way human beings can”. NLP has advanced such that AI systems with language capabilities now have meaningful economic impact through live translations, captioning, and virtual voice assistants.
  • Healthcare and biology: DeepMind’s AlphaFold solved the decades-old protein folding problem using machine learning techniques. This breakthrough will allow the study of protein structure and will contribute to drug discovery.

These technological advances have social implications. For instance, the technology for generating synthetic faces has rapidly improved. As shown in Figure 1, in 2014, AI systems produced grainy faces, but by 2017, they were generating realistic synthetic faces. Such AI systems have led to the proliferation of ‘deepfake’ pornography that overwhelmingly targets women and has the potential to erode people’s trust in the information and videos they encounter online. Some actors misuse deepfake technology to spread online disinformation, with adverse implications for democracy and political stability. Such developments have made AI governance a pressing matter.

Challenges of AI Governance

In this blog, AI governance is understood as the development and application by governments, the private sector, and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape AI’s evolution and use. As highlighted in the previous section, the rapid advancements in the field of AI technologies have brought the need for better AI governance to the forefront.

In thinking about AI governance, a conundrum that preoccupies many governments worldwide is how to enact regulation that does not stifle innovation while still providing adequate safeguards to protect human rights and fundamental freedoms.

Technology regulation is complicated because until a technology has been extensively developed and widely used, its impact on society is difficult to predict. However, once it is deeply entrenched and its effect on society is understood better, it becomes more challenging to regulate the technology. This tension between free and unimpeded technology development and regulating adverse implications is termed the Collingridge dilemma.

David Collingridge, the author of The Social Control of Technology, noted that when regulatory decisions have to be made in ignorance of a technology’s social impact, continuous monitoring of that impact can help correct unexpected consequences early. Collingridge’s guidelines for decision-making under ignorance can inform AI governance as well. These include choosing technology options with:

  • Low failure costs: Selecting options with low error costs, i.e. if a policy or regulation fails to achieve its intended objective, the costs associated with failure are limited.
  • Quick to correct: Selecting technologies that can be corrected quickly after unanticipated problems are discovered.
  • Low cost of applying remedy: Preferring options with a low fixed cost and a higher variable cost over those with a higher fixed cost, so that remedies are cheaper to apply, and
  • Continuous monitoring: Cost-effective and efficient monitoring can ensure the discovery of unpredicted consequences quickly.

For instance, the requirements around transparency in AI systems provide information for monitoring the impact of AI systems on society. Similarly, risk assessments of AI systems offer a pre-emptive form of oversight over technology development and use, which can help minimise potential social harms.

Technical benchmarks for evaluating AI systems

To address ethical problems in algorithmic decision-making related to bias, discrimination, and the lack of transparency and accountability, quantitative benchmarks are needed to assess AI systems’ performance against these ethical principles.

The Institute of Electrical and Electronics Engineers (IEEE), through its Global Initiative on Ethics of Autonomous and Intelligent Systems, is developing technical standards, including on bias in AI systems. These standards describe “specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms”. Similarly, in the United States, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI based on principles that call for AI systems to provide reasons for their outputs in a manner that is understandable to individual users, to explain the process used for generating the output, and to deliver a decision only when the system has sufficient confidence in it.
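To make this concrete, here is a minimal sketch of what a quantitative bias benchmark might compute for a binary classifier: per-group selection rates, the demographic parity difference, and the disparate impact ratio. The data, group labels and function names are hypothetical illustrations and are not drawn from the IEEE or NIST standards mentioned above.

```python
# Minimal sketch of a quantitative bias check for a binary classifier.
# The data, group labels and thresholds are hypothetical examples,
# not prescribed by the IEEE or NIST standards discussed above.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive (favourable) decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(rates):
    """Gap between the highest and lowest selection rates (0 = parity)."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # 1 = favourable decision (e.g. loan approved), 0 = unfavourable.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(preds, groups)
    print("Selection rates:", rates)
    print("Demographic parity difference:", demographic_parity_difference(rates))
    print("Disparate impact ratio:", disparate_impact_ratio(rates))
```

In practice such metrics would be computed on large audit datasets, and a standard would also have to specify which metrics, which groups, and what thresholds count as acceptable.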

For example, significant progress has been made in introducing benchmarks for the regulation of facial recognition technology. Facial recognition systems have a large commercial market and are used for various tasks in law enforcement and border control, including verifying visa photos, matching photos against criminal databases, and detecting child abuse images. Such systems have caused significant concern because high error rates in detecting faces can impinge on human rights: biases in such systems have adverse consequences for individuals who are denied entry at borders or wrongfully incarcerated. In the United States, the National Institute of Standards and Technology’s Face Recognition Vendor Test provides a benchmark for comparing the performance of commercially available facial recognition systems by running their algorithms on different image datasets.
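As a rough illustration of the kind of error rates such a benchmark reports, the sketch below computes a false match rate (impostor pairs wrongly accepted) and a false non-match rate (genuine pairs wrongly rejected) at a fixed decision threshold. The similarity scores, the threshold and the function names are invented for illustration and do not reproduce the Face Recognition Vendor Test methodology.

```python
# Illustrative computation of two error rates commonly reported for
# face recognition systems: the false match rate (FMR) and the false
# non-match rate (FNMR). Scores and threshold are hypothetical.

def error_rates(genuine_scores, impostor_scores, threshold):
    """FMR: impostor pairs wrongly accepted; FNMR: genuine pairs wrongly rejected."""
    false_matches = sum(s >= threshold for s in impostor_scores)
    false_non_matches = sum(s < threshold for s in genuine_scores)
    fmr = false_matches / len(impostor_scores)
    fnmr = false_non_matches / len(genuine_scores)
    return fmr, fnmr

if __name__ == "__main__":
    genuine = [0.91, 0.88, 0.67, 0.95, 0.72]   # same-person comparisons
    impostor = [0.12, 0.55, 0.31, 0.08, 0.61]  # different-person comparisons
    fmr, fnmr = error_rates(genuine, impostor, threshold=0.6)
    print(f"FMR:  {fmr:.2f}")   # fraction of impostor pairs accepted
    print(f"FNMR: {fnmr:.2f}")  # fraction of genuine pairs rejected
```

Benchmarks of this kind typically report how such error rates vary across demographic groups and datasets, which is what makes them useful for detecting bias.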

The progress in defining benchmarks for ethical principles needs to be complemented by risk assessments of AI systems to pre-empt potentially adverse social impact, in line with the Collingridge dilemma discussed in the previous section. Risk assessments allow the categorisation of AI applications by their risk ratings. They can help develop risk-proportionate regulation for AI systems instead of blanket rules that may place an unnecessary compliance burden on technology development. The next blog in this two-part series will engage with potential risk-based approaches to AI regulation.

The author would like to thank Jhalak Kakkar and Nidhi Singh for their helpful feedback.

Originally published at http://ccgnludelhi.wordpress.com on June 18, 2021.
