New Artificial Intelligence Regulations On the Horizon

By Dave McKay | November 4, 2021

The Need for Legislation

Software vendors and technology manufacturers seem to be racing each other to include AI in their products. Service providers, social media platforms, and online shopping giants all use AI, too. Amazon uses AI to suggest products based on your purchase, search, and viewing history. YouTube does something similar to compile video suggestions for you. AI also touches our lives in more subtle, behind-the-scenes ways. For example, insurance companies are using AI to predict customer needs, detect fraud, and assess the risk posed by potential customers.

Big data isn’t new. Neither is the phrase “data is the new oil.” Data, like oil, has to be refined and processed before it can be used. AI is the perfect technology to work with these almost unmanageable volumes of data. AI can search for trends, make inferences, look for patterns of activity or behaviour, and much more. As the use of AI steadily increases, the way it touches and affects us grows too.

Whether or not you own any AI-enabled products, AI is being used by companies you have to interact with, and its capabilities are growing rapidly. The smarter the systems get, the more attractive they become to manufacturers and service providers, and so their deployment increases. With increased deployment, it is inevitable that we (or, more accurately, our data) will be handled by AI systems that make decisions affecting us in material ways.

In response, governments and legislative bodies are moving to put in place governance and controls to try to protect the rights of human data subjects who have their data processed by AI systems.

The Broad Scope of Artificial Intelligence

Artificial intelligence doesn’t mean the software is self-aware, conscious, or sentient. It means that the software can perform some function that would normally require an intelligent human being. AI systems tend to be very good at specific tasks, but outside of their realm of expertise, they flounder. You wouldn’t get a sensible response from a self-driving car if you asked it about investing in stocks and shares, and you wouldn’t want a tech support chatbot to pilot your self-driving car.

Even within that narrower definition of AI, the capabilities of some systems can seem uncanny. Working with anonymized data, an AI system outperformed six radiologists in reading mammograms. IBM’s Project Debater is an AI system designed to debate with humans. To do that, it must write and deliver a four-minute talk taking one side of an argument. It then listens to its opponent’s talk and creates a two-minute summary of closing remarks.

In 2018, it won one debate and almost tied in another. It lost the match, but as it was competing against world-class debating champions, just holding its own is a remarkable achievement. The nuance of complex discourse is a tremendously difficult thing to capture in software.

News coverage of AI tends to report on the interesting and notable, like IBM’s Watson winning at Jeopardy!, or the strange-but-true type of story, like Hanson Robotics’ uber-chatbot Sophia, which has been granted citizenship in Saudi Arabia.

More ominous reports concern matters of life and death. A United Nations Security Council report dated March 8, 2021, contains an account of the use of Lethal Autonomous Weapon Systems (AI-powered armed drones) by the Libyan government against a breakaway military faction. Weaponized drones are regularly used worldwide. The difference is that these were not under remote control with a human making the decisions. They were programmed to “attack targets without requiring data connectivity between the operator and the munition.”

The New Legislation

Europe

Some legislation already considers and addresses AI. Europe’s General Data Protection Regulation (GDPR) does this in several ways. Interestingly, the GDPR specifically defines a data subject as a natural person. A data subject cannot be an artificial entity of any kind, so AI systems themselves, no matter how lifelike or close to self-awareness and sentience, are not awarded any rights under the GDPR. More pertinently, the GDPR says that the processing of personally identifiable information (personal data, in other words) by automated means is governed by its Article 22 and a collection of associated recitals.

These give the data subject detailed rights. Any decision made by an automated system can be challenged by the data subject, and the review must be performed by a human. The review may, of course, reach the same outcome, but the GDPR requires the company performing the processing to explain the rationale behind it. At least you’ll know there is a reason for the decision and that it isn’t just a glitch in an algorithm.

The European Union has proposed its own, supplementary set of regulations for AI. These apply to users of AI located within the E.U. and to providers of systems that will be used by E.U. citizens, regardless of whether the provider is located within the E.U. They also apply to users and providers outside of the E.U. if the output of the system is used within the E.U.

The regulations rank AI systems according to their risk category:

  1. Prohibited: AI systems considered to contravene E.U. values or to violate fundamental rights will be banned outright.
  2. High risk: uses of AI systems that create a high risk.
  3. Limited risk: uses of AI systems that create a limited risk, such as persuasion or manipulation by a chatbot.
  4. Minimal risk: uses of AI systems that create minimal risk.

Limited- and minimal-risk AI systems can be created and used without additional legal obligations, although providers will be expected to adhere to a voluntary code of conduct. High-risk AI system providers must meet specific regulatory requirements both in the development phase and after launch. For example, the quality of the data sets used to train machine learning systems must be quantified and recorded.
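
The proposal doesn’t dictate what form that record must take. Purely as an illustration, here is a minimal Python sketch, assuming a hypothetical CSV training set (“training_data.csv”) and an arbitrary choice of metrics, that quantifies basic data-set quality figures (row count, missing values, class balance) and records them to a JSON file that could sit in a compliance record.

    import csv
    import json
    from collections import Counter
    from datetime import datetime, timezone

    def dataset_quality_report(csv_path, label_column):
        """Gather simple quality metrics for a training data set.

        The metrics (row count, missing values per column, class balance)
        are an illustrative choice, not a regulatory checklist.
        """
        with open(csv_path, newline="") as f:
            rows = list(csv.DictReader(f))

        missing = Counter()
        labels = Counter()
        for row in rows:
            for column, value in row.items():
                if value is None or value.strip() == "":
                    missing[column] += 1
            labels[row[label_column]] += 1

        return {
            "dataset": csv_path,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "row_count": len(rows),
            "missing_values_per_column": dict(missing),
            "class_balance": dict(labels),
        }

    # "training_data.csv" and its "outcome" label column are placeholders.
    report = dataset_quality_report("training_data.csv", "outcome")
    with open("dataset_quality_report.json", "w") as f:
        json.dump(report, f, indent=2)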

Human oversight and other governance measures must be applied, too. Some form of logging must be put in place to allow audits and compliance checks and to furnish users with answers to questions they have a right to raise. Some obligations may also apply to distributors and importers in the AI supply chain.
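
Again, no particular logging format is mandated. As a hedged sketch only, the following Python appends each automated decision, with its inputs and rationale, to an append-only JSON Lines log that a human reviewer or auditor could later consult; the file name, field names, and insurance-flavored usage example are invented for illustration.

    import json
    from datetime import datetime, timezone

    # The log file name and record fields are illustrative choices,
    # not a format mandated by the proposed regulations.
    AUDIT_LOG = "decision_audit.log"

    def log_decision(subject_id, inputs, decision, rationale):
        """Append one automated decision to a JSON Lines audit log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "subject_id": subject_id,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
            "reviewed_by_human": False,  # flipped to True after human review
        }
        with open(AUDIT_LOG, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical usage: an insurer's risk model declines an application.
    log_decision(
        subject_id="subject-1234",
        inputs={"age_band": "30-39", "claims_last_5_years": 3},
        decision="application declined",
        rationale="claim frequency above underwriting threshold",
    )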

The United Kingdom

Post-Brexit, the United Kingdom is operating under its own version of the GDPR, contained in the U.K.’s Data Protection Act 2018. That gives it the freedom to implement changes. Data processing reforms are being discussed, as described in a consultation document available on the U.K. government website.

One of the objectives of the changes is the “monitoring, detecting or correcting bias in relation to developing AI systems.” This is to be followed by a National AI Strategy.

The United States

In the U.S., the situation is more complicated. States such as Alabama, Illinois, and California have enacted legislation, or have legislation pending, related to the governance of AI. The CCPA, the California Consumer Privacy Act, didn’t specifically address AI. The California Privacy Rights Act (CPRA) expands and amends the CCPA.

The CPRA provides data subjects the right to opt out of processing performed by automated decision-making technology, such as AI systems. When asked by data subjects, organizations must respond with “meaningful information about the logic involved in such decision-making processes,” together with a description of the likely outcome of the processing on the individual.
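
How an organization honors such an opt-out is left to the organization. One minimal sketch, assuming a hypothetical preference store and placeholder handler functions (none of which are part of any real CPRA-defined API), is to consult the stored preference before routing a request to the automated system, falling back to a human review queue when the data subject has opted out.

    # A hypothetical opt-out gate. The preference store and both handler
    # functions are placeholders invented for this illustration.
    OPT_OUTS = {"subject-1234"}  # stands in for a real preference store

    def enqueue_for_human_review(subject_id, request):
        """Placeholder: hand the request to a human review queue."""
        return {"handled_by": "human", "subject": subject_id, "request": request}

    def automated_decision(subject_id, request):
        """Placeholder: pass the request to the automated system."""
        return {"handled_by": "ai_model", "subject": subject_id, "request": request}

    def route_request(subject_id, request):
        """Honor a stored opt-out before any automated processing."""
        if subject_id in OPT_OUTS:
            return enqueue_for_human_review(subject_id, request)
        return automated_decision(subject_id, request)

    print(route_request("subject-1234", {"kind": "credit decision"}))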

There are calls for a nationwide “bill of rights” to address the possible risks and bias in the processing performed by AI systems, as well as other concerns, such as the widespread surveillance made possible by facial recognition systems. The White House’s Office of Science and Technology Policy has launched a fact-finding mission to that effect.

Eric Lander—President Biden’s chief science adviser—has described the need to enumerate data subjects’ rights regarding data processing by AI, and the need to uphold and safeguard those rights and the privacy of the individual.

Lander said:

“Enumerating the rights is just a first step. What might we do to protect them? Possibilities include the federal government refusing to buy software or technology products that fail to respect these rights, requiring federal contractors to use technologies that adhere to this ‘bill of rights,’ or adopting new laws and regulations to fill gaps.”

Needless to say, the regulation of AI is a complex issue. It’s unlikely that the first round of legislation will be perfect. This is going to be an iterative process, quite possibly always one step behind the technology.
