Berlin Strategy is a blog by Stefan Steinicke. He analyzes political, economic, technological, and societal macrotrends and their effects on global power relations.

Defining and understanding AI

Of the many formal and informal definitions of AI, most revolve around the appearance of intelligence. The famed mathematician Alan Turing, for example, defined machines as intelligent when a human interlocutor could not distinguish whether they were interacting with the machine or with another human. These days, definitions often require the ability to act autonomously, e.g., to implement decisions based on the system's own analysis, or they limit the intelligence to specific narrow domains, so an AI can show intelligence in playing chess or in knowing which films somebody might like, but the same AI is not expected to be able to do both.

The European Commission's Communication on AI for Europe defined AI as "systems that display intelligent behavior by analyzing their environment and taking actions - with some degree of autonomy - to achieve specific goals."

Earlier this year, the Vermont General Assembly's Artificial Intelligence Task Force adopted a longer definition of AI systems, proposed by the European Commission's High-Level Expert Group on AI: "systems (usually software) capable of perceiving an environment through data acquisition and then processing and interpreting the derived information to take action(s) or imitate intelligent behavior given a specified goal. AI systems can also learn/adapt their behavior by analyzing how the environment is affected by prior actions."

The German Bundestag's committee of inquiry in 2018 declared that "AI is a paradigm shift — from calculating to cognitive information technology. As AI can learn, it can also apply previously gained insights to new contexts. Hence, AI can independently plan processes, project outcomes, and interact with humans."

Instead of proposing yet another definition, we should consider why AI is so difficult to define, and why that matters. AI has become an umbrella term that includes a wide range of technologies and an even wider range of application areas. It includes concrete and familiar items such as smart thermostats, as well as imaginary future technologies that might never come to exist, such as self-aware androids. In other words, it can refer to products that exist or that we might reasonably expect to be developed in the next few years, as well as to those that belong in sci-fi movies. All of these technologies are AI, and this inclusivity presents a problem for both public debate and policy formulation because, depending on which AI you have in mind, its impacts and desirability can vary enormously. But in order to have a productive debate, do we need a shared understanding of what is being considered?

One approach could be to lock AI into a formal definition, but a single definition that satisfies everyone remains elusive. Many people are likely to continue to talk about "AI" in the same way as before, regardless of whether a common, single definition is adopted.

Another approach could be to talk about these technologies with greater specificity. Instead of using the term “AI,” we could refer to specific techniques such as machine learning, specific applications such as facial recognition, or specific contexts and purposes such as identification of police suspects in public places. These examples can be combined for even greater specificity, e.g., “facial recognition tools trained by machine learning to identify suspects in public places.” This approach depends upon a range of other well-defined technologies and concepts, even if some of these remain subject to debate, such as the boundaries between public and private spaces. It requires a deeper understanding of AI techniques and applications, and the differences between them, but could allow for more precise and productive discussions.

These linguistic points might appear abstract but, in policy, they quickly become quite concrete. For example, the European Commission’s White Paper on AI suggests defining high-risk applications by a combination of techniques and applications. But because AI products deemed high-risk under this approach will face greater burdens in getting to market, the precise definitions of risk will be subject to serious debate.

It seems clear that dealing with the risks of AI is more important than the need for a single, formal definition that suits every purpose. Therefore, we set out a few key messages below that emerged from our discussions of AI policy in our different legislative contexts. These messages rest on a fundamental assumption that the advent of AI and its incorporation into all our daily lives has revolutionary potential. The broad application of electricity — from transportation to health care, agriculture to manufacturing — changed the way humans live, work, and think. AI might prove as game-changing to all life on this planet as the spread of electricity. In a best-case scenario, it could empower humans to grow and evolve in many ways. If applied well, it could solve some of today's most pressing socioeconomic and environmental challenges. In a worst-case scenario, the development of full AI could spell the destruction of our ecosystems and the end of the human race, as Stephen Hawking put it.

1. Everybody needs to get up to speed on AI

AI will impact many sectors that have so far been left relatively untouched by previous waves of technological disruption, and the range of impacts will be of interest to almost every legislative committee. For this reason, AI policy debates are not limited to policymakers following industry, technology, digital affairs, and other niche areas. Given the breadth of the impact of AI, everyone in the legislative community would benefit from getting up to speed on key AI developments and reflecting upon how they can prepare an appropriate response. If we want to empower people, we must educate them. The Vermont Artificial Intelligence Task Force "believes that an educated populous is the best way to prepare the state for the growth of artificial intelligence." Public education and engagement about the impact of AI are important so that citizens can hold policymakers accountable.

2. Protect human agency

The structures and systems of our societies increasingly rely on data, and where data flows, AI follows. As AI evolves, it will become more central in decision-making, but also more complex. As Henry Kissinger suggests, "AI may soon be able to optimize situations in ways that are at least marginally different from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI's decision-making surpass the explanatory powers of human language and reason?" This question is especially pertinent for policymakers who need to help citizens navigate their own decision-making in the age of AI while ensuring that algorithms with decision-making power are held to account.

3. Address uneven distribution of knowledge

AI can identify patterns that humans do not see and would seldom consider. For example, by analyzing large data banks, algorithms have found a correlation between those who charge their devices overnight and those who pay bills on time. However, while reservoirs of data are collected about individuals, individuals themselves are often unaware of the scale at which this data is collected, combined, and used to make predictions about their behavior and preferences. This significant imbalance - between those who control troves of data about many people and the individual subjects of this data - could present issues for consumer protection, and, given the complexity and opacity of AI systems, abuses may be difficult to identify.

4. Stay vigilant and focus on supporting the best AI possible

We often hear about the grave risks, even existential threats, of AI, as well as calls for optimism and the need to embrace the technology in order to reap the rewards. While it can be interesting to discuss utopian and dystopian AI futures, they may not provide a good context for policy discussions. As discussed above, AI is many things and comes with risks and opportunities. It follows, perhaps, that AI policy should do many things. We do not need to choose between accepting AI as it is or rejecting technological advancement. AI policy can support the very best AI applications that offer genuine social value, such as healthier lives and cleaner environments, while ensuring that citizens are protected — and empowered to protect themselves — from applications that erode consumer choice and protection or compromise democratic processes and social cohesion. This approach would put human agency back at the center of AI development and provide a basis for citizens to trust AI systems and governance.

For each of these messages, it is clear that policymakers would benefit from developing a more sophisticated understanding of the functionality and impacts of AI rather than focusing on the development of one common definition.

Article first published in The human program. A transatlantic agenda for reclaiming our digital future

