AI is like… A literature review of AI metaphors and why they matter for policy
Abstract
As AI systems have become increasingly capable and impactful, there has been significant public and policymaker debate over this technology’s impacts—and the appropriate legal or regulatory responses. Within these debates many have deployed—and contested—a dazzling range of analogies, metaphors, and comparisons for AI systems, their impact, or their regulation.
This report reviews why and how metaphors and analogies matter to both the study and practice of AI governance, in order to contribute to more productive dialogue and more reflective policymaking. It first reviews five stages at which different foundational metaphors play a role: in shaping the processes of technological innovation, the academic study of technologies’ impacts, the regulatory agenda, the terms of the policymaking process, and legislative and judicial responses to new technology. It then surveys a series of cases where the choice of analogy materially influenced internet policy issues as well as (recent) AI law issues. The report then provides a non-exhaustive survey of 55 analogies that have been given for AI technology and some of their policy implications. Finally, it discusses the risks of utilizing unreflexive analogies in AI law and regulation.
By disentangling the role of metaphors, analogies, and frames in these debates, and the space of analogies for AI, this survey does not aim to argue against the use or role of analogies in AI regulation—but rather to facilitate more reflective and productive conversations on these timely challenges.
Executive summary
This report provides an overview, taxonomy, and preliminary analysis of the role of basic metaphors and analogies in AI governance.
Aim: The aim of this report is to contribute to improved analysis, debate, and policy for AI systems by providing greater clarity around how analogies and metaphors can affect technology governance generally, how they may shape AI governance specifically, and how to improve the processes by which analogies or metaphors for AI are considered, selected, deployed, and reviewed.
Summary: In sum, this report:
- Draws on technology law scholarship to review five ways in which metaphors or analogies exert influence throughout the entire cycle of technology policymaking by shaping:
- patterns of technological innovation;
- the study of particular technologies’ sociotechnical impacts or risks;
- which of those sociotechnical impacts make it onto the regulatory agenda;
- how those technologies are framed within the policymaking process in ways that highlight some issues and policy levers over others; and
- how these technologies are approached within legislative and judicial systems.
- Illustrates these dynamics with brief case studies where foundational metaphors shaped policy for cyberspace, as well as for recent AI issues.
- Provides an initial atlas of 55 analogies for AI, which have been used in expert, policymaker, and public debate to frame discussion of AI issues, and discusses their implications for regulation.
- Reflects on the risks of adopting unreflexive analogies and misspecified (legal) definitions.
Below, the reviewed analogies are summarized in Table 1.
Table 1: Overview of surveyed analogies for AI (brief, without policy implications)
| Theme | Frame (varieties) |
|---|---|
| Essence: Terms focusing on what AI is | Field of science |
| | IT technology (just better algorithms, AI as a product) |
| | Information technology |
| | Robots (cyber-physical systems, autonomous platforms) |
| | Software (AI as a service) |
| | Black box |
| | Organism (artificial life) |
| | Brain |
| | Mind (digital minds, idiot savant) |
| | Alien (shoggoth) |
| | Supernatural entity (god-like AI, demon) |
| | Intelligence technology (markets, bureaucracies, democracies) |
| | Trick (hype) |
| Operation: Terms focusing on how AI works | Autonomous system |
| | Complex adaptive system |
| | Evolutionary process |
| | Optimization process |
| | Generative system (generative AI) |
| | Technology base (foundation model) |
| | Agent |
| | Pattern-matcher (autocomplete on steroids, stochastic parrot) |
| | Hidden human labor (fauxtomation) |
| Relation: Terms focusing on how we relate to AI, as (possible) subject | Tool (just technology) |
| | Animal |
| | Moral patient |
| | Moral agent |
| | Slave |
| | Legal entity (digital person, electronic person, algorithmic entity) |
| | Culturally revealing object (mirror to humanity, blurry JPEG of the web) |
| | Frontier (frontier model) |
| | Our creation (mind children) |
| | Next evolutionary stage or successor |
| Function: Terms focusing on how AI is or can be used | Companion (social robots, care robots, generative chatbots, cobot) |
| | Advisor (coach, recommender, therapist) |
| | Malicious actor tool (AI hacker) |
| | Misinformation amplifier (computational propaganda, deepfakes, neural fake news) |
| | Vulnerable attack surface |
| | Judge |
| | Weapon (killer robot, weapon of mass destruction) |
| | Critical strategic asset (nuclear weapons) |
| | Labor enhancer (steroids, intelligence forklift) |
| | Labor substitute |
| | New economic paradigm (fourth industrial revolution) |
| | Generally enabling technology (the new electricity / fire / internal combustion engine) |
| | Tool of power concentration or control |
| | Tool for empowerment or resistance (emancipatory assistant) |
| | Global priority for shared good |
| Impact: Terms focusing on the unintended risks, benefits, or side-effects of AI | Source of unanticipated risks (algorithmic black swan) |
| | Environmental pollutant |
| | Societal pollutant (toxin) |
| | Usurper of human decision-making authority |
| | Generator of legal uncertainty |
| | Driver of societal value shifts |
| | Driver of structural incentive shifts |
| | Revolutionary technology |
| | Driver of global catastrophic or existential risk |
Introduction
Everyone loves a good analogy like they love a good internet meme—quick, relatable, shareable,1 memorable, and good for communicating complex topics to family.
Background: As AI systems have become increasingly capable and have had increasingly public impacts, there has been significant public and policymaker debate over the technology. Given the breadth of the technology’s application, many of these discussions have come to deploy—and contest—a dazzling range of analogies, metaphors, and comparisons for AI systems in order to understand, frame, or shape the technology’s impacts and its regulation.2 Yet the speed with which many jump to invoke particular metaphors—or to contest the accuracy of others—leads to frequent confusion over these analogies, how they are used, and how they are best evaluated or compared.3
Rationale: Such debates are not just about wordplay—metaphors matter. Framings, metaphors, analogies, and (at the most specific end) definitions can strongly affect many key stages of the world’s response to a new technology, from the initial developmental pathways for technology, to the shaping of policy agendas, to the efficacy of legal frameworks.4 They have done so consistently in the past, and we have reason to believe they will especially do so for (advanced) AI. Indeed, recent academic, expert, public, and legal contests around AI often already strongly turn on “battles of analogies.”5
Aim: Given this, there is a need for those speaking about AI to better understand (a) when they speak in analogies—that is, when the ways in which AI is described (inadvertently) import one or more foundational analogies; (b) what it does to utilize one or another metaphor for AI; (c) what different analogies could be used instead; (d) how the appropriateness of one or another metaphor is best evaluated; and (e) what, given this, might be the limits or risks of jumping at particular analogies.
This report aims to respond to these questions and contribute to improved analysis, debate, and policy by providing greater clarity around the role of metaphors in AI governance, the range of possible (alternate) metaphors, and good practices in constructing and using metaphors.
Caveats: The aim here is not to argue against the use of any analogies in AI policy debates—if that were even possible. Nor is it to prescribe (or dismiss) one or another metaphor for AI as “better” (or “worse”) per se. The point is not that one particular comparison is the best and should be adopted by all, or that another is “obviously” flawed. Indeed, in some sense, a metaphor or analogy cannot be “wrong,” only more or less tenuous, and more or less suitable when considered from the perspective of particular values or a particular (regulatory) purpose. As such, different metaphors may work best in different contexts. Given this, this report highlights the diversity of analogies in current use and provides context for more informed future discourse and policymaking.
Terminology: Strictly speaking, there is a difference between a metaphor—“an implied comparison between two things of unlike nature that yet have something in common”—and an analogy—“a non-identical or non-literal similarity comparison between two things, with a resulting predictive or explanatory effect.”6 However, while in legal contexts the two can be used in slightly different ways, cognitive science suggests that humans process information by metaphor and by analogy in similar ways.7 As a result, within this report, “analogy” and “metaphor” will be used relatively interchangeably to refer to (1) communicated framings of an (AI) issue that describe that issue (2) through terms, similes, or metaphors which rely on, invoke, or import references to a different phenomenon, technology, or historical event, which (3) is (assumed to be) comparable in one or more ways (e.g., technical, architectural, political, or moral) that (4) are relevant to evaluating or responding to the (AI) issue at hand. Furthermore, the report will use the term “foundational metaphor” to discuss cases where a particular metaphor for the technology has become so deeply established and embedded within larger policy programs that its nature as a metaphor may no longer be apparent.
Structure: Accordingly, this report now proceeds as follows. In Part I, it discusses why and how definitions matter to both the study and practice of AI governance. It reviews five ways in which analogies or definitions can shape technology policy generally. To illustrate this, Part II reviews a range of cases in which deeply ingrained foundational metaphors have shaped internet policy as well as legal responses to various AI uses. In Part III, this report provides an initial atlas of 55 different analogies that have been used for AI in recent years, along with some of their regulatory implications. Part IV briefly discusses the risks of using analogies in unreflexive ways.
I. How metaphors shape technology governance
Given the range of disciplinary backgrounds represented in debates over AI, we should not be surprised that the technology is perceived and understood in very different ways.
Nonetheless, it matters to get clarity, because terminological and analogical framing effects happen at all stages in the cycle from technological development to societal response. They can shape the initial development processes for technologies as well as the academic fields and programs that study their impacts.8 Moreover, they can shape both the policymaking processes and the downstream judicial interpretation and application of legislative texts.
1. Metaphors shape innovation
Metaphors and analogies are strongly rooted in human psychology.9 Even some nonhuman animals think analogically.10 Indeed, human creativity has even been defined as “the capacity to see or interpret a problematic phenomenon as an unexpected or unusual instance of a prototypical pattern already in one’s conceptual repertoire.”11
Given this, metaphors and analogies can shape and constrain the ability of humans to collectively create new things.12 In this way, technology metaphors can affect the initial human processes of invention and investment that drive the development of AI and other technologies in the first place. It has been suggested that foundational metaphors can influence the organization and direction of scientific fields—and even that all scientific frameworks could to some extent be viewed as metaphors.13 For example, the fields of cell biology and biotechnology have for decades been shaped by the influential foundational metaphor that sees biological cells as “machines,” which has led to sustained debates over the scientific use and limits of that analogy in shaping research programs.14
More practically, at the development and marketing stage, metaphors can shape how consumers and investors assess proposed startup ideas15 and which innovation paths attract engineer, activist, and policymaker interest and support. In some such cases, metaphors can support and spur on innovation; for instance, it has been argued that through the early 2000s, the coining of specific IT metaphors for electric vehicles—as a “computer on wheels”—played a significant role in sustaining engineer support for and investment in this technology, especially during an industry downturn in the wake of General Motors’ sudden cancellation of its EV1 electric car.16
Conversely, metaphors can also hold back or inhibit certain pathways of innovation. For instance, in the Soviet Union in the early 1950s, the field of cybernetics (along with other fields such as genetics or linguistics) fell victim to anti-American campaigns, which characterized it as “an ‘obscurantist’, ‘bourgeois pseudoscience’”.17 While this did not affect the early development of Soviet computer technology (which was highly prized by the state and the military), the resulting ideological rejection of the “man-machine” analogy by Marxist-Leninist philosophers led to an ultimately dominant view, in Soviet science, of computers as solely “tools to think with” rather than “thinking machines.” This held back the consolidation of the field (such that even the label “AI” would not be recognized by the Soviet Academy of Sciences until 1987) and shifted research attention towards projects focused on the “situational management” of large complex systems rather than the pursuit of human-like thinking machines.18 This stood in contrast to US research programs, such as DARPA’s 1983–1993 Strategic Computing Initiative, an extensive $1 billion program to achieve “machines that think.”19
2. Metaphors inform the study of technologies’ impacts
Particular definitions also shape and prime the academic fields that study the impacts of these technologies (and which often may uncover or highlight particular developments as issues for regulation). Definitions affect which disciplines are drawn to work on a problem, what tools they bring to hand, and how different analyses and fields can build on one another. For instance, it has been argued that the analogy between software code and legal text has supported greater and more productive engagement by legal scholars and practitioners with such code at the level of its (social) meaning and effects (rather than narrowly at the level of the techniques used).20 Given this, terminology can affect how AI governance is organized as a field of analysis and study, what methodologies are applied, and what risks or challenges are raised.
3. Metaphors set the regulatory agenda
More directly, particular definitions or frames for a technology can set and shape the policymaking agenda in various ways.
For instance, terms and frames can raise (or suppress) policy attention for an issue, affecting whether policymakers or the public care (enough) about a complex and often highly technical topic to take it up for debate or regulation. It has been argued, for example, that framings focusing on the viscerality of the injuries inflicted by a new weapon system boosted past international campaigns to ban blinding lasers and antipersonnel mines, yet proved less successful in spurring effective advocacy around “killer robots.”21
Moreover, metaphors—and especially specific definitions—can shape (government) perceptions of the empirical situation or state of play around a given issue. For instance, the particular definition used for “AI” can directly affect which (industrial or academic) metrics are used to evaluate different states’ or labs’ relative achievements or competitiveness in developing the technology. In turn, that directly shapes downstream evaluations of which nation is “ahead” in AI.22
Finally, terms can frame the relevant legal actors and policy coalitions, enabling (or inhibiting) inclusion and agreement at the level of interest or advocacy groups that push for (or against) certain policy goals. For instance, the choice of particular terms or framings that meet with broad agreement or acceptance amongst many actors can make it easier for a diverse set of stakeholders to join together in pushing for regulatory action. Such agreement may be fostered by definitional clarity, when terms or frames are transparent and meet with wide acceptance, or by definitional ambiguity, when a broad term (such as “ethical AI”) allows sufficient ambiguity that different actors can meet on an “incompletely theorized agreement”23 to pursue a shared policy program on AI.
4. Metaphors frame the policymaking process
Terms can have a strong overall effect on policy issue-framing, foregrounding different problem portfolios as well as regulatory levers. For instance, early societal debates around nanotechnology were significantly influenced by analogies with asbestos and genetically modified organisms.24
Likewise, regulatory initiatives that frame AI systems as “products” imply that these fit easily within product safety frameworks—even though that may be a poor or insufficient model for AI governance, for instance because it fails to address risks arising at the development stage25 or because it cannot adequately capture fuzzier impacts on fundamental rights that are not easily classified as consumer harms.26
This is not to say that the policy-shaping influence of terms (or explicit metaphors) is absolute and irrevocable. For instance, in a different policy domain, a 2011 study found that using metaphors that described crime as a “beast” led study participants to recommend law-and-order responses, whereas describing it as a “virus” led them to put more emphasis on public-health-style policies. However, even under the latter framing, law-and-order policy responses still prevailed, simply commanding a smaller majority than they would otherwise.27
Nonetheless, metaphors do exert sway throughout the policymaking process. For instance, they can shape perceptions of the feasibility of regulation by certain routes. As an example, framings of digital technologies that emphasize certain traits of technologies—such as the “materiality” or “seeming immateriality,” or the centralization or decentralization, of technologies like submarine cables, smart speakers, search engines, or the bitcoin protocol—can strongly affect perceptions of whether, or by what routes, it is most feasible to regulate that technology at the global level.28
Likewise, different analogies or historical comparisons for proposed international organizations for AI governance—ranging from the IAEA and IPCC to the WTO or CERN—often import tacit analogical comparisons (or rather constitute “reflected analogies”) between AI and those organizations’ subject matter or mandates in ways that shape the perceptions of policymakers and the public regarding which of AI’s challenges require global governance, whether or which new organizations are needed, and whether the establishment of such organizations will be feasible.29
5. Metaphors and analogies shape the legislative & judicial response to tech
Finally, metaphors, broad analogies, and specific definitions can frame the legal and judicial treatment of a technology, both in the ex ante application of AI-focused regulations and in the ex post judicial interpretation of such AI-specific legislation or of general regulations in cases involving AI.
Indeed, much of legal reasoning, especially in court systems, and especially in common law jurisdictions, is deeply analogical.30 This is for various reasons.31 For one, legal actors are also human, and strong features of human psychology can skew them towards the use of analogies that refer to known and trusted categories. As Mandel has argued, “availability and representativeness heuristics lead people to view a new technology and new disputes through existing frames, and the status quo bias similarly makes people more comfortable with the current legal framework.”32 This is particularly the case because much of legal scholarship and practice aims to be “problem-solving” rather than “problem-finding”33 and to respond to new problems by appealing to pre-existent (ethical or legal) principles, norms, values, codes, or laws.34 Moreover, from an administrative perspective, it is often easier and more cost-effective to extend existing laws by analogy.
Finally, and more fundamentally, the resort to analogy by legal actors can be a shortcut that aims to apply the law, and solve a problem, through an “incompletely theorized agreement” that does not require reopening contentious questions or debates over the first principles or ultimate purposes of the law,35 or renegotiating hard-struck legislative agreements. This is especially the case at the level of international law, where either negotiating new treaties or explicitly amending multilateral treaties to incorporate a new technology within an existing framework can be fraught, drawn-out processes,36 such that many actors may prefer to address new issues (such as cyberwar) within existing norms or principles by analogizing them to well-established and well-regulated behaviors.37
Given this, when confronted with situations of legal uncertainty—as often happens with a new technology38—legal actors may favor the use of analogies to stretch existing law or to interpret new cases as falling within existing doctrine. That does not mean that courts immediately settle or converge on one particular “right” analogy. Indeed, multiple analogies are always possible, and these can have significantly different implications for how the law is interpreted and applied. That means that many legal cases involving technology will involve so-called “battles of analogies.”39 For example, in recent class action lawsuits that have accused the providers of generative AI systems such as Stable Diffusion and Midjourney of copyright infringement, plaintiffs have argued that these generative AI models are “essentially sophisticated collage tools, with the output representing nothing more than a mash-up of the training data, which is itself stored in the models as compressed copies.”40 Some have countered that this analogy suffers from technical inaccuracies, since current generative AI models do not store compressed copies of the training data, such that a better analogy would be that of an “art inspector” that takes every measurement possible—implying that model training either is not governed by copyright law or constitutes fair use.41
Finally, even if specific legislative texts move to adopt clear, specific statutory definitions for AI—in a way that avoids (explicit) comparison or analogy with other technologies or behavior—this may not entirely avoid framing effects. Most obviously, legislative definitions for key terms such as “AI” affect the material scope of regulations and policies that use and define such terms.42 Indeed, particular definitions have impacts on regulation not only ex ante but also ex post: in many jurisdictions, legal terms are interpreted and applied by courts based on their widely shared “ordinary meaning.”43 This means, for instance, that regulations that refer to terms such as “advanced AI,” “frontier AI,” or “transformative AI”44 might not necessarily be interpreted or applied in ways that are in line with how the term is understood within expert communities.45
All of this underscores the importance of our choice of terms and frames—whether broad and indirect metaphors or concrete and specific legislative definitions—when grappling with the impacts of this technology on society.
II. Foundational metaphors in technology law: Cases
Of course, these dynamics are not new and have been studied in depth in fields such as cyberlaw, law and technology, and technology law.46 For instance, we can see many of these framing dynamics within societal (and regulatory) responses to other cornerstone digital technologies.
1. Metaphors in internet policy: Three cases
For instance, for the complex sociotechnical system47 commonly called the internet, foundational metaphors have strongly shaped regulatory debates, at times as much as sober assessments of the nuanced technical details of the artifacts involved have.48 As noted by Rebecca Crootof:
“A ‘World Wide Web’ suggests an organically created common structure of linked individual nodes, which is presumably beyond regulation. The ‘Information Superhighway’ emphasizes the import of speed and commerce and implies a nationally funded infrastructure subject to federal regulation. Meanwhile, ‘cyberspace’ could be understood as a completely new and separate frontier, or it could be viewed as yet one more kind of jurisdiction subject to property rules and State control.”49
For example, different terms (and the foundational metaphors they entail) have come to shape internet policy in various ways and domains. Take for instance the following cases:
Institutional effects of framing cyberwar policy within cyber-“space”: For over a decade, the US military framed the internet and related systems as a “cyberspace”—that is, just another “domain” of conflict along with land, sea, air, and space—leading to strong consequences institutionally (expanding the military’s role in cybersecurity and supporting the creation of US Cyber Command) as well as for how international law has subsequently been applied to cyber operations.50
Issue-framing effects of regulating data as “oil,” “sunlight,” “public utility,” or “labor”: Different metaphors for “data” have drastically different political and regulatory implications.51 The oil metaphor emphasizes data as a valuable traded commodity that is owned by whoever “extracts” it and that, as a key resource in the modern economy, can be a source of geopolitical contestation between states. However, the oil metaphor implies that the history of data prior to its collection is not relevant and so sidesteps questions of any “misappropriation or exploitation that might arise from data use and processing.”52 Moreover, even within a regulatory approach that emphasizes geopolitical competition over AI, one can still critique the “oil” metaphor as misleading, for instance because of the ways in which it skews debates over how to assess “data competitiveness” in military AI.53 By contrast, the sunlight metaphor emphasizes data as a ubiquitous public resource that ought to be widely pooled and shared for social good, de-emphasizing individual data privacy claims; the public utility metaphor sees data as an “infrastructure” that requires public investment and new institutions, such as data trusts or personal data stores, to guarantee “data stewardship”; and the labor frame asserts the ownership rights of the individuals generating data against what are perceived as extractive or exploitative practices of “surveillance capitalism.”54
Judicial effects of treating search engines as “newspaper editorials” in censorship cases: In the mid-2000s, US court rulings involving censorship on search engines tended to analyze them by analogy to older technologies such as the newspaper editorial.55

As these examples suggest, different terms and their metaphors matter. They serve as intuition pumps for key audiences (public, policy) that otherwise may have little interest in, lack of expertise in, inferential distance to, or limited bandwidth for new technologies. Moreover, as seen in social media platforms’ and online content aggregators’ resistance to being described as “media companies” rather than “technology companies,”56 even seemingly innocuous terms can carry significant legal and policy implications—in doing so, such terms can serve as a legal “sorter,” determining whether a technology (or the company developing and marketing it) is considered as falling into one or another regulatory category.57
2. Metaphors in AI law: Three cases
Given the power of metaphors and definitions to strongly shape the direction and efficacy of technology law, we should expect them to likewise play a strong role in shaping the framing and approach of AI regulation in the future, for better or worse. Indeed, in a range of domains, they have already done so:
Autonomous weapons systems under international law: International lawyers often aim to subsume new technologies under (more or less persuasive) analogies to existing technologies or entities that are already regulated.58 As such, different analogies have been drawn between autonomous weapons systems and weapons, combatants, child soldiers, or animal combatants—all of which lead to very different consequences for their legality under international humanitarian law.59
Release norms for AI models with potential for misuse: In debates over the potential misuse risks from emerging AI systems, efforts to restrict or slow the publication of new systems with misuse potential have been challenged both by framings that pitch the field of AI as intrinsically an open science (where new findings should be shared whatever the risks) and by framings that emphasize analogies to cybersecurity (where dissemination can help defenders protect against exploits). Critically, however, both of these analogies may misstate or underappreciate the dynamics that affect the offense-defense balance of new AI capabilities: while in information security the disclosure of software vulnerabilities has traditionally favored defense, this cannot be assumed for AI research, where (among other reasons) it can be much more costly or intractable to “patch” the social vulnerabilities exploited by AI capabilities.60
Liability for inaccurate or unlawful speech produced by AI chatbots, large language models, and other generative AI: In the US, Section 230 of the 1996 Communications Decency Act protects online service providers from liability for user-generated content that they host and has accordingly been considered a cornerstone of the business model of major online platforms and social media companies.61 For instance, in spring 2023, the US Supreme Court took up two lawsuits—Gonzalez v. Google and Twitter v. Taamneh—which could have shaped Section 230 protections for algorithmic recommendations.62 While the Court’s rulings on these cases avoided addressing the issue,63 similar court cases (or legislation) could have strong implications for whether digital platforms or social media companies will be held liable for unlawful speech produced by large language model-based AI chatbots.64 If such AI chatbots are analogized to existing search engines, they might be able to rely on a measure of protection from Section 230, greatly facilitating their deployment, even if they link to inaccurate information. Conversely, if these chatbot systems are considered so novel and creative that their output goes beyond the functions of a search engine, they might instead be considered “information content providers” within the remit of the law—or simply held to be beyond the law’s remit (and protection) entirely.65 This would mean that technology companies would be held legally responsible for their AI’s outputs. If that were the case, this reading of the law would significantly restrict the profitability of many AI chatbots, given the tendency of the underlying LLMs to “hallucinate” facts.66
All this again highlights that different definitions or terms for AI will frame how policymakers and courts understand the technology. This creates a challenge for policy, which must address the transformative impact and potential risks of AI as they are (and as they may soon be), and not only as they can be easily analogized to other technologies and fields. What does that mean in the context of developing AI policy in the future?
III. An atlas of AI analogies
Development of policy must contend with the lack of settled definitions for the term “AI,” with the varied concepts and ideas projected onto it, and with the pace at which new terms—from “foundation models” to “generative AI”—are often coined and adopted.67
Indeed, this breadth of analogies coined around AI should not be surprising, given that even the term “artificial intelligence” itself has a number of aspects that support conceptual fluidity (or, alternately, confusion). This is for various reasons.68 In the first place, it invokes a concept—“intelligence”—that is in widespread and everyday use, and which for many people has strong (evaluative or normative) connotations. It is essentially a suitcase word that packages together many competing meanings,69 even while it hides deep and perhaps even intractable scientific and philosophical disagreement70 and significant historical and political baggage.71
Secondly, and in contrast to, say, “blockchain ledgers,” AI technology comes with the baggage of decades of depictions in popular culture—and indeed centuries of preceding stories about intelligent machines72—resulting in a whole genre of tropes or narratives that can color public perceptions and policymaker debates.
Thirdly, AI is an evocative general-purpose technology that sees use in a wide variety of domains and accordingly has provoked commentary from virtually every disciplinary angle, including neuroscience, philosophy, psychology, law, politics, and ethics. As a result of this, a persistent challenge in work on AI governance—and indeed, in the broader public debates around AI—has been that different people use the word “AI” to refer to widely different artifacts, practices, or systems, or operate on the basis of definitions or understandings which package together a range of implicit assumptions.73
Thus, it is no surprise that AI has been subjected to a diverse range of analogies and frames. To understand the potential implications of AI analogies, we can draw up a taxonomy of common framings of AI (see Table 2), distinguishing between analogies that focus on:
- the essence or nature of AI (what AI “is”),
- AI’s operation (how AI works),
- our relation to AI (how we relate to AI as subject),
- AI’s societal function (how AI systems are or can be used),
- AI’s impact (the unintended risks, benefits, and other side-effects of AI).
Table 2: Atlas of AI analogies, with framings and selected policy implications
| Theme | Frame (examples) | Emphasizes to policy actors (e.g.) |
|---|---|---|
| Essence: Terms focusing on what AI is | Field of science74 | Ensuring scientific best practices; improving methodologies, data sharing, and benchmark performance reporting to avoid replicability problems;75 ensuring scientific freedom and openness rather than control and secrecy.76 |
| | IT technology (just better algorithms, AI as a product77) | Business-as-usual; industrial applications; conventional IT sector regulation. Product acquisition & procurement processes; product safety regulations. |
| | Information technology78 | Economic implications of increasing returns to scale and income distribution vs. distribution of consumer welfare; facilitation of communication and coordination; effects on power balances. |
| | Robots (cyber-physical systems,79 autonomous platforms) | Physicality; embodiment; robotics; risks of physical harm;80 liability; anthropomorphism; embedment in public spaces. |
| | Software (AI as a service) | Virtuality; digitality; cloud intelligence; open-source nature of development process; likelihood of software bugs.81 |
| | Black box82 | Opacity; limits to explainability of a system; risks of loss of human control and understanding; problematic lack of accountability. But also potentially de-emphasizes the human decisions and value judgments behind an algorithmic system, and presents the technology as monolithic, incomprehensible, and unalterable.83 |
| | Organism (artificial life) | Ecological “messiness”; ethology of causes of “machine behavior” (development, evolution, mechanism, function).84 |
| | Brains | Applicability of terms and concepts from neuroscience; potential anthropomorphization of AI functionalities along human traits.85 |
| | Mind (digital minds,86 idiot savant87) | Philosophical implications; consciousness, sentience, psychology. |
| | Alien (shoggoth88) | Inhumanity; incomprehensibility; deception in interactions. |
| | Supernatural entity (god-like AI,89 demon90) | Force beyond human understanding or control. |
| | Intelligence technology91 (markets, bureaucracies, democracies92) | Questions of bias, principal-agent alignment, and control. |
| | Trick (hype) | Potential of AI exaggerated; questions of unexpected or fundamental barriers to progress, friction in deployment; “hype” as smokescreen or distraction from social issues. |
| Operation: Terms focusing on how AI works | Autonomous system | Different levels of autonomy; human-machine interactions; (potential) independence from “meaningful human control”; accountability & responsibility gaps. |
| | Complex adaptive system | Unpredictability; emergent effects; edge case fragility; critical thresholds; “normal accidents”.93 |
| | Evolutionary process | Novelty, unpredictability, or creativity of outcomes;94 “perverse” solutions and reward hacking. |
| | Optimization process95 | Inapplicability of anthropomorphic intuitions about behavior.96 Risks of the system optimizing for the wrong targets or metrics;97 Goodhart’s Law;98 risks from “reward hacking”. |
| | Generative system (generative AI) | Potential “creativity” but also unpredictability of system; resulting “credit-blame asymmetry” where users are held responsible for misuses, but can claim less credit for good uses, shifting workplace norms.99 |
| | Technology base (foundation model) | Adaptability of system to different purposes; potential for downstream reuse and specialization, including for unanticipated or unintended uses; risk that any errors or issues at the foundation level seep into later or more specialized (fine-tuned) models;100 questions of developer liability. |
| | Agent101 | Responsiveness to incentives and goals; incomplete-contracting and principal-agent problems;102 surprising, emergent, and harmful multi-agent interactions;103 systemic, delayed societal harms and diffusion of power away from humans.104 |
| | Pattern-matcher (autocomplete on steroids,105 stochastic parrot106) | Problems of bias; mimicry of intelligence; absence of “true understanding”; fundamental limits. |
| | Hidden human labor (fauxtomation107) | Potential of AI exaggerated; “hype” as a smokescreen or distraction from extractive underlying practices of human labor in AI development. |
| Relation: Terms focusing on how we relate to AI, as (possible) subject | Tool (just technology, intelligent system108) | Lack of any special relation towards AI, as AI is not a subject; questions of reliability and engineering. |
| | Animal109 | Entities capable of some autonomous action, yet lacking the full competence or ability of humans. Accordingly potentially deserving of empathy and/or (some) rights110 or protections against abusive treatment, either on their own terms111 or in light of how abusive treatment might desensitize and affect social behavior amongst humans;112 questions of legal liability and assignment of responsibility to robots,113 especially when used in warfare.114 |
| | Moral patient115 | Potential moral (welfare) claims by AI, conditional on certain properties or behavior. |
| | Moral agent | Machine ethics; ability to encode morality or moral rules. |
| | Slave116 | AI systems or robots as fully owned, controlled, and directed by humans; not to be humanized or granted standing. |
| | Legal entity (digital person, electronic person,117 algorithmic entity118) | Potential of assigning (partial) legal personhood to AI for pragmatic reasons (e.g., economic or liability reasons, or avoiding risks of “moral harm”), without necessarily implying deep moral claims or standing. |
| | Culturally revealing object (mirror to humanity,119 blurry JPEG of the web120) | Generally, implications of how AI is featured in fictional depictions and media culture.121 Directly, AI’s biases and flaws as a reflection of human or societal biases, flaws, or power relations. May also imply that any algorithmic bias derives from society rather than the technology per se.122 |
| | Frontier (frontier model123) | Novelty in terms of capabilities (increased capability and generality) and/or form (e.g., scale, design, or architectures) compared to other AI systems; as a result, new risks because of new opportunities for harm, and less well-established understanding by the research community. Broadly, implies danger and uncertainty but also opportunity; may imply operating within a wild, unregulated space, with little organized oversight. |
| | Our creation (mind children124) | “Parental” or procreative duties of beneficence; humanity as good or bad “example.” |
| | Next evolutionary stage or successor | Macro-historical implications; transhumanist or posthumanist ethics & obligations. |
| Function: Terms focusing on how AI is or can be used | Companion (social robots, care robots, generative chatbots, cobot125) | Human-machine interactions; questions of privacy, human over-trust, deception, and human dignity. |
| | Advisor (coach, recommender, therapist) | Questions of predictive profiling, “algorithmic outsourcing” and autonomy, accuracy, privacy, impact on our judgment and morals.126 Questions of patient-doctor confidentiality, as well as “AI loyalty” debates over fiduciary duties that can ensure AI advisors act in their users’ interests.127 |
| | Malicious actor tool (AI hacker128) | Possible misuse by criminals or terrorist actors. Scaling up of attacks as well as enabling entirely new attacks or crimes.129 |
| | Misinformation amplifier (computational propaganda,130 deepfakes, neural fake news131) | Scaling up of online mis- and disinformation; effects on “epistemic security”;132 broader effects on democracy and electoral integrity.133 |
| | Vulnerable attack surface134 | Susceptibility to adversarial input, spoofing, or hacking. |
| | Judge135 | Questions of due process and rule of law; questions of bias and potential self-corrupting feedback loops based on data corruption.136 |
| | Weapon (killer robot,137 weapon of mass destruction138) | In military contexts, questions of human dignity,139 compliance with the laws of war, tactical effects, strategic effects, geopolitical impacts, and proliferation rates. In civilian contexts, questions of proliferation, traceability, and risk of terror attacks. |
| | Critical strategic asset (nuclear weapons)140 | Geopolitical impacts; state development races; global proliferation. |
| | Labor enhancer (steroids,141 intelligence forklift142) | Complementarity with existing human labor and jobs; force multiplier on existing skills or jobs; possible unfair advantages & pressure on meritocratic systems.143 |
| | Labor substitute | Erosive to or threatening of human labor; questions of retraining, compensation, and/or economic disruption. |
| | New economic paradigm (fourth industrial revolution) | Changes in industrial base; effects on political economy. |
| | Generally enabling technology (the new electricity / fire / internal combustion engine144) | Widespread usability; increasing returns to scale; ubiquity; application across sectors; industrial impacts; distributional implications; changing the value of capital vs. labor; impacting inequality.145 |
| | Tool of power concentration or control146 | Potential for widespread social control through surveillance, predictive profiling, and perception control. |
| | Tool for empowerment or resistance (emancipatory assistant147) | Potential for supporting emancipation and/or civil disobedience.148 |
| | Global priority for shared good | Global public good; opportunity; benefit & access sharing. |
| Impact: Terms focusing on the unintended risks, benefits, or side-effects of AI | Source of unanticipated risks (algorithmic black swan149) | Prospects of diffuse societal-level harms or catastrophic tail-risk events, unlikely to be addressed by market forces; accordingly highlights paradigms of “algorithmic preparedness”150 and risk regulation more broadly.151 |
| | Environmental pollutant | Environmental impacts of the AI supply chain;152 significant energy costs of AI training. |
| | Societal pollutant (toxin153) | Erosive effects of AI on the quality and reliability of the online information landscape. |
| | Usurper of human decision-making authority | Gradual surrender of human autonomy and choice and/or control over the future. |
| | Generator of legal uncertainty | Driver of legal disruption to existing laws;154 driving new legal developments. |
| | Driver of societal value shifts | Driver of disruption to and shifts in public values;155 value erosion. |
| | Driver of structural incentive shifts | Driver of changes in our incentive landscape; lock-in effects; coordination problems. |
| | Revolutionary technology156 | Macro-historical effects; potential impact on par with the agricultural or industrial revolution. |
| | Driver of global catastrophic or existential risk | Potential catastrophic risks from misaligned advanced AI systems or from nearer-term “prepotent” systems;157 questions of ensuring value alignment; questions of whether to pause or halt progress towards advanced AI.158 |
Different terms for AI can therefore invoke different frames of reference or analogies. Use of analogies—by policymakers, researchers, or the public—may be hard to avoid, and they can often serve as fertile intuition pumps.
IV. The risks of unreflexive analogies
However, while metaphors can be productive (and potentially irreducible) in technology law, they also come with many risks. Given that analogies are shorthands or heuristics that compress or highlight salient features, challenges can creep in as they become more removed from the specifics of the technology in question.
Indeed, as Crootof and Ard have noted, “[a]n analogy that accomplishes an immediate aim may gloss over critical distinctions in the architecture, social use, or second-order consequences of a particular technology, establishing an understanding with dangerous and long-lasting implications.”159
Specifically:
- The selection and foregrounding of a certain metaphor hides that there are always multiple analogies possible for any new technology, and each of these advances different “regulatory narratives.”
- Analogies can be misleading by failing to capture a key trait of the technology or by imputing characteristics that do not actually exist.
- Analogies limit our ability to understand the technology—in terms of its possibilities and limits—on its own terms.160
The challenge is that unreflexive drawing of analogies in a legal context can lead to ineffective or even dangerous laws,161 especially once inappropriate analogies become entrenched.162
However, even if one tries to avoid explicit analogies between AI and other technologies, apparently “neutral” definitions of AI that seek to focus solely on the technology’s “features” can still frame policymaking in ways that are not neutral. For instance, Krafft and colleagues found that whereas definitions of AI that emphasize “technical functionality” are more widespread among AI researchers, definitions that emphasize “human-like performance” are more prevalent among policymakers, which they suggest might prime policymaking towards future threats.163
As such, it is not just loose analogies or comparisons that can affect policy, but also (seemingly) specific technical or legislative terms. The framing effects of such terms do not only occur at the level of broad policy debates but can also have strong legal implications. In particular, they can create challenges for law when narrowly specified regulatory definitions are suboptimal.164
This creates twin challenges. On the one hand, picking suitable concepts or categories can be difficult at an early stage of a technology’s development and deployment, when its impacts and limits are not always fully understood.165 At the same time, the costs of picking and locking in the wrong terms or framings within legislative texts can be significant.
Specifically, beyond the opportunity costs of establishing better concepts or terms, unreflexively establishing legal definitions for key terms can create the risk of later, downstream “governance misspecification.”166 Such misspecification can occur when regulation is originally targeted at a particular artifact or (technological) practice through a particular material scope and definition for those objects. The implicit assumption here is that the term in question is a meaningful proxy for the underlying societal or legal goals to be regulated. While that may be appropriate in many cases, there is a risk that the law becomes less efficient, ineffective, or even counterproductive if either initial misapprehension of the technology or subsequent technological developments lead to that proxy term coming apart from the legislative goals.167 Such misspecification can be seen in various cases of technology governance and regulation, including 1990s US export control thresholds for “high-performance computers” that treated the technology as far too static;168 the Outer Space Treaty’s inability to anticipate later Soviet Fractional Orbital Bombardment System (FOBS) capabilities, which were able to position nuclear weapons in space without, strictly, putting them “in orbit”;169 or initial early-2010s regulatory responses to drones or self-driving cars, which ended up operating on under- and overinclusive definitions of these technologies.170
Given this, the aim should not be to find the “correct” metaphor for AI systems. Rather, a better approach is to consider when and how different frames can be more useful for specific purposes, or for particular actors and/or (regulatory) agencies. Instead of aiming to come up with better analogies directly, this focuses regulatory debates on developing better processes for analogizing and for evaluating these analogies. Such processes can start from broad questions, such as:
- What are the foundational metaphors used in this discussion of AI? What features do they focus on? Do these matter in the way they are presented?
- What other metaphors could have been chosen for these same features or aspects of AI?
- What aspects or features of AI do these metaphors foreground? Do they capture these features well?
- What features are occluded? What are the consequences of these being occluded?
- What are the regulatory implications of these different metaphors? In terms of the coalitions they enable or inhibit, the issue and solution portfolios they highlight, or of how they position the technology within (or out of) the jurisdiction of existing institutions?
Improving the ways in which we analogize AI clearly needs significantly more work. However, doing so is critical to improving how we draw on frames and metaphors for AI, and to ensuring that—whether we are trying to understand AI itself, appreciate its impacts, or govern them effectively—our metaphors aid us rather than lead us astray.
Conclusion
As AI systems have received significant attention, many have invoked a range of diverse analogies and metaphors. This has created an urgent need for us to better understand (a) when we speak of AI in ways that (inadvertently) import one or more analogies, (b) what it does to utilize one or another metaphor for AI, (c) what different analogies could be used instead for the same issue, (d) how the appropriateness of one or another metaphor is best evaluated, and (e) what, given this, might be the limits or risks of jumping at particular analogies.
This report has aimed to contribute to answers to these questions and to enable improved analysis, debate, and policymaking for AI by providing greater theoretical and empirical backing for how metaphors and analogies matter to policy. It has reviewed five pathways by which metaphors shape and affect policy and surveyed 55 analogies used to describe AI systems. This is not meant as an exhaustive overview but as a basis for future work.
The aim here has not been to argue against the use of metaphors, but for a more informed, reflexive, and careful use of them. Those who engage in debate within and beyond the field should at least have greater clarity about the ways that these concepts are used and understood, and about the (regulatory) implications of different framings.
The hope is that this report can contribute foundations for a more deliberate and reflexive choice over what comparisons, analogies, or metaphors we use in talking about AI—and for the ways we communicate and craft policy for these urgent questions.
Also in this series
- Maas, Matthijs, and Villalobos, José Jaime. ‘International AI institutions: A literature review of models, examples, and proposals.’ Institute for Law & AI, AI Foundations Report 1 (September 2023). https://www.law-ai.org/international-ai-institutions
- Maas, Matthijs. ‘Concepts in advanced AI governance: A literature review of key terms and definitions.’ Institute for Law & AI, AI Foundations Report 3 (October 2023). https://www.law-ai.org/advanced-ai-gov-concepts
- Maas, Matthijs. ‘Advanced AI governance: A literature review.’ Institute for Law & AI, AI Foundations Report 4 (November 2023). https://law-ai.org/advanced-ai-gov-litrev