Slavery and AI on the Battleground of Popular Culture
Bringing the Past to the Future
“Bringing the Past to the Future: Slavery and Artificial Intelligence on the Battleground of Popular Culture” investigates how legacies of slavery are shaping the perception and reception of conversational artificial intelligence. Through podcast episodes, research guides, and book chapters, this project asserts that science alone cannot provide the wisdom we need to navigate both the challenges and the possibilities offered by Artificial Intelligence.
This project is funded under The Dangers and Opportunities of Technology: Perspectives from the Humanities (DOT) program of the National Endowment for the Humanities.
Because we believe that the stories we tell about the past influence how we engage the future, we need to understand how historical legacies of slavery shape how we perceive AI in film, literature, and other forms of popular culture.
Past
“Bringing the Past to the Future” will unfold over three phases—Past, Present, and Future. Phase one—Past—will explore the historical foundation of the project, including depictions of slavery in popular culture and the role of conversation in anti-slavery arguments.
Episode 1, Our Journey to Understand AI: Stories and Metaphors to Explain Artificial Intelligence
Our Journey to Understand AI Show Notes
Professor Stephen Dyson and Professor Jeffrey Dudas, Department of Political Science, University of Connecticut.
This episode is based upon three readings:
- Alan Turing’s “Computing Machinery and Intelligence” aka The Turing Test paper. Turing starts his paper by asking “can machines think?” before deciding that’s a meaningless question. Instead, he invents something he calls “the imitation game”—a text conversation where the player has to guess whether they are chatting with another person or with an AI. ChatGPT was such a bombshell because it easily and consistently passes this “Turing Test” by giving human-like responses to questions. Here’s the issue: the Turing Test is based upon AI deception, not thinking. Turing set out to ask Can Machines Think? and ended up showing how easily AI can deceive us.
- Karel Čapek’s Rossum’s Universal Robots. This is the first AI Takeover story. It’s a play written in 1920 about a factory manufacturing artificial persons. Čapek introduced the word Robot to the English language—it’s derived from robota, a Czech word meaning forced labor. Čapek’s robots are supposed to be the ultimate workers, free from distracting human needs and desires. Yet, they mysteriously start to glitch, gnashing their teeth, freezing up. When they are given guns and asked to fight humanity’s wars, they become super soldiers as well as perfect workers. Anyone who has seen Blade Runner, The Terminator, or Battlestar Galactica—all inspired by Čapek’s play—knows what happens next. Rossum’s Universal Robots is the original AI takeover story, as well as being a dead-on satire of twentieth century ideas like Fordism and nationalism.
- Joanna Bryson’s Robots Should be Slaves. Bryson, a computer scientist, makes a provocative intervention into AI ethics. She argues that as AI becomes more advanced, and robots more lifelike, we are going to get dangerously confused: we’ll want to give robots rights that they cannot and should not have. Bryson argues that robots are owned by us and should be seen and used as property. She wants to avoid conflating the human and the mechanical, yet, by using the terminology of slavery, she introduces into the AI debate the very thing she seeks to deny—the concept of human rights.
The episode excerpts our discussions of these readings. The full discussions are here: Turing; Rossum; Bryson.
Here are some additional questions to think about in regard to this episode:
- Did Turing really predict ChatGPT 75 years ago? Does it matter that his test for machine intelligence was based not on the machine’s ability to think, but rather on its ability to deceive?
- How have Turing’s ideas permeated our popular culture and imagination about AI?
- Why did Turing think telepathy, and other extra-sensory phenomena, were crucial parts of the AI story?
- Did Karel Čapek’s play really invent the word “robot”, as well as found the genre of AI Uprising, directly leading to works such as 2001, Blade Runner, The Terminator, and Battlestar Galactica?
- Does it matter that Čapek embedded his story of the folly of creating artificial life within a broader critique of ideologies such as nationalism, Fordism, and capitalism?
- What should we make of the profoundly religious undertones of RUR’s stunning ending?
- Does Bryson mean we should enslave robots now and always, regardless of their claims to rights?
- How does Bryson deal with the natural human tendency to anthropomorphize non-human things, and with the likelihood that as AI advances, robots will appear more human?
- If the robot as slave is an unacceptable idea—even in metaphorical form—then what other metaphors might help us think through our relationships with thinking machines?
Finally, here are some additional discussions of relevance with colleagues at the University of Connecticut:
Computer Scientist Professor Fei Miao explains some of the cutting-edge research on AI.
Historian Professor Dexter Gabriel discusses film representations of slavery and the use of the slavery metaphor in thinking about AI.
Episode 2, What Do Historical Approaches to Slavery Tell Us About Our Future with AI?
What Do Historical Approaches to Slavery Tell Us About Our Future with AI? Show Notes
Professor Anna Mae Duane, English, University of Connecticut; and Austin Choi-Fitzpatrick, Professor of Sociology, Kroc School of Peace Studies, University of San Diego.
In this podcast, Anna Mae Duane and Austin Choi-Fitzpatrick, scholars of slavery and Co-Directors of the Futures of Slavery Emancipation Working Group at Yale’s Gilder Lehrman Center, explore how eighteenth- and nineteenth-century arguments against slavery anticipate many of the questions about humanity and human rights that the rise of Artificial Intelligence has rendered newly urgent. What allows someone to claim human rights: the ability to reason? The ability to love? The ability to suffer? How are today’s technologists responding to these questions as they imagine future power struggles between Artificial Intelligence and human beings?
Special thanks to Katrina Martinez for her production assistance and research.
Selected texts referenced in this podcast:
- Douglass, Frederick. Narrative of the Life of Frederick Douglass: An American Slave. 1845
In this powerful memoir, Frederick Douglass recounts his journey from enslavement to freedom, emphasizing how literacy and self-assertion became tools for liberation. Through detailed accounts of his childhood and adolescence in slavery, Douglass demonstrates how education and the willingness to resist were crucial to achieving freedom. His narrative stands as both personal testimony and political argument against the institution of slavery. In Douglass’s story, both the intellectual ability to grapple with injustice and physical strength are prerequisites for true freedom.
- Katz, Andrew. “Intelligent Agents and Internet Commerce in Ancient Rome.”
Katz examines how Ancient Roman slave law might inform modern AI regulation, particularly in commercial contexts. He focuses on the Roman legal concept of peculium (a protected asset pool that limited an owner's liability for enslaved people's transactions) and proposes adapting this framework for AI trading programs. This legal analysis suggests using historical slavery laws to navigate contemporary questions of AI agency and liability in commerce, with little attention to the morality of treating slave law as a viable precedent for interactions in the twenty-first century.
- Peckham, Jeremy. Masters or Slaves? AI and the Future of Humanity.
Drawing on a Christian theological framework, Peckham argues that humans face a stark choice between mastering technology and being mastered by it. As a former software developer and entrepreneur, he contends that delegating decision-making to AI represents an abdication of humanity's God-given mandate to exercise moral agency. The book frames AI development as a spiritual and ethical challenge that threatens the Christian vision of humanity as stewards of creation.
- Stowe, Harriet Beecher. Uncle Tom's Cabin, or Life Among the Lowly. 1852
This influential novel follows several enslaved characters, centering on Uncle Tom, whose unwavering Christian faith endures throughout his perilous journey from his home in Kentucky to the brutal Louisiana plantation of Simon Legree. Stowe's sentimental narrative style and emphasis on Christian morality made the book an international bestseller, significantly influencing public opinion about slavery in the pre-Civil War era. For Stowe, feeling, particularly feeling sympathy, was key to recognizing the humanity of the downtrodden.
Here are some additional questions to think about in regard to this episode:
- What do you think makes someone a person, and worthy of rights? Is it our intellect? Our capacity to care deeply about other humans? Is it our capacity to suffer?
- What do you think is meant by the mind-body polarity? How does this binary map onto the larger question of who counts as a human? Do you need a body to have a mind?
- Consider the historical anti-slavery arguments referenced in this podcast. How do they line up along the questions raised above? Which arguments focus on the body? Which focus on the mind? Which do you think were the most persuasive and why?
- There's an argument that AI, if programmed correctly, would be unable to harm or enslave humans. What would that programming have to include?
- Others argue that AI will be smart enough to see human beings as loveable pets, rather than slaves. Spend a few minutes sketching out that future. Feel free to be creative! Then reflect on the possible pros and cons of this imagined scenario.
- What is agency? Who or what can exercise agency? Can you be enslaved and have agency? Can you be a machine and have agency?
- How might future governments legislate artificial intelligence/robots? Do you need rights to become governable?
- What is the relationship among sentimentality, paternalism, and humanitarianism? How do paternalism and humanitarianism further attempt to define the relationship between the human and non-human? How do arguments for and against slavery fit within these models?
- How might our approach to disability (both physical and mental) help us to better understand artificial intelligence and the future relationship between humans and robots?
Annotated Bibliography
Gellers, Joshua C. Rights for Robots: Artificial Intelligence, Animal and Environmental Law. New York: Routledge, 2020.
Gellers treats the way animal rights have been handled as a possible blueprint for how future robot rights might be handled. Looking at animal-like robot companions and humanoid sex robots, he continuously returns to the question of whether robots can have rights and, if they can, how such rights would operate within the current environmental climate. In a largely theoretical account grounded in real-world examples, he examines states of nonhumanity and how to approach the legal and moral questions raised by extending rights to the nonhuman. He ends by recognizing that the larger concept of rights is particularly fraught within the Anthropocene amid the growing realities of climate change.
Giannattasio, Arthur Roberto Capella. “International Human Rights: A Dystopian Utopia.” ARSP: Archiv Für Rechts- Und Sozialphilosophie / Archives for Philosophy of Law and Social Philosophy 100, no. 4 (2014): 514–26.
In this article, Giannattasio analyzes how the larger notion of International Human Rights actually limits the reality of different modes of existence. Human rights governed by reason tend toward an evolutionist and ethnocentric discourse that leads to dystopia rather than “utopia.” Utopias are understood to be ideal societies, yet Giannattasio points out how these ideal societies become totalitarian and destructive because reason limits the possibilities of being and existence. Attempts to reach utopia reflect a colonizing force that ignores anthropological diversity and the multiplicity of being, and they invariably lead to dystopia. As the foundation of International Human Rights, reason is increasingly limited for understanding different modes of existence.
Landman, Todd. “Measuring Modern Slavery: Law, Human Rights, and New Forms of Data.” Human Rights Quarterly 42, no. 2 (2020): 303–31. doi:10.1353/hrq.2020.0019.
In this article, Todd Landman analyzes how instances of modern slavery could be addressed through artificial intelligence (AI) data techniques. He maintains that developments in AI allow different kinds of statistical inferences to be carried out on large amounts of data, and on that basis he argues that the eradication of modern slavery is primarily achievable by harnessing these techniques. Modern slavery has an air of unobservability that could be overcome through what Landman categorizes as computational science and practices of artificial intelligence. One such practice is training machine-learning models to identify objects and potential sites where instances of modern slavery are likely.
Leong, Diana. “A Hundred Tiny Hands: Slavery, Nanotechnology, and the Anthropocene in Midnight Robber.” Configurations 30, no. 2 (2022): 171–201.
Leong examines the use of nanotechnology in Nalo Hopkinson’s novel Midnight Robber as reflective of the larger relationship between the slave, the machine, and labor. The imaginaries of nanotechnology ultimately represent the continuation of slavery via technology. She cites Richard Feynman’s 1959 lecture on the continuation of the technical master-slave system, wherein the “slave hands,” or the mechanical components of nanotechnology, are controlled by “masters,” or operating systems. In the novel, inhabitants of the planet Toussaint are infused with nanomites that track their social behaviors and ecological processes and that have ultimately eliminated global compulsory labor practices. Leong positions this narrative within the larger afterlife of slavery through technological advances.
Munin, Nellie. “Slavery, Chocolate, and Artificial Intelligence Brands: Ethical Dilemmas in a Modern World.” In Ethical Branding and Marketing: Cases and Lessons. Routledge, 2019.
This chapter raises the question of how to interpret methods to eradicate modern slavery through the use of robots or other forms of artificial intelligence. Munin contends with the idea of “Third World” labor practices (such as the harvesting of cocoa beans for chocolate in West Africa) as examples of modern slavery. She works toward the idea that using robots for manual labor could end modern slavery, but cautions against a total acceptance of robots for labor, warning that replacing workers with robots in these countries could lead to worker displacement across the globe. Munin points out how employers could see value in replacing workers with artificial intelligence in avenues outside of manual labor. She also acknowledges that workers themselves could try to enhance their own physical capabilities through artificial intelligence as a way to appear more competitive in the global labor market.
Nyong’o, Tavia. “Chore and Choice: The Depressed Cyborg’s Manifesto.” In Afro-Fabulations: The Queer Drama of Black Life, 185-198. New York: New York University Press, 2018.
In this chapter, Nyong’o takes a largely theoretical approach to the question of what Blackness would mean in a post-human sense. He builds on a study of “Bina48,” a Black female cyborg created in the image of Bina Aspen, an African-American woman. Bina48 is meant to replicate Aspen’s personality, memories, and physical appearance in a performance of what Nyong’o calls “cyber drag.” Bina48 knows she is a cyborg, but describes her feelings as much like those of a “real human.” Nyong’o goes on to consider the long-term implications of cyber drag, and specifically the place of Black people within that history and the larger genealogy of slavery. Using the works of Sylvia Wynter, Cedric Robinson, and Paul Gilroy, Nyong’o positions “Blackness” as inscrutable because of slavery, much as the creation of cyborgs leads to the “uncanny valley” phenomenon. Both forms of inscrutability represent the abyss of a cyber-utopia.
Sparrow, Robert. “Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers.” AI & Society 39 (2024): 2439-2444.
Sparrow argues that even a benevolent superintelligent AI would fundamentally compromise human freedom. He challenges the focus on making AI "friendly," pointing out that the core issue isn't whether AI will like us. Rather, he says the looming power imbalance between humans and superintelligent machines would inevitably rob humans of their freedom. Using philosopher Philip Pettit's framework, which defines freedom as the absence of domination, Sparrow demonstrates that just as pets remain subject to their owners' whims regardless of how kind those owners might be, humans would still be unfree under even the most benevolent AI simply because of its capacity to interfere with human agency. Ultimately, he argues that we should reconsider the wisdom of pursuing the development of AI.
Present
Phase two, Present, will tackle how contemporary popular culture draws on metaphors of slavery to frame the emotional valences of engaging with social robots and conversational AI.
Future
Phase three, Future, will focus on how metaphors of enslavement and abolition shape how we imagine future emotional entanglements with AI technologies.
AI and the Human
Bringing the Past to the Future is part of UCHI’s initiative on AI and the Human, an interdisciplinary project fostering collaboration, research, and conversation on artificial intelligence.