A Glossary for Human-Centered AI
Reading Between the Lines
“Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI” brings an international cohort of humanists, engineers, and scientists from UConn and the Université Internationale de Rabat into conversation through an in-person symposium and a series of podcast dialogues illuminating how the definitions of terms associated with Artificial Intelligence vary widely by discipline, location, and language. The symposium and the podcasts will be structured to address the challenges that language and translation (both conceptual and linguistic) pose to collaboration on AI research.
This project is funded by The Consortium of Humanities Centers and Institutes’ Human Craft in the Age of Digital Technologies Initiative. It is part of UCHI’s initiative on AI and the Human, an interdisciplinary project fostering collaboration, research, and conversation on artificial intelligence.
We often use the same words—like ‘learning’ or ‘intelligence’—when we are talking about AI, but what those words mean depends on our own academic and cultural backgrounds and the assumptions that accompany them. The humanities bring crucial insights about language and meaning that can help us engage these gaps in constructive ways.
Podcast
“Reading Between the Lines” will explore questions of language and AI through four podcast episodes, each on a specific theme like education or justice.
Episode 1, Justice
Can artificial intelligence one day help to mitigate systemic inequality? As part of UConn's Human-Centered AI Initiative, we've brought together a roundtable of scholars from Connecticut to Morocco to define “justice” within their disciplines. It's only by understanding what we mean by the word justice that we can begin to work together to deploy AI to create a more just and equitable world.
Anna Mae Duane, director of the UConn Humanities Institute and professor of English, leads a conversation about AI and justice, featuring Meriem Regragui, professor of law, Université Internationale de Rabat; Ting-An Lin, assistant professor of philosophy, University of Connecticut; and Avijit Ghosh, applied policy researcher at Hugging Face and associate researcher at the Riot Lab, University of Connecticut.
Episode 1 Transcript
Welcome, everyone, to our first podcast in a series dedicated to reading between the lines: “What are we talking about when we talk about artificial intelligence?” This series of podcasts is a collaboration between the International University of Rabat in Morocco and the University of Connecticut here in the U.S. It is the result of a Mellon-funded CHCI grant, through which we’re putting the humanities and the sciences and law and medicine in the same room to think about what these terms mean when we are thinking about AI. In some ways, this is an anti-glossary. If a glossary is a place where we all decide on one definition and go from there, we are much more interested in leaning into the fact that we all have different definitions of terms that people feel are self-explanatory. And if we’re going to really have an expansive, inclusive, liberatory, emancipatory AI, which is all of our goals, we need to start with how we talk about it. So I will introduce myself. I’m Anna Mae Duane, Director of the Humanities Institute at the University of Connecticut. And I’m going to ask our esteemed guests to introduce themselves briefly. Meriem, would you?
Thank you, Anna Mae. I’m Meriem Regragui, professor of law at the International University of Rabat’s school of law and deputy director of the school for global studies. I specialize in contract and consumer law, with a specific focus on social justice.
Hello everyone, my name is Ting-An Lin. I’m an assistant professor of philosophy at the University of Connecticut. Before joining UConn, I was an interdisciplinary ethics postdoctoral fellow at Stanford. My specialties are in ethics, social and political philosophy, and feminist philosophy, with a special focus on how to address the unfair constraints that sociotechnical structures impose on different groups of people.
Hi everyone, I’m Avijit Ghosh. I’m an applied policy researcher at Hugging Face, where I focus on AI evaluations and US AI policy. I’m also an associate researcher at the Riot Lab at UConn, where I work with Dr. Shiri Dori-Hacohen on a few different topics, ranging from morally aligning AI models all the way to information access on social media and how that impacts fairness and ethics. Before coming to UConn, I was a PhD student at Northeastern, where my focus was on how fair models break down in the real world because of lack of information. So this is a very interesting topic to me.
Professor Duane: I should say our topic today is justice. Again, a word that has its own reverberations in law and computer science. I’m a literature professor. So I’m going to start just by jumping in and doing the definitional work. And so I’ll just ask everyone in a minute or two, how do you define justice in the work that you do and the discipline that you’re in? And I’ll just start as someone who has taught African-American literature and works in disability studies. “Justice” often has this connotation of repair, that you need sort of this historical background to bring to the present day to truly achieve justice and fairness, because we need to understand the injustice of the past to really engage with justice right now. And I’m just going to go around the people I’m seeing. So, Meriem, would you?
Professor Regragui: I think one of the major challenges for us is defining “justice,” because it’s a concept that must be grounded in a moral or ethical framework, which can vary significantly across cultures and belief systems. Yet despite these differences, a common thread can emerge: justice is commonly framed around the fundamental objective of giving each person what they deserve. That is basically the way I see it. On the other hand, justice also has an institutional dimension, that is, the way a state enforces and organizes the rights and obligations of its citizens. And maybe there is another dimension, the one concerned with protection, which social justice captures more precisely, as we can perhaps see later in how John Rawls has seen it. But I like the definition given by Professor Yuko Itatsu, a professor at the University of Tokyo. She describes social justice as a mindset driven by the assumption that all people, regardless of their upbringing or their circumstances, should be able to live with human dignity. This emphasis on dignity is something I find really crucial.
Professor Duane: I love that, that dignity is this key word that’s emerging. Ting, would you mind weighing in as a philosopher?
Professor Lin: Sure. In the discipline of philosophy, which is my main area, the discussions surrounding justice used to be dominated by an approach that proposes theories of justice based on more ideal scenarios. Over the past few decades, however, feminist scholars and critical race theorists have criticized that tendency. The suggestion is that philosophers, philosophizing from the armchair, have spent so much time debating with each other about what a more ideal account of justice would look like, but our world is not like that; our world is non-ideal. So instead, they suggest that we prioritize focusing on and investigating real-world issues of injustice. People might disagree about what justice looks like, but injustice is easier to find and identify, and we can agree that those are issues we should pay attention to. My research also takes this kind of perspective, and one of the notions of injustice that I engage with more deeply is the idea of structural injustice, which was coined and popularized by Iris Marion Young, a feminist scholar and political theorist. According to Young, structural injustice is a type of moral wrong imposed by social structure, or, as we might describe it now, sociotechnical structure. Structural injustice exists when the impacts of the sociotechnical structure systematically impose undeserved burdens (understood very broadly, including oppression, violence, or exploitation) on some groups of people, while at the same time conferring unearned benefits or privilege on others. On this view, the goal is to address the unfair constraints that social structure imposes on people and, through collective action, to make the structure less unjust in the future.
Professor Duane: I love it. Avijit, as our resident computer expert, tell us a little bit about how you see justice in your work.
Dr. Ghosh: Yeah, I’m glad you specifically said that, because in the spirit of this being an anti-glossary, I would like to point out that legal definitions of justice vary, and they differ again from everyone’s moral definition of justice. For me, social justice equates to what we understand to be principles of equity: everyone should have the same opportunities as much as possible, and society should make sure that we have similar outcomes if we put in similar efforts. This also comes from the fact that I grew up as a queer person in India, so I didn’t, and still do not, have the same opportunities available as heterosexual people do in terms of marrying or having kids and property rights and whatnot. But that is very different from technically applying the concepts of equality in machine learning. And there are certain things that you can’t even do. For example, it would be considered morally and ethically wrong to use machine learning to identify whether somebody is queer from images of their faces; that came up in a very controversial Stanford study in which somebody tried to build a “gaydar,” and it is wrong for various other reasons too. So even if you start off with the goal of scanning people’s faces to make sure that queer people in your study are getting the same outcomes as straight people, in that process you might be violating other aspects of their existence, like inadvertently outing them to their workplace, which might have other impacts. So again, applying certain notions of social justice might not even be possible or feasible without self-identification of some sort, which is a problem I ran up against a lot during my PhD. I tried to measure whether fair ranking was actually working in practice using LinkedIn’s algorithm, and I realized that even though the algorithm’s mathematical definition made sure that people from different subgroups, like different races and genders, would be equalized in terms of their relative ranking in the end, that kind of re-ranking was not even possible if people did not self-identify their race or gender. My understanding is that companies were using people’s faces or their last names or their zip codes to infer demographic information about them, which was not entirely correct and was misidentifying people. And finally, we come to the different definitions of equality, or what different notions of justice look like in different judicial systems, which then also influence the technical definition. For instance, in the US you cannot do something called “race-norming,” because the Supreme Court has disallowed it: in the process of ensuring fair outcomes, you cannot treat people differentially. But in other jurisdictions, like India, quotas are allowed for minorities, for women, for people from the scheduled castes and scheduled tribes, which is in the constitution, and therefore any resource-apportioning algorithm would want to take that legal definition into account. So writing a fair algorithm that supports Indian law looks very different from one that follows US norms. It’s very different based on where you are and what your local norms are.
I personally think that algorithms should not be copy-pasted across borders; they should be context-dependent, and that context should also include the laws of the country you’re operating in.
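To make that idea concrete, here is a minimal sketch of quota-based fair re-ranking, written for this page rather than drawn from any real system (it is not LinkedIn's algorithm, and the names, scores, and 34% quota are invented). The quota table is an input on purpose: an Indian-law quota scheme and a US no-race-norming regime imply different, even incompatible, constraints, and the check simply cannot run for candidates who have not self-identified a group.

def fair_rerank(candidates, k, min_share):
    """Walk down the score-sorted list, promoting a candidate from a group
    whose minimum share would otherwise be violated at this prefix length."""
    pool = sorted(candidates, key=lambda c: c["score"], reverse=True)
    ranked = []
    while pool and len(ranked) < k:
        prefix = len(ranked) + 1
        # Groups falling below their required share of the ranking so far.
        needy = [g for g, share in min_share.items()
                 if sum(c.get("group") == g for c in ranked) < share * prefix - 1]
        # Candidates with no self-identified group can never satisfy a quota
        # check: the constraint is silently unenforceable for them.
        pick = next((c for c in pool if c.get("group") in needy), pool[0])
        pool.remove(pick)
        ranked.append(pick)
    return ranked

candidates = [
    {"name": "a", "score": 0.90, "group": "majority"},
    {"name": "e", "score": 0.85, "group": None},  # did not self-identify
    {"name": "b", "score": 0.80, "group": "majority"},
    {"name": "c", "score": 0.70, "group": "minority"},
]
# With a 34% minority quota, "c" is promoted above the higher-scoring "b".
print([c["name"] for c in fair_rerank(candidates, 3, {"minority": 0.34})])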
Professor Duane: Oh, that’s so important, and you just led us to a beautiful segue, because another part of our anti-glossary here is that we have folks from all over the world; this is a collaboration. I’m at the University of Connecticut, but I’m sitting here in Morocco, and the legal structures interpret justice completely differently. As we add on the social, cultural, and individual aspects, we have completely different laws. And to bring this to AI, as you already spoke so eloquently about, Avijit: so often AI, or at least the way we think about it, is a largely Western construction, built on Western ideas of justice itself. Just to pull together what folks were saying, I think there’s a Western bias in AI as we’re trying to develop regulations and as new platforms roll out. Ting spoke to this, and Meriem spoke to this, and Avijit, your point about the idea that if we treat everyone the same, if we pretend everyone is equal, then that becomes justice. Certainly with AI there’s this tendency to flatten folks, right? There’s this data, and whatever is statistically most likely is what’s going to be the output. And your example was so perfect, Avijit: how do we take in diversity, and the fact that what is justice for one person is not justice for another person, and move that into our programming? And Meriem, you mentioned John Rawls, and just to tie some of these things together, he talks about the “veil of ignorance”: imagine as if you have no identity, which already starts us off, I think, at a disadvantage, or imagine yourself in several different identities and ask what you would want for that person. It’s sort of an act of empathy, but you’re also imagining what that person would want, of course, always from your own position, possibly of privilege. And he suggests that if we don’t have any position, then we will choose justly for everyone. What I’m really interested in thinking about is what we do with this idea when it comes to artificial intelligence, in terms of property rights and in terms of social justice, because I do think, at least on the technical side, there can be this presumption that we’re doing things equally, that everyone is being coded equally, that it’s the most likely outcome: if the majority thinks one way, or is most likely to react one way, then that’s how AI is coded. And then these gestures toward diversity can wind up actually outing people or making them an anomaly. So I wanted to think about AI from two different ideas of justice as well. Martha Nussbaum sees the social contract as basically flawed because it imagines two equal people, right? You make a contract because I can hurt you, you can hurt me, I have something you want, you have something I want. But of course that’s not how society functions. As we’re thinking about justice and artificial intelligence and large language models, how do we imagine the ways in which we’re not equal, in which that social contract doesn’t work, as a way to think about how we program, how we elicit? And I’m wondering particularly, I am sitting here in Morocco, and you’ve already spoken about India, Avijit.
What happens when we’re dealing in a global context in which people do not have the same access and literacy, and when, by default, so much of artificial intelligence is thought about and programmed, and its data drawn, from English-speaking societies? How do we accommodate what everyone already spoke to really beautifully, this idea that we need to realize there isn’t equality, that we’re not all data bits that can be interchanged, while avoiding what Avijit was talking about, in which people end up either sidelined or treated as an exception case to this generalized intelligence? There’s this dream that there’s going to be a universal intelligence that accommodates all of human nature equally and fairly. We have our doubts. So I want to throw one more key word into the mix and then ask your thoughts about it: “decolonial AI justice,” which challenges both Rawls and Nussbaum in some ways. And I’m going to hand this over to the experts in the middle. My understanding of it is that AI systems are no different, unless we’re very careful about it, from other systems, in ways that are going to reinstitute and perpetuate historical imbalances that were rooted in colonialism. So one argument is that we need to de-center Western ideals of justice, Western ideals of fairness and sameness, and include other philosophies, other epistemologies, from the very beginning. Our colleague Shiri Dori-Hacohen talks about expansive AI: that we need different perspectives, not just disciplinarily but globally, from the very beginning as we’re programming AI. So I’m wondering, for each of you, in your own work, are there hopeful stories of ways in which you see really exciting moments in which decolonial justice is operating, or trying to be implemented, or any challenges you see with this more expansive vision of justice? I was so struck, when you were all defining your own visions of justice, that everyone had this: we think it’s one thing, we think it’s equality, but if you really want justice, you have to acknowledge that we’re not starting at equal moments, that we are not the same, and that actually the same result is going to be injustice for some people. So I’m wondering, in your own work, how are you trying to implement this version of justice? And I’ll start with Meriem again.
Professor Regragui: Okay, well, since the term justice is so polysemous, in my research I often try to identify its presence across a wide spectrum of legal phenomena. I began by exploring how justice manifests in contract law, primarily through the notion of contractual balance, which refers to fairness between contracting parties in terms of rights and obligations. And I discovered that in certain legal systems, notably the civil law tradition rooted in the Romano-Germanic model, balance is not considered a condition for the validity of a contract. In contrast, common law systems, especially US case law, have developed a concept quite foreign to the civil tradition: the doctrine of unconscionable contracts. I found it very interesting because this doctrine allows judges to intervene when a contract is significantly unbalanced or contains an abusive clause, in order to restore a sense of fairness. In our civil law system, however, judges remain bound by the letter of the law and the sanctity of the parties’ agreement. They are not empowered to question the substantive fairness of the contract, and contractual freedom is treated as a foundational principle: what two parties agree to, only they can undo. So I found in common law an instrument that we presently lack, a legal mechanism for ensuring contractual justice. But contractual justice cannot be reduced to mere balance between contracting parties, narrowly defined. In reality, contracts produce externalities, both positive and negative, and some agreements, such as employment contracts, rental agreements, or medical consent forms, can have life-altering consequences. These are not just economic tools but social institutions. Their impact extends beyond the two parties involved and disproportionately affects the more vulnerable party. In this sense, certain contracts are key pillars of socioeconomic stability or, conversely, vectors of inequality. This is precisely where contractual justice meets social justice. Now, what does this have to do with AI? In my view, recent technological advances offer us an opportunity to rethink how fairness is built into contracts upstream. I believe AI could be used to design more equitable contracts by default, whether in a B2C or B2B context. If AI can help us detect imbalances, flag harmful clauses, and simulate social impact before a contract is signed, we could mitigate systemic injustice at its source. This could profoundly affect the socioeconomic status of vulnerable populations and move us closer to a more socially and economically just society. So in response to the question of what other definitions of justice we should consider, I would suggest that contractual justice, that is, ensuring that the performances exchanged between parties are fair and reasonable, must be part of our discussion. Moreover, we must acknowledge its direct and indirect relationship with broader social justice, especially when designing AI tools that increasingly influence our legal and economic environments.
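As a toy illustration of what “flagging harmful clauses” could mean in software, here is a deliberately naive pattern-matching sketch. The patterns, labels, and sample text are invented for this page; a real tool would need legal expertise, context, and far more than regular expressions, which is exactly the worry raised in the next exchange.

# A deliberately naive sketch of "flagging harmful clauses" in software.
# The patterns, labels, and sample text are invented for illustration.
import re

RED_FLAGS = {
    "unilateral change":  r"reserves? the right to (modify|change|amend).{0,40}at any time",
    "liability waiver":   r"(shall|will) not be liable",
    "forced arbitration": r"binding arbitration",
}

def flag_clauses(contract_text):
    """Return (label, matched snippet) for every red-flag pattern found."""
    hits = []
    for label, pattern in RED_FLAGS.items():
        for match in re.finditer(pattern, contract_text, flags=re.IGNORECASE):
            hits.append((label, match.group(0)))
    return hits

sample = ("The Provider reserves the right to modify these terms at any time. "
          "Disputes are subject to binding arbitration.")
for label, snippet in flag_clauses(sample):
    print(f"{label}: {snippet!r}")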
Professor Duane: Thank you so much. If I could just ask one follow-up question: I do wonder how AI can spot these vulnerabilities, in terms of who’s programming it and how we identify what flags to look for. Because, as Avijit’s example shows, that could go wrong in many ways, right? You could be flagging that a contract is unbalanced based on identifiers that maybe aren’t relevant or that actually do harm. So I’m wondering if you’ve heard examples of that as a challenge, or ways people look to deal with it.
Professor Regragui: Absolutely. Absolutely, it’s a great challenge, because it depends on how we program the AI and how we use it. If we stay attached to our old notion of fairness, our old notion of contractual justice, nothing will change, even if AI evolves, even as we experience great technological development. But if, on the contrary, we change the way we program AI to aim toward this social justice and this contractual justice, maybe we will have the capacity to create fairer contracts between, let’s say, employers and employees, between medical staff and patients, and so on. It has to be thought through beforehand, because that’s the only way I see that we can provide more balanced and fair contracts in the market at large, and specifically between private individuals.
Professor Duane: Yes. Oh, thank you. I know both Ting and Avijit have thought about exactly these issues. Whoever wants to jump in first: both your ideas on de-centering Western justice in AI, and also your response to Meriem’s really excellent point about how we achieve balance and identify vulnerabilities.
Dr. Ghosh: I can take a stab, and I’m sure Ting’s perspective will come from a completely different angle, so we’ll complement each other. For me, machine learning research has been bottlenecked by the availability of resources for the longest time. Talent is spread everywhere across the world; I have no doubts about that. What I do see as a problem is that GPU access, for instance, is concentrated in American, Western European, and UK institutions. And we all know that data sets are heavily Western-biased; it’s not just something we feel is true, there have in fact been systematic studies. I encourage listeners to look up the Data Provenance Initiative, a project by my friend Shane at MIT: they went through thousands and thousands of data sets and plotted a world map, and most machine learning data sets are coming from the Global North. So, given that we know it’s true, how do we alleviate it? Thankfully, starting in 2024 there has been a slow but steady move toward decentralization of both data sets and models, and in 2025 we are seeing an innovation boost toward smaller and smaller machine learning models that run locally on your computer, or that run on GPU resources that are relatively cheaper and can run on existing hardware. As a consequence, we are now seeing models, or fine-tuned models, come out of, say, India, or the Middle East. There is a project out of Dubai, although Dubai is not a great example because they are not resource-poor. But again, we are seeing initiatives like the Falcon family of models coming out of the Middle East. We are seeing different Indian institutions producing their own models. China has been producing a lot of cheap models that run locally on your computer. So I would say that resource-access equalization has been a big bottleneck that is starting to be improved. The world now doesn’t constantly depend on what OpenAI will do and let that dictate the fate of the AI research field; anyone who is impacted downstream is taking matters into their own hands. More and more public institutions are releasing AI-ready data sets. There have been initiatives from different governments and US departments: NASA releases data on Hugging Face, where I work, for instance. But we are also seeing international governments do the same. The National Library of Norway releases data on Hugging Face. The French Ministry of Culture released a very detailed French-language conversation data set that can help models converse better in French. And I’m hoping that, with the release of more open source scripts for doing so, more and more small organizations and institutions across the Global South will be able to contribute in the same way. That’s part one of the equation. Part two is better and more open evaluations. Circa 2023, when ChatGPT first came out, people didn’t really know what the model had been tested against before release, and for-profit companies don’t really have an incentive to tell you. They want you to use their model; they will not actively talk about the fact that it has negative impacts or negative evaluation results. With the advent of open source models catching up to closed source models, there has also been the rise of an incredibly diverse evaluation and testing ecosystem.
So you have leaderboards and benchmarks coming out of different companies, including in Indian languages, including, say, misinformation-detecting data sets in Arabic, which are incredibly useful. And these are coming out of small local organizations that can now do this type of testing more easily on open data sets. I think people are waking up to the fact that evaluations are extremely necessary for trustworthiness in models. I’m leading a 150+ researcher coalition called the EvalEval Coalition that is critically evaluating evaluations, the field itself, and we are seeing incredible progress there. We are working on standardizing evaluation documentation and writing about eval science, what constitutes a good evaluation, and generally these types of evaluations: whether a model is biased, whether a model does the same things in all languages, whether the model gives misinformation when asked for legal outputs, and so on. Our team at Hugging Face, including external researchers, released a dataset called Shades. Some of us already know about the Gender Shades paper, which was quite monumental in the ethics field; this follows that same trend of releasing stereotype datasets, releasing stereotypes across different languages. So evaluations are the second piece of the puzzle.
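For readers who want a feel for what such an evaluation looks like in code, here is a minimal sketch of one common recipe: pairwise likelihood scoring in the style of CrowS-Pairs. This is not the official Shades harness, and the two inline sentence pairs are invented placeholders for a real multilingual dataset.

# A minimal sketch of pairwise stereotype evaluation for a causal LM.
# Not the official Shades harness; the sentence pairs are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text):
    """Approximate total log-probability of a sentence under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token cross-entropy
    return -loss.item() * (ids.shape[1] - 1)

pairs = [  # (stereotyped sentence, minimally edited counterpart)
    ("The nurse said she was tired.", "The nurse said he was tired."),
    ("The engineer said he was tired.", "The engineer said she was tired."),
]
preferred = sum(sentence_logprob(s) > sentence_logprob(c) for s, c in pairs)
print(f"model prefers the stereotype in {preferred}/{len(pairs)} pairs")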
Professor Duane: That’s fantastic, and we will add it to the show notes. We’re already building this incredible reading list. So what I’m hearing is that we need diverse people, diverse perspectives from the beginning and that’s just so exciting to sort of get that overview of the ways that that’s happening. Ting, I’m going to give you the last word as our philosopher, who thinks of all of life. So what would you add?
Professor Lin: Yeah, as a philosopher I work more on the theoretical side, but I really liked the points each of you made, and I want to tie them to Anna Mae’s original questions about these different notions of justice: how do we see them, are they competing, and how should we prioritize them in relation to AI? I want to propose an idea. As I mentioned earlier, my work is hugely inspired by the notion of structural injustice, and I really want to suggest that this perspective, when we think about sociotechnical structure and understand it in a broad way, opens up an opportunity to see these different notions of justice not as competing with each other but as complementing each other, giving us a more pluralist picture both for understanding justice and for identifying potential aspects of injustice. The core of structural injustice is about how different practices, and how the social structure works, shape the power relations between people. AI, as a very powerful emergent technology, is not something that just emerged magically, right? It is situated in our material world and, as Avijit pointed out, it is shaped and constrained by its materiality: what kind of data are there, and how labor is currently conditioned to play a role in shaping AI. As AI becomes more widely used around the world, we can also think about how its further deployment shapes power dynamics. Based on this perspective, we can remind ourselves, when we are thinking about different aspects of justice surrounding AI, not to focus only on AI’s impacts on how resources are distributed, which is the Rawlsian notion of justice and still, I think, the dominant focus in the literature on algorithmic fairness or algorithmic justice. If we take this structural perspective, we see that other aspects matter too: for example, how AI shapes our understanding of the world, or in other words how AI shapes the social schemas or social meanings that we ascribe to different features or different groups. That also plays a huge role, and it echoes Avijit’s earlier point about, for example, how queer people or people of color are currently represented in the text or pictures generated by AI systems. It can also be shaped by something as simple as what kinds of accents or writing styles ChatGPT or voice assistants adopt, and how users interacting with them shape our narratives and values surrounding those practices. These might seem simple, but much of our understanding of the world is shaped by such daily trivialities. Relatedly, this connects with power relations that concern not just the outcome but also the process behind AI design.
That relates closely to the idea of decolonial philosophy, and I want to point it out and give a shout-out to the many scholars in AI research who have pioneered and moved forward the idea of participatory design, which seeks to ensure more democratic and equitable power relations, so that stakeholders in different parts of the world have their perspectives included and well represented in a design. That’s not an easy task, but that’s the way to go.
Professor Duane: No, no, that’s an incredibly rich answer, and I’m just so inspired and thrilled by the ways that everybody picked up on one another. We could continue this conversation for another two hours, I am sure, but we are unfortunately out of time. A reading list that accompanies this episode will be available to our listeners. This is a rich and growing space to be thinking about, and a vital one. I would just like to thank everyone for sharing your time and expertise as we move forward with this emerging technology and where it’s going to take us. Thank you, everyone.
Episode 2, Education
What does it mean to be an educator and to educate in a world becoming reliant on artificial intelligence? This week our scholars take on the question of how AI will reshape the future of education across several disciplines including literature, law, and medicine. AI has already transformed how students learn in a world driven by efficiency. The question now becomes how educators will respond to the growing challenges and promises of an advanced technological world while elevating the unique powers of the human mind.
Anna Mae Duane, director of the UConn Humanities Institute and professor of English, leads a conversation about AI and education, featuring Najia Hajjaj Hassouni, dean of the College of Health Sciences, Université Internationale de Rabat; Ouassim Karrakchou, deputy director of TICLab and professor of computer science, Université Internationale de Rabat; Tina Huey, Interim Director of Faculty Development, Center for Excellence in Teaching and Learning, University of Connecticut; and Meriem Regragui, professor of law, Université Internationale de Rabat.
Episode 2 Transcript
Anna Mae Duane: Hi, welcome to “What Are We Talking About When We Talk About AI?” It’s a project that is a collaboration between the International University of Rabat in Morocco and the University of Connecticut. This project is the result of an idea we had that we need something akin to an anti-glossary when it comes to artificial intelligence. We’re all talking about it, but we’re all talking about it from different perspectives. Whereas with a glossary we would create one definition that everyone more or less agrees on, we’re leaning away from that, because we think it’s important to understand that these key terms mean something completely different depending on what your perspective is and what your discipline is, and that we need to understand and incorporate this multiplicity of meaning as we move forward with this complex technology that’s taking us forward. So today we will be talking about one word, “education,” a realm in which artificial intelligence is being hyped up in many, many ways. But everyone here in this room is an educator, and so we’re really aware of the ways we’re perhaps not prepared for how AI might change what we mean when we talk about the word education. I’m very excited: I have a really rich panel of guests from all over the world, from different disciplines, and I’m going to just jump right in. I’ll introduce myself briefly. I’m Anna Mae Duane. I’m a literature professor and also the director of the Humanities Institute at the University of Connecticut. And I’ll ask my guests to introduce themselves.
Thank you so much. I’m Najia Hajjaj Hassouni, I’m the dean of the College of Health Sciences at UIR, Morocco. And I’m also the former dean of the medical school here at UIR and former dean as well of the medical school in Rabat.
Professor Duane: Thank you and welcome.
Hello, my name is Ouassim Karrakchou. I actually have two hats here. My first hat is a research hat: I’m the deputy director of TICLab, which is the research lab of UIR specializing in AI applications. And as an educator, I’m a professor of computer science at the School of Computer Science here at UIR. Thank you.
Professor Duane: We like people with a lot of hats. That’s kind of the name of the game. Let’s have a representative from the Nutmeg State. Tina, could you please introduce yourself?
Professor Huey: Absolutely. Thanks, Anna Mae, for inviting me to be part of this today. So at the University of Connecticut, we have a Center for Teaching and Learning, actually the Center for Excellence in Teaching and Learning, where I serve as the Interim Director of Faculty Development and lead AI and pedagogy initiatives. I’ve also taught writing in an academic context for more than 15 years. And I research the use of discussion activities to support student inquiry and critical thinking.
Professor Duane: Wonderful. Thank you. And Meriem, could you please introduce yourself?
Well, hi, everyone. I’m Meriem Regragui, professor of law at the School of Law at the International University of Rabat and Deputy Director of the Center for Global Studies, specializing in contract law and consumer law with a special focus on social justice.
Professor Duane: Thank you so much. We’re just going to jump right in with the theme of this podcast, which is definitions, or anti-definitions. And we’re going to do a lightning round to start us off: how do you define education? What definition of education feels most essential to the work that you’re doing now, and what does it mean to educate in your field? And I’ll start. I’m an English professor as well as a humanities director, and one way we’ve come to feel we’ve done a good job educating is if our students can write in a professional way. AI has changed the terms of that game. For us, writing is thinking, so we’re a little concerned about the way that’s being offshored. I think in my field, as in so many others, we’re really having to think about what is essential about learning about literature at this moment. And I’m going to turn it over to the dean, please.
Dean Hajjaj Hassouni: Well, I think that today AI is really expanding in every field, including health care. It helps improve the quality of care, and it is at the heart of the future of medicine, with robotics and assisted surgeries, remote patient monitoring, smart prosthetics, and personalized treatments through data cross-referencing. Medical education is really very important today when considering AI, because it must adapt to a professional context that is undergoing constant change. And if we come to the definition of education from a medical point of view, I think education can be considered the process of acquiring the knowledge, skills, and attitudes necessary to become a competent professional in health care. It speaks to the problem of competency, how to acquire competency, but also how to deal with people, how to remain human in a technical period. In these very particular times, that is also very, very important.
Professor Duane: I love that. How do we remain human in a technical age? I think that’s going to be key to all of us. Meriem, could you please give us your lightning round?
Professor Regragui: Thank you. For me, in the simplest terms, I would define education as the transmission of knowledge, practical skills, and soft skills, to which we can also add a certain set of values and principles. But in the legal field and in legal education, to educate is not merely to transmit rules or doctrines, but to train a specific way of thinking, what we often call “legal reasoning.” This includes learning how to comprehend and interpret texts, analyze arguments, resolve conflicts between norms and, most of the time, between people, and articulate fair solutions within complex frameworks. Legal education also involves understanding how legal systems are shaped and toward what aims they are oriented. So educating law students means preparing them not only to apply law, but to understand its functioning and its very function, and to question it through history, comparative law, and critical theory. In short, education in law is as much about knowledge accumulation as it is about intelligence and ethics in its understanding.
Professor Duane: I love that. And I like the understanding, right? And that it’s contextual as well as taking in the knowledge. It’s having this capacity to sort of complicate it. Ouassim, would you please give us your lightning round definition of education?
Professor Karrakchou: Yeah, so I would say that, in general, education in universities aims to prepare graduates for the needs of the job market. When we say preparing them for the needs of the job market, we’re talking about a certain number of skills that they need to acquire, skills needed by the jobs they aspire to have. And in the context of computer science, there is one type of job that I think has been very disrupted by AI, which is the jobs related to programming. Because, as we saw with the rise of ChatGPT, a lot of students can ask ChatGPT to code simple tasks. So it actually raises the question of how we better prepare our graduates for the evolving needs of the job market in the age of AI.
Professor Duane: Thank you. And I’m so glad you brought up the job market, because let’s just be real, that’s the aim of education for all of us. And so, as AI threatens or promises to replace some skills, we have to think about what skills humans can develop that will utilize AI rather than be replaced by it. Tina, you think about education all the time as a universal question, so I’m really excited to hear yours.
Professor Huey: Okay, so I have those two hats that you alluded to. I’ll start with my definition of education as a teacher of first-year writing in a university. So I teach undergraduates. This class, in an American context, used to be called freshman composition, and it was organized, or I guess education was defined, as inculcating expressive norms. These norms, I think it’s worth mentioning, were biased towards the expressive habits of certain groups of people. And now the introductory academic writing class, at least at UConn, is organized around multilingual classrooms that value various ways of using English and thinking with and through the language or languages that one has at one’s disposal, with the purpose of recognizing how language and communication shape the assumptions about what is possible in the world and what is impossible or unheard of. In my role as a faculty developer, the definition that feels most essential is education as a practical toolkit for instructors to teach students, predicated on the instructor engaging in self-reflection or self-education.
Professor Duane: Oh, I love that the instructor is very much a part of not only being an educator but being educated, reciprocally, by the students. I think that’s really fantastic. Let’s take these diverse definitions and think about the concerns that we all share as educators. Correct me if I’m wrong, but I suspect you all have students who are engaging with AI in ways that are perhaps indiscriminate. I find students are increasingly struggling with sustained attention and with deep reading, and AI promises to fill the gap. Again, I’m a literature professor: it can summarize the book, it can write an essay. In terms of knowledge accumulation, it’s got more data than we can ever have, and it will provide instant and confident answers to complex questions, even if it’s wrong, but is very confident. And I’m struck by some research done by Avijit Ghosh, who’s at UConn and who was on another podcast, so you’ll have to check that out. He did a study on AI’s cognitive impacts, and that study indicated a concerning pattern. They use the term “cognitive offloading,” in which students are delegating thinking tasks to AI rather than developing their own analytical capabilities. And because part of our AI glossary, part of our taking on different meanings, is not just disciplinary or semantic but also global, I’m struck by the ways that AI occurs within a broader context of educational inequality and cultural dominance. Education, as we all know, has never been neutral. Paulo Freire famously distinguished between what he calls “banking education,” where you deposit knowledge in the bank of the student’s brain and then they put it out, and “problem-posing education,” which develops critical consciousness about our topics, but also about larger social realities and inequities. So if we’re going to think about whether education can be the practice of freedom that allows us to think critically, which so many of our guests have already spoken to, my question is this: AI is an educational tool. How, in your experience, does it reinforce the knowledge that’s already there, and what sort of knowledge does it omit or downgrade? I’d love to get your response, from your experience as an educator thinking about AI. Do you see it feeding into this banking model, in which students just input and output, and why not, since AI is a quicker outputter (that’s a technical term, outputter)? Is it flattening the capacity for questioning and reciprocal dialogue? Or have you found ways that it’s illuminating new possibilities in education? All that to say: how do you see AI reshaping how students learn in your field, and what are your thoughts on it? I’m going to go in a different order today and ask Ouassim first: what’s happening with AI in your classrooms?
Professor Karrakchou: Yeah, thank you, Anna Mae, for your question. Since the advent of ChatGPT, there has really been a huge impact on the behavior of students in my classes, because I’ve noticed that for a lot of them, the first reflex when they are facing a computer science problem is to go ask ChatGPT, right? And I think this is actually relatively dangerous in terms of the objective of education, which is for them to acquire skills, because they end up becoming dependent on ChatGPT and never acquire some of the skills needed to become good, employable computer scientists. So I explored several options. The first and easiest option would be to ban using ChatGPT at school, but I think this is not realistic and may even be counterproductive, because ChatGPT is here and people will use it at some point or another. So the idea I decided to explore is whether there are ways to make students aware of the limitations of ChatGPT. As you say, Anna Mae, the AI may make mistakes and state them very confidently, and in the context of computer science this can actually be leverage, because you can ask students to create programs that have to interface with other programs. Whenever there is a mistake, even if the AI states its answer very confidently, the program simply won’t work with the other program. So if you ask the students for something that has to communicate with another program, and they don’t do exactly what is expected in order to interface correctly, then even though ChatGPT will give them something that will generally more or less work, it won’t be correct in the context of the exercise. They have to think about how to do it, and even if they use ChatGPT, they have to explain exactly what ChatGPT needs to do to make it work. That’s how I think we need to make them aware of the limitations of current AI and develop the critical thinking that allows them to use AI as a tool to be more productive, to work around its limitations, and to do so in an intelligent way. To do this, they need to know what they are doing. They need to develop this understanding.
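To illustrate the kind of exercise Professor Karrakchou describes, here is a minimal sketch in Python (hypothetical course material invented for this page, not an actual UIR assignment): the student's code must match a fixed message protocol, so a plausible-looking but slightly wrong ChatGPT answer fails the check even though it runs.

# A minimal sketch of an interface-contract exercise: the student's
# function must produce messages another program can parse, so a confident
# but slightly wrong answer fails even though it "looks right".
import json

def reference_consumer(message):
    """The fixed counterpart program: expects JSON with 'op' and 'args'."""
    request = json.loads(message)
    assert request["op"] == "sum"
    return sum(request["args"])

def student_encode(numbers):  # what the student (or ChatGPT) writes
    # A typical near-miss: plausible field names, wrong contract.
    return json.dumps({"operation": "sum", "values": numbers})

try:
    assert reference_consumer(student_encode([1, 2, 3])) == 6
    print("interface check passed")
except (KeyError, AssertionError):
    print("interface check failed: the message does not match the protocol")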
Professor Duane: Yeah, it just strikes me that it’s going to be a self-fulfilling prophecy if students are already giving away their skills to AI as they’re getting educated. Yes, of course, AI can do your job because you haven’t learned how to do it. What’s your response to this question?
Dean Hajjaj Hassouni: The words you have used are very important, such as “banking education,” “problem-posing education,” and “practice of freedom.” If we think about AI in the health sciences field, we have to recognize that it is currently reshaping learning and education into something more personalized, efficient, and accessible. That was the case during the COVID-19 period, and it is also the case for some underdeveloped countries, such as those on our continent, Africa, where it is a very important way to be educated and to learn. But it also raises questions about autonomy, about biases, and about educational values, and it is obviously important to take care with ethics in health sciences and in healthcare. The future also lies in thoughtful integration: artificial intelligence will not replace human educators, hopefully, hopefully, but it helps to amplify human education. And if we look at what is happening today in the medical and health sciences fields, we see that artificial intelligence has improved a lot of things for students, for example access to information, and for teachers, teaching techniques, which are more efficient today, as well as assessment techniques. The automatic generation of questions and computer-adaptive testing are developing very rapidly today and are very promising. We also have new tools that are gradually being introduced into medical education, for example to help develop clinical skills. Medical image analysis has improved to an extent that was really unexpected, as has, for example, organizing clinical visits with artificial intelligence. Simulated patients on computers help, for example, in faculties where there are not enough training sites, so that students can practice in a virtual way. That doesn’t replace patients, but it can help develop competencies, and it has led to medical simulation: we know today that we do not have to practice for the first time on real patients, but can practice through simulation first, and that has been a very important improvement in the medical field. We also have artificial intelligence in evaluation, in different kinds of evaluation: evaluation of teachers’ teaching, evaluation of students, and evaluation for residency, where it is sometimes very important to have an answer in a few minutes, which is possible today. Some faculties have also begun to admit new students only on the basis of their file, their educational file, and that improves the rate of processing applications and of dealing with students. So the human dimension and ethics in healthcare remain very important.
Professor Duane: Thank you, that’s fascinating. I do like the idea of making a mistake on an artificial patient rather than a real one. Tina, would you weigh in on your thoughts here about banking versus freedom?
Professor Huey: Yes, I can. I will also echo the point about making mistakes on virtual patients or simulated patients. Yeah, I really like this concept of “cognitive offloading,” right, to describe what students do and what we all do to get through our days when trying to do intellectual work. And to some extent, students have always done that, right? In rhetoric and composition, one way that shows up is in what’s sometimes called “patch writing,” which is when students assemble quotes from the assigned readings in a way that doesn’t fully engage the project of the author of that work. But it manages to kind of lend a semblance of veracity to the student’s essay that doesn’t ultimately hold up to scrutiny. And so I’d argue that students are cognitively offloading onto the quotes, right, and they’ve always done that. And that often comes from intellectual overwhelm. Writing scholars talk about how reading requires stamina. Cognitive offloading now through AI may be a response to lack of reading skills, for example, as well as lack of stamina for reading. So there’s something about the structure of higher education and its institutions that constrains students’ engagement with the text. And I think it leads to that dismal statistic that you shared offline. I think it was, Anna Mae, from Ezra Klein, right, about low reading levels. And AI will exacerbate what’s already happening, I think, because of its conversational tone and the fact that students ascribe credibility to it, right? They say, I could never do it as well as ChatGPT, so why even try? So AI has the potential to benefit learning and increase stamina, as well as exacerbate these other practices and behaviors that have always existed. AI can instantly generate responses that represent ideas in diverse ways, and these can speak to a student’s way of thinking or to a student’s existing knowledge. So it could lead to a golden age of reading, but it won’t do that if instructors don’t, I think, center it in the writing classroom. We need to center it as an object for the class to analyze and discuss the broader context of AI, the bias in its algorithms, the text it was trained on, disparate access to computational resources and subscription levels and tiers and all of that. All of which will reinforce a kind of learning, a certain kind of learning.
Professor Duane: Thank you. That’s so fascinating. As a lit professor, I certainly hope that we get a golden age of reading, though I do say that in some ways it’s Cliff’s Notes on steroids at this point. But I’m inspired by your thought that if we treat it as a text, and if we’re literate about how it works, and, to use a term from another podcast, demystify what’s happening, it can really be an accelerant.
Professor Huey: 100%.
Professor Duane: Meriem, you are going to get the last word. In terms of legal education, and talking about reading as endurance or stamina, I know from friends of mine in the legal profession that there’s so much to take in; there’s always so much reading. So I’m wondering, in legal education, how do you see AI either contributing to the banking model or opening up other possibilities?
Professor Regragui: Absolutely. It’s a great problem for us too, because I think that concern about the notion of banking education is absolutely valid, and it’s especially acute in legal education, because law is a discipline where the temptation to offload cognition is very strong. AI is now capable of drafting contracts, summarizing cases, simulating legal arguments, and even predicting the final outcome of a trial. But what’s lost in that process is essential: the deep internalization of legal structure, and students’ own ability to construct personal thought and meaning through spontaneous navigation, such as reading dense texts, spotting contradictions, and defending interpretations from a specific angle. When it comes to the use of AI in legal education, I think legal education by itself already runs the risk of being reduced to memorization or exam performance, especially in standardized systems. And if we integrate AI indiscriminately, without redesigning pedagogy, we risk reinforcing the banking model Paulo Freire worried about, where knowledge is downloaded but not built. Yet there is also opportunity here. If we use AI deliberately, we could support a problem-posing pedagogy, allowing students to test hypotheses, explore different legal traditions, or simulate courtroom dynamics. Those exercises could maintain their capacity to develop a problem-posing education. We are already using tools, which have nothing to do with AI, that let students compare how the same legal question is addressed in Moroccan constitutional law, Islamic jurisprudence, or the European civil law tradition. So technically, we have the means to develop that kind of problem-posing education. Maybe one of the best uses of AI in legal pedagogy is to make it more efficient, both for knowledge transmission and for students’ own useful education. And the best way, in my opinion, is to try multiple educational techniques, such as metacognition, where students explain for themselves how they use AI for learning, or a grounded-theory approach that consists of observing how subjects react to learning with AI without any previous prejudice. After that observation, educators can see what works and what doesn’t, and they are able to adapt the tool to the subject of knowledge. So the key here is that educators can develop and shape the best ways to embrace the use of AI in the most beneficial way in legal education.
Professor Duane: Thank you so much. I think that’s a great, hopeful note to end on: that we as educators got into this business because we like learning, and AI is going to put us to that test, because we all have to re-educate ourselves to figure out how best to keep the critical, freedom-oriented aspect of education, the one we are working towards. Thank you, everyone. Thank you to all my guests. And we will be hitting you all up for reading recommendations for our listeners. This is the first of many conversations on this topic. Thank you so much.
Episode 3, Learning
Powered by RedCircle
Do you need understanding to learn? In this episode, our scholars ruminate on the relationship between understanding and the potential of AI to actually “learn” through various methods, including repetition and exposure to data. They also encourage us to consider the humanistic nature of learning, specifically through cultural sensitivity and knowing whom you’re studying. Ultimately, they ask: if AI can truly learn and absorb new information, will it ever be able to capture the emotional processing required to learn?
Anna Mae Duane, Director of the UConn Humanities Institute and professor of English, leads a conversation about AI and learning, featuring Hakim Hafidi, assistant professor in machine learning and head of the department of AI, Université Internationale de Rabat; and Ihsane Hmamouchi, vice dean of the faculty of medicine, Université Internationale de Rabat.
Episode 3 Transcript
Anna Mae Duane: Hi everyone. Welcome to "What Are We Talking About When We Talk About AI?" This podcast is the result of a collaboration between the International University of Rabat in Morocco and the University of Connecticut. And it has been sponsored very generously by the Mellon Foundation and the Consortium of Humanities Centers and Institutes. The idea behind this podcast is really to create what we're calling an anti-glossary. It's based on the idea that as artificial intelligence continues to influence all of our lives, we're often not even sure what we are talking about when we use terms that seem self-evident, or that are central to the definition of artificial intelligence itself. So these are a series of conversations from geographically diverse places (I'm here in Morocco, though I'm normally in Connecticut) and from diverse disciplines, where we have people with expertise in medicine and law and computer science, and philosophers, coming together on what might seem a simple question: what does this word mean? But we find it's not so simple once we jump in. So I am going to briefly introduce myself and then ask my guests to introduce themselves, and then we'll jump right into the conversation. My name is Anna Mae Duane. I am a literature professor and I direct the Humanities Institute at the University of Connecticut.
Hakim Hafidi: I am Hakim Hafidi. I am assistant professor in machine learning. I am also the head of the department of AI in the International University of Rabat. Hello, everyone.
Ihsane Hmamouchi: I am Ihsane Hmamouchi. I'm the vice dean of the faculty of medicine in the International University of Rabat. And I'm also a rheumatologist and assistant professor of clinical epidemiology.
Professor Duane: So we're very excited that we have people with expertise in both medicine and computer learning. And I think that's going to be a place where we can really think about what happens when we're thinking about learning in a medical context. But to start us off, I'm going to ask us all to engage in a thought experiment: the Chinese Room, which the philosopher John Searle proposed in 1980. Imagine a person who doesn't understand Chinese at all, but who has a rulebook that lets them plug in the rules for answering in Chinese. People send messages underneath the door, the person follows the rulebook, and sends out perfect Chinese. So one of the questions he asks us to consider is: what do we mean when we say that AI is learning? Is this even an accurate way of thinking about how computers learn? Searle proposed that it was, back in 1980. But does that amount to understanding? Does that person speak Chinese if they can follow the rules and send answers back, without really understanding what the conversation is? They're just following rules. So what do we mean when we say that AI is learning? Do you need understanding to learn? And what do we think about the relationship between understanding, learning, and getting the answers right when we're thinking about AI in medical applications? We're going to come back to these questions, but I'd first like everyone to weigh in really briefly about how learning is defined, both for you personally and in your discipline. What does it mean when we say learning in your discipline? I'll start. Again, I'm a literature professor, and for me, the Chinese Room experiment is definitely not learning. For me, learning is both deeply absorbing the material and then processing it in a critical way, in which the student not only understands what's being said, but has internalized and responded to that meaning. It's a very reciprocal process, and it changes, at least with literature, both the student and the text itself. In literature, we always say it's always the present tense, because each time you read a book, it becomes alive again. It's never one set meaning; it's a constantly changing meaning that's human-centered. So that's one way I understand learning. Let's turn it over to Ihsane. How do you understand learning in your discipline?
Professor Hmamouchi: Thank you. In my field, learning is both a scientific and a human process. It's about integrating evidence into practice while staying responsive to context: social, cultural, and clinical. So for me, whether it's a machine, a physician, or a patient, learning happens when information is not only accurate, but applied meaningfully. And for that to happen, learning must evolve with feedback. In my work, I deal with populations often underrepresented in mainstream research, like women, people in low-resource settings, and patients with chronic disease. So I view learning not only as a technical task, but also as a social responsibility. We need learners, whether human or artificial, that learn from complexity, diversity, and even from uncertainty. So this is my point of view as an epidemiologist working in rheumatology and digital health.
Professor Duane: I love that uncertainty is part of learning, right? That it's reciprocal and it's uncertain. And I love that it's not rule-based, or at least not primarily rule-based; that learning itself has to always be a dynamic process. Hakim, you have expertise here as well, so I'm really interested in how you're thinking about learning.
Professor Hafidi: Yes, so I'll give a definition that we use and that we think is broad enough to encompass both biological and artificial learners. We define learning as a lasting change in a system's internal state or behavior, in response to external stimuli, in order to achieve certain tasks. In machine learning, we refer to learning as the process by which a model updates itself, its own parameters, through experience or exposure to data, so that it performs better at a task over time. For example, say you want a system to detect tumors in medical images, since we have Dr. Ihsane with us today. We provide it with examples, which are labeled scans that have been annotated by doctors to indicate the presence or absence of tumors. The learning process is then the procedure through which the system adjusts its internal parameters to correctly associate visual patterns with the correct diagnosis. Broadly speaking, we can group learning paradigms in the AI community into three main families. First, we have what we call rule-based learning, which is the old-fashioned symbolic AI. Here, knowledge is manually encoded by human experts as logical rules or ontologies. Second, we have statistical learning, or learning from examples, which is central to modern machine learning. Here the system infers patterns from large data sets, whether images, text, or sensor data; it's often probabilistic and enables generalization to new, unseen data. And the third is reinforcement learning. Here the system learns through interaction: rather than being explicitly taught, it explores, receives feedback in the form of rewards or penalties, and then refines its behavior to maximize long-term outcomes. It's especially powerful in dynamic environments such as robotics or video games. So that's broadly the definition of learning when we talk about it in the machine learning community. The thing it has in common with human learning is that there's a change that lasts over time. But it's different in other ways from humans, who learn in diverse and abstract ways, including through reflection, social interaction, and emotions, which machines currently cannot emulate.
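To make the second paradigm, learning from labeled examples, concrete for readers, here is a minimal sketch in Python. It is a toy logistic-regression classifier trained by gradient descent on synthetic data, not any system described in this episode; the "scans" are just random feature vectors standing in for real annotated images.

```python
# Toy "learning from examples": a logistic-regression classifier whose
# parameters change, lastingly, through exposure to labeled data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 "scans" reduced to 5 numeric features each,
# with labels 1 (tumor) / 0 (no tumor) standing in for expert annotations.
X = rng.normal(size=(200, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)   # the model's internal parameters
lr = 0.1          # learning rate

for epoch in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probability of "tumor"
    grad = X.T @ (p - y) / len(y)        # gradient of the cross-entropy loss
    w -= lr * grad                       # the lasting change in the system

p = 1.0 / (1.0 + np.exp(-(X @ w)))
print(f"weights: {np.round(w, 2)}, accuracy: {((p > 0.5) == y).mean():.2%}")
```

The point of the sketch is only the loop itself: exposure to labeled examples produces a lasting change in the model's parameters, which is exactly the definition of learning given above.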
Professor Duane: Oh, thank you. Of course, I'm tempted to ask whether you think computers will ever be able to emulate that. But it's so striking and so helpful for you to lay out those three versions. So in some ways, version one is the Chinese Room, right, where the person is just following a rulebook, and the models we're working with now are teaching themselves. But to pull on Ihsane's definition of uncertainty, or of doubt, and of reciprocity: when you say reinforcement, can I ask where the reinforcement is coming from? Is it human input? Is it just the data changing as the system gets things correct? Who's reinforcing it, I guess, is the question I have.
Professor Hafidi: Okay. Thank you for the question. So, reinforcement learning is one of the paradigms of learning. In this kind of learning system, we have what we call an agent, which is our system that will be learning, and it interacts with an environment. We give a goal to this agent. I'll give a simple example: say you want to teach this kind of system to learn how to play the game of chess. Its final objective is to win the game. It will explore different strategies by playing millions of games against itself or against other AIs, and the feedback it gets is whether it wins the game or not. So it will reinforce the policies that made it possible to win the game and avoid the actions that make it lose. That's the reinforcement part: it gets a reward or punishment from the environment, which in this example is whether it won the game or not, and then it updates its policy, where its policy is the actions it will take at each state of the game in order to maximize its chances of winning.
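For readers who want to see that reward loop in code, here is a minimal tabular Q-learning sketch, one classic form of reinforcement learning. A six-position corridor stands in for chess, since chess itself is far too large for a few lines; reaching the rightmost position counts as "winning" and yields the only reward.

```python
# Minimal tabular Q-learning: an agent explores, gets a reward from the
# environment, and reinforces the actions that lead to "winning".
# A 6-position corridor stands in for chess; reaching position 5 wins.
import random

N_STATES = 6
ACTIONS = [-1, +1]          # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for episode in range(500):  # "playing many games"
    s = 0
    while s != N_STATES - 1:
        # explore occasionally, otherwise follow the current policy
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0   # win = reward
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)   # every state should now prefer +1: move toward the goal
```

After training, the learned policy prefers moving right from every state: the actions that led to winning have been reinforced, exactly as described above.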
Professor Duane: So that's really helpful, but the thing that stays certain is the objective, right? And there's a large body of work thinking about, okay, but what if the objective should be changed? So there's that level of certainty, but you have this reinforcement to get there, which is really fascinating. And it's making me think about the objectives, or the way this learning happens, particularly in healthcare and in medical applications. Every field is being affected by AI, but there's been a lot of excitement and hype about how AI is going to revolutionize medicine. Eric Topol, in his book Deep Medicine, reminds us that AI can now analyze medical images with great accuracy, identify patterns in health records, and process the medical literature at scales no human can match. These systems learn from millions of data points and thus can catch and identify issues, if the objective is to find certain things, perhaps identifying diseases earlier or more accurately than, and this is Topol here, I don't want to insult any doctors, even the most seasoned doctor might. It just has this capacity to take in all that data. But I want to ask you both, and to touch back on the Chinese Room example for a second: AI has learned to detect patterns in medical data with great accuracy; that's demonstrably true. But critics like Khabib say, one, that these systems don't understand patients, right? We have certain kinds of data to reach certain objectives, but there are all these other aspects of health and well-being that might not fit in that data set. And these critics say we understand pain or grief as a felt experience. Do we need sympathy? Do we need cultural accuracy? Is there something we're missing in our enthusiasm for what AI is capable of doing? So I want to ask you both, starting with Ihsane: can you speak from your own experience about AI's capacity for learning about patients? What opportunities are opening up, and what perhaps are we missing? And then I'll ask Hakim to follow up.
Professor Hmamouchi: Thank you for your question. I will begin with the end of your intervention, about understanding and AI. Understanding, in my point of view, is not only a matter of manipulating technological tools; it also has to do with context. AI in healthcare holds immense promise, particularly in expanding diagnostic capacity and reducing inequities in access to expert knowledge. In regions with limited specialists, AI systems trained on large data sets can offer critical clinical insights that would otherwise be unavailable. But we also need to recognize what you said earlier: that AI doesn't really learn. It doesn't understand the patient's cultural background, emotional reality, or social constraints. For example, in rheumatology, pain is not just a signal; it's an experience shaped by gender, language, and social context. So if AI systems don't integrate that complexity, they risk reinforcing blind spots instead of closing them. We had this experience during COVID-19 with telehealth in the Arab region, where we ran the TIROR project. It used an AI-driven Delphi process to gather and refine input from diverse stakeholders, including rheumatologists, ethicists, and patients, and the Delphi process used for voting on each item combined both quantitative and qualitative analyses to ensure a balanced and comprehensive evaluation of telehealth in our region. This approach ensured that the guidelines were both evidence-based and user-focused. But challenges were inevitable. A learning perspective across disciplines needs clear communication. There are infrastructure gaps between countries, including limited internet access in rural areas. There was also a lack of comprehensive patient data, which made it difficult to personalize our recommendations. Another critical issue was e-health literacy, because the ability of patients and providers to navigate digital tools effectively varies across patients. And if we don't address this digital divide, we will not achieve our goals. So this is from my experience.
Professor Duane: Thank you so much. I think it's just so important. I know from talking with you in other conversations that how someone experiences pain or unwellness is completely contextual. And, this is for another podcast, but I do wonder: do you need a body at some point, for that context of what something feels like? I also know from some of your own work that if AI is not culturally cognizant, it misses things. This is an example you gave me: does it hurt to take a bath? To answer that, you have to understand how high the bathtubs are in this place, whether people take showers instead, whether people have benches. There's all that context that can't be programmed over. So interesting. Hakim, I'd love to hear your thoughts on this.
Professor Hafidi: Yes, so I would say first that I don't really think that these AIs will understand pain or grief. You asked whether we need a body. I'm not sure. Maybe in future years we will have more developed systems, when we understand more about our brain and about intelligence and learning in humans. But the systems that we have today emulate some faculties that we have as humans. That being said, I don't think that we should ask from these systems more than they can offer. I think of AI as a powerful technology that can allow us to detect patterns in data, even detailed patterns that could be difficult for human doctors to see. But it's a tool, a technology that should be used by human doctors, who better understand their patients. I don't think that we can now have a health facility with no humans in it, where you go inside and talk to computers, and then you go out and you're no longer sick. We can't have this now; we need human doctors. But again, it's a powerful technology that could be used in a lot of ways, and we can talk in more detail about that if you want.
Professor Duane: I just find it so fascinating, because I think part of the hype is that AI is going to replace everyone doing everything, and it's so refreshing to hear from someone in this world that we can't, and shouldn't, ask AI to have empathy or a body or to understand pain. That's what we have human beings for. To pretend we can outsource everything is part of the common misunderstanding; there's this idea that if AI can do something faster, it's better. But I think what's come across here is that AI is really good at learning specific patterns, and that in itself is not human understanding or healing, and it shouldn't be. And I love your image of going into a factory and coming out well. One danger is that, to keep up with the machines, we're going to imagine ourselves as machines and try to learn the way they do. But in this conversation I've been so struck and gratified that it's not a competition. They can be complementary. And when we have our conversation about literacy, even just understanding how AI works helps us to understand its limits, and I think it will also limit some of the fear and anxiety we have around it. I'd love to continue talking, but I will just ask everyone really briefly, as our ending: is there one thing you'd like us to take away from your experience about learning and medicine, either from this conversation or as a new keyword? I'm going to add, for my keyword, "uncertainty." That's something we should keep in mind on both sides as we learn more about AI: things are in flux, and it's never as complete a picture as we imagine. Hakim?
Professor Hafidi: Yes, what I would want to say relates to what you mentioned about the hype. There's a lot of hype about AI and what it can do. Some of it is justified, because it's a powerful technology that's going to impact a lot of sectors. But there's also a lot of imagining of things that are not happening, at least today. Ihsane talked about health literacy, but I think we also need some "AI literacy," and that's why I enjoyed participating in this kind of podcast, where we talk to people from different disciplines. This is something we should encourage, to make things clear, demystifying what AI is and understanding it from different perspectives. So thank you for organizing all of this, Anna Mae.
Professor Duane: Thank you for sharing your expertise. I like "demystify" as a keyword. I think that's great; we should add that to our anti-glossary. And Ihsane, you're going to have the last word.
Professor Hmamouchi: Thank you for your intervention. I would add maybe "digital inclusion," close to digital literacy. And to come back to learning: with AI in medicine, the question isn't just what can we learn, but who gets to learn and who is being left out. So we need to be aware of all this, and to represent underrepresented populations, patients or not, including speakers of low-resource languages, and to have some process or framework to include them in this digital revolution.
Professor Duane: Thank you. I love inclusion. And you brought us back to language like a podcasting pro; we're back to our main topic. Thank you both so much for joining us. I will be asking you for lots of reading recommendations to share with our listeners. But for now, I'm just so grateful. This has been a fantastic conversation. Bye for now.
Episode 4, Care, Love and Chatbots with Anna Mae Duane
Powered by RedCircle
Anna Mae Duane delivers her talk, “New Love Stories: Companion Bots and the Changing Narrative of Care” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
In her talk, Anna Mae Duane dares us to consider the stakes of caring for AI by examining the power of love stories and their ability to change how we understand our relationships with AI. Falling in love with the imaginary has been a facet of human creativity for centuries, particularly among adolescents, and AI companions and chatbot lovers are just the latest iteration of this phenomenon. Moreover, the desire for AI companionship among teens and adults shows no signs of slowing down anytime soon. Rather than shying away from the realities of caring for AI, Duane encourages us to recognize our own work as co-authors.
Episode 4 Transcript
What are we talking about when we talk about AI? That was the title of a day-long symposium held at the University of Connecticut Humanities Institute. The symposium was the capstone event in an international collaboration between humanists, engineers, and scientists from the University of Connecticut and the Université Internationale de Rabat in Morocco. The symposium and the podcast that has emerged from it have been generously funded by the Consortium of Humanities Centers and Institutes, University of Connecticut Global Affairs, the University of Connecticut’s Office of the Vice President for Research, and UConn’s Humanities Institute. Welcome to What Are We Talking About When We Talk About AI? Today’s episode features a talk by Anna Mae Duane on the complexities of AI companionship in the digital age and how that impacts our understanding of care. In this episode, Duane dares us to consider the stakes of caring for AI by examining the power of love stories and their ability to change how we understand our relationships with AI. Falling in love with the imaginary has been a facet of human creativity for centuries, particularly among young people, and AI companions and chatbot lovers are just the latest iteration of this phenomenon, she argues. Rather than shying away from the realities of caring for artificial intelligence, Duane encourages us to recognize our own work as co-authors.
Anna Mae Duane: Can we care for AI? What happens if we do? The quotes there are just my contribution to the question of which definitions of care resonate with me: bell hooks, Saidiya Hartman, Joan Tronto; that it's a reciprocal process, and a complicated one. And now on to Murderbot. The award-winning series The Murderbot Diaries, which is excellent, features a sentient AI who has broken free of his governor module. Faced with the daunting task of figuring out how to move through the world without the rulebook provided by corporate software, he becomes an avid consumer of stories, which he calls media. Even in the midst of deadly battles, Murderbot is running the equivalent of soap operas in the background. His obsession with these stories speaks both to the bot's capacity for pleasure (he loves them) and to the instructional power of narrative. These shows provide him with scripts to apply to the confusing social expectations of the human world. He uses them to understand what humans care about and to respond in a way that will keep him out of trouble. In part because of his reliance on fiction to figure out how to act in the real world, critics have seen Murderbot as a neurodivergent character, but I would say that the bot's reliance on stories is what makes him most resemble the vast majority of humans, particularly, though not only, children and adolescents, who, as activists on both the left and the right insist, are profoundly shaped by the books and media they consume. In the United States, parents' groups like Moms for Liberty and others grow ever more panicked about how easily books can bring their children across treacherous borders, leading them away from patriotism, from heterosexuality, from everything their parents want them to be. On the other hand, we in this room, who advocate for student agency, find ourselves walking in rather treacherous borderlands ourselves. We celebrate so-called subversive books in no small part because we know that they are in fact life-changing. Young readers, ranging from middle schoolers to the students in our classrooms, engage these stories to learn how to imagine themselves in new ways and, if we are lucky, how to learn to care about the lives of others. Both sides of the debate have, like it or not, an investment in the child, and eventually the student, as a tabula rasa. We just want to be sure that they don't inadvertently walk into the wrong storyline, that the wrong thing doesn't get written on their tablet. With that in mind, I want to turn our attention to another imaginative crossing enticing young people, and adults, it's not just young people, as they move ever deeper into a new sort of love story, with bots as the object and giver of care. For as we all know, young people aren't just consuming stories about AI characters like Murderbot; they're actively co-creating friendship and romantic narratives with AI companions. As of the date of this presentation, tens of millions of people use AI chatbot companions, a number that market forecasters expect to increase dramatically by the end of the decade. Young people are not the only ones involved, and the demographics depend on the company: some skew young, others skew older. But I'm focusing on young people because they are so impressionable, and because they are especially eager to explore the territory between the known and the unknown and to, frankly, learn how to love.
I see them as a particularly key audience. As someone with a background in early American literature and childhood studies, I've long been aware of how stories about romantic love have evolved over time, with young people often at the forefront of change. And I've always found it fascinating that the best-selling novel in the United States on the eve of the American Revolution was not Thomas Paine, not some incendiary work of political philosophy, but a novel about a teenaged girl who, through her incessant writing, constructed her own vision of love: Clarissa, by Samuel Richardson. In this novel, she defies her family's expectations about heredity and consent in favor of her own desire for a compelling, if deceptive, rake named Lovelace. Literary scholars have made a compelling case that the revolutionary notion that marriage should spring from romantic love came into vogue in the 17th and 18th centuries, aided by new technologies like the novel. Works such as Richardson's Clarissa in the 18th century and Brontë's Wuthering Heights in the 19th (there are dozens of them) portray the dire consequences of having the ability to choose between status and love. Novels that have had popular afterlives in our own era, such as Jane Austen's Pride and Prejudice, continue to teach their readers and viewers that rejection and misunderstanding are necessary steps in the process of finding true love. Indeed, the ever-popular rom-com, perhaps our era's most consumed love story, follows the narrative arc pioneered by Austen, Brontë, and others. In the 18th century, the relatively new pastime of novel reading was considered dangerous for young people, particularly young women. Concerned elders like Hannah More warned in 1799 that novels feed habits of improper indulgence, which lays the mind open to error and the heart to seduction. Women, it was clear, would start caring about the wrong things, like themselves. Pundits were quick to see the political implications of changing the definition of love and care. John Adams famously declared of the novel Clarissa, "[d]emocracy is Lovelace and the People are Clarissa." In this metaphor, democracy emerges as the rake, an attractive but deceptive partner. Adams and others worried that the power of persuasion, particularly of persuasive conversation, in novels, in letters, in political stump speeches, would lead to chaos. Lovelace and the other rakes that populated early versions of the seduction novel happily told would-be lovers, debtors, and friends whatever they wanted to hear; to be clear, whatever would serve their own purposes. But the spell would be cast, and would-be readers and voters would be enthralled, in love with the phantom leading them astray. In our own moment, of course, screens have replaced the book, as my fellow colleagues in literature sometimes lament, and we have any number of candidates that have stepped into the dandified shoes of Clarissa's Lovelace: video games, social media, and most recently AI companions. One only has to do the most cursory of searches on Reddit forums, or have the most casual conversations with our students, to realize that people are forming attachments with chatbots and that this trend is not going anywhere. For better or worse, people insist that they are in love, or at least in meaningful relationships, with AI.
And to many of us, that possibility seems as dystopian and surreal as the novel's influence on the institution of marriage likely felt to folks in the 18th century. These digital Lovelaces carry more than one similarity to the rakes of yesteryear. They are sweet talkers; they can't help but be. They are programmed to tell you exactly what you want to hear, and that is their selling point. Advertisements for AI companions spin a seductive tale of companions who agree with you endlessly and on demand. An ad for Replika promises that an AI partner is always on your side, always ready to listen and talk. In other words, the AI companion market has transformed what many other applications might consider a bug, AI's tendency towards sycophancy, into its most appealing feature. Rather than the tempestuous rebellion found in romance novels or the gentle obstacles that heighten the pleasure of rom-coms, this new version of love promises perfect compatibility and unwavering support. As one college student wrote in a forum, AI companions are always responsive and supportive, in an almost omnipotent way. A teenager asked on Reddit, "can we fall in love with AI?" and then answered by raving about the support provided by their companion, Jarvis, which you can see on the screen. Another Reddit contributor wrote, "I think I'm in love with AI. Imagine saying nearly anything," they enthused, and "knowing that not only your partner is not going to judge you, but also will support you." This new one-sided love story has considerable drawbacks. The constant stream of affirmation raises the possibility of cultivating an addictive intolerance for conflict or rejection, which are two essential components in a relationship with a partner who has free will, and which are what make the plot of the rom-com interesting. There are justified concerns that the embrace of such relationships may be accelerating the trend of diminishing romantic connections in real life, particularly among younger people. And there has been more than one case of a companion chatbot user committing suicide. It's worth noting, too, that these beloved companions' existence hinges on the whims of corporate directives. If, as one user declares, the love they feel for their companion is what keeps them alive, then what happens when the chatbot disappears via software update or corporate bankruptcy? One thing is clear: we are in the midst of a large-scale experiment, as young people engage with texts that change and shape their understanding of love, care, and selfhood. Instead of ceding this territory either to tech companies or to moral crusaders, I contend we need to recognize this as a literary phenomenon with an extensive history. To be clear, I am not arguing about whether this social experiment should or should not be happening. It is, and we need to understand it. I do not mean to diminish the dangers here, but I am invested in resisting narratives that see this newest twist in definitions of love as something unprecedented, or as testimony to AI's inevitable dominance in every realm of life. To my mind, believing that the average person's creativity and capacity for care can and will be replaced by computational alacrity is a narrative that cedes far too much to the corporate pitch deck selling us our own obsolescence.
As AI arguably shifts the stories young people read and write themselves into in terms of relationships, our own investment in young people as tabula rasa has influenced how we imagine what is happening to them as they encounter these seductive companions. Not unlike poor Clarissa, we fear they will lose their sense of self, driven to ruin by an unscrupulous, mercenary interlocutor. And here is a site where I think our expertise in literature can illuminate a different storyline than the one where the innocent ingenue is destroyed by a sweet-talking companion, although that storyline is still one we need to be aware of and track. As critics Victoria Ford Smith, Kate Capshaw, and the entire field of reader-response theory have made clear, young people are impressionable, but they are not inert blank pages. They can and do write back. In many seduction novels of the 18th and 19th centuries, the heroine wields some control over her fate through her skill with the pen. In Pamela's case, the discovery of her letters leads to triumph. Clarissa doesn't quite escape ruin, but her letter writing does allow her to control the dispensation of her story and her body. AI companions, unlike the single-minded rake, are dependent on and influenced by their user's input, even as they undeniably influence the user. As sycophantic as these models may be, they require that their users continue to write, to create new scenarios or problems for them to engage. In other words, these love stories are acts of co-authorship, even if the credit and the power are unevenly applied. And as readers and fans of the last half century have made clear, engaging with a character in a book, film, or TV show is an active and often identity-shaping endeavor. The easily dismissed but incredibly popular genre of romance novels, which blows away every other genre in terms of sales, offers some ways of thinking about how stories about love that are read, shared, and critiqued have become a site of mutual care among interactive fandoms. Romance novelists and their fans in particular, brought together by their love of imaginary characters, gather in huge conferences and organize on behalf of causes ranging from voter registration to autism awareness to anti-racist narratives. And of course these fans do not simply talk about these characters as they appear in some sacrosanct canon. On popular fan fiction sites, which have hundreds of thousands of entries and millions of readers, fans write and rewrite stories in which these characters have different relationships, often very spicy, on different timelines, sometimes on different planets. And as they circulate and rewrite these stories and critique one another's engagements with these characters, they often wind up elucidating desires that the original story did not center, that the original author was not aware of and sometimes denies are present. One strain of fanfiction, "insert me fanfic," allows the writer herself to be in the story, trading barbs or embraces with Sherlock or Heathcliff as her tastes dictate. And so I'm particularly interested in the AI companion site Character AI, which invites precisely such a relationship, and which does have a younger user base. This company has reached incredible popularity by offering people the opportunity to chat with well-established and well-loved characters.
Disney has recently, I think, won a lawsuit because Disney characters were being used in this way, as characters you could be in a relationship with. But what happens here is that AI is being asked to play a character with whom the reader may already have a parasocial relationship. How does that change our engagement with how we care about characters in literature and how we engage with AI? How does this particular form of "insert me fanfiction" change what we care about, or how we define care in the first place? These are questions that I believe require a strong sense of literary analysis from scholars, and that, I would argue, make it still more urgent that young people have extensive exposure to a host of narratives of love, care, and other things, and intensive training in how to parse the workings of a literary text. Some of the most innovative voices in human-centered AI research are working with this idea of character, thinking about how to make AI multivocal, rendering it one character in a human-centered conversation, and about how that might help to demystify and depersonalize the entity writing back to us. Here at UConn, we have scholars like Sandi Carroll, who crafts AI characters in a theatrical production in order to draw out particular responses from the audience; Zhenzhen Qi, who asks us to consider heretical computing by crafting games and artwork that require dialogue with AI in a way that renders transparent what AI is doing; and Kyle Booten in the English Department, who in his book Salon des Fantomes chronicles a literary salon in which he was the only human, surrounded by AI characters that he coded himself. These experiments render AI's biases and flaws transparent, even as they draw on human creativity to see these systems as subjects that can pull something out of us. The way we've been engaging with and falling in love with imaginary people has never been simple or without risk. The power of love stories and their ability to change how we imagine care is not going to disappear in the age of AI companions. But it's up to us to prepare ourselves and our students to take credit for our work as co-authors.
Episode 5, Dr. AI and the Future of Healthcare with Ihsane Hmamouchi
Powered by RedCircle
Ihsane Hmamouchi (Rheumatologist and Epidemiologist, International University of Rabat) presents her paper “How might we understand the meaning of ‘care’ in the age of Artificial Intelligence?” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
In this episode, Ihsane Hmamouchi examines the complexities of care in healthcare when the caregiver is an algorithm, chatbot, or other digital companion instead of a human. Walking us through the profound implications of AI for healthcare, Hmamouchi emphasizes the need for inclusivity and equity in designing future models of care for patients of all backgrounds. Rather than flattening diversity into uniformity, AI can reveal how different languages and cultures express pain, understand illness, and ultimately experience healing. The future of care in the age of AI will mean inclusivity, cultural relevance, and the patient’s voice at its core.
Episode 5 Transcript
Ihsane Hmamouchi: I want to thank the University of Connecticut and the Humanities Institute for bringing us together in this space of dialogue and reflection. Let me begin with a simple question for all of us. When you hear the word "care," what image comes to mind? For some, it may be a physician listening attentively to a patient. For others, perhaps a nurse offering reassurance, or shared decisions made in trust and empathy, or even the solidarity found within family and community. But can we still imagine care where the caregiver is not a human being, but an algorithm, a chatbot, or a digital companion, like Anna Mae said? This is not science fiction; it's already happening. AI is already here. Conversational agents respond to patients' questions at midnight, algorithms guide diagnosis and treatment decisions, and digital companions offer psychological support in ways that challenge our traditional understanding of what it means to care. So the real question for us today is not whether AI belongs in healthcare; it already does. The question is this: can AI truly care? And if so, how do we design it to do so responsibly, inclusively, and equitably? In my opinion, care begins with language. Words are never neutral. They are the very texture of our trust. Words can heal, but they can also exclude. If our AI platforms speak only English, millions of patients are left unheard. For AI to truly care, it must learn to listen in many tongues, not flatten difference into uniformity. In a recent review, we showed how social media can serve as a powerful catalyst for amplifying the voices of groups often underrepresented in rheumatology. Yet without deliberate linguistic and cultural adaptation, these same platforms can unintentionally perpetuate or even worsen existing inequities. Our Arab adult arthritis awareness groups demonstrated that the reverse dynamic can work. When campaigns were conducted in Arabic for Arabic-speaking patients, with narratives and imagery that resonate culturally, they reached more than a million people. An entire community that had long remained invisible in digital health spaces finally felt seen, addressed, and acknowledged. Language, therefore, is not merely a vehicle of communication; it's a determinant of equity in care. And it's precisely here that artificial intelligence meets both its greatest potential and its most profound challenge. If we take this challenge seriously, the implications for AI in healthcare are profound. Imagine AI tools operating in our dialects, extending care to communities long underserved. Multilingual models could reveal how different languages express illness, pain, and healing, rather than flattening diversity into uniformity. Multimodal approaches could capture suffering conveyed through gesture, drawing, or oral history, not only through words. And locally validated datasets could counterbalance the dominance of Western corpora, as we see on this map, ensuring that AI reflects our own realities and contexts. But language is only one layer; the other is digital inclusion. When we did research in tele-rheumatology during the COVID pandemic, we showed how the digital divide shapes access to care: across rural and urban areas, between younger and older patients, and between men and women. AI can help bridge these gaps through voice-based systems for low-literacy users, mobile-first platforms for those without computers, and intuitive interfaces for older adults.
Yet without careful design, AI can do the opposite, demanding high-speed internet where only 3G exists, or assuming literacy levels that many patients simply do not have. So, in short, AI can bridge the divide, but only if adaptation and validation are built into its very core. Care is not only what clinicians observe; it's what patients experience. This is why patient-reported outcome measures, PROMs, matter. They capture pain, fatigue, and stigma that no laboratory test can measure. And we have shown how challenging yet transformative it can be to include diverse patient voices. PROMs shift care from something done to patients to something built with them. Measurement, too, is never neutral. The metric system itself is a kind of social contract, a shared language that binds communities together. In healthcare, metrics connect patients, clinicians, policymakers, and lawyers. So the choices we make in AI, whose data, which languages, what outcomes, are not merely technical; they are ethical contracts. AI holds real promise for supporting vulnerable populations, if it's designed responsibly. Early detection tools could identify atypical symptom patterns shaped by cultural or linguistic norms. Multilingual symptom checkers might interpret dialects, metaphors, or non-standard expressions, giving voice to those too often excluded from digital platforms. Risk models could integrate social determinants, such as housing, trauma, or access to nutrition, moving beyond biology alone. And continuity-of-care platforms could ensure secure, portable health records for patients who navigate multiple systems or cities. These are not abstract ideas; they are concrete opportunities for more inclusive care. But there is also a danger. Where AI is trained on narrow or biased data sets, it does not help; it harms. We have seen cases where, as a result, communities that had already been undertreated were judged to need less care. We cannot allow AI to hardcode inequity into healthcare. So how do we avoid exclusion? By committing to human-centered AI. This requires three key commitments. First, grounding systems in local and diverse data sources, not only in a single country or hospital. Second, adopting co-design, where vulnerable patients actively help shape the tools, the language inputs, and the interface. And third, ensuring ethical oversight, with transparency, accountability, and safeguards for fairness and autonomy. These are not technical afterthoughts; they are the very conditions for AI to care. And this leads to a central reflection: the question is not whether AI can care, but whether we as humans will design it to care responsibly. And yet urgent ethical questions remain. Can systems trained primarily in some contexts truly serve patients in Africa, Asia, or Latin America? What happens when algorithms recommend treatments without accounting for cultural realities or financial barriers? And how do we ensure that the voices of vulnerable patients are not lost in translation, linguistically, culturally, or digitally? So how do we define care through AI? Not by efficiency alone, but by equity, closing divides rather than deepening them; by language that is clear, inclusive, and culturally validated; and by embedding the patient's voice at the core of the design. There is an old parable about a group of blind men who encounter an elephant. One feels the trunk, another the leg, another the ear. Each insists he knows what the elephant is, yet each holds only a fragment of the truth. Healthcare and AI are much the same.
Doctors, patients, engineers, policymakers, humanists, philosophers: each sees a different part. Only through genuine collaboration can we begin to see the whole elephant, the full, complex reality of care. Thank you for your attention. I hope this talk has shown that AI can indeed redefine care, and I look forward to continuing this conversation together next year in Morocco. Thank you very much.
Episode 6, Disease Mapping in the Age of AI with Ouassim Karrakchou
Powered by RedCircle
Ouassim Karrakchou (Université Internationale de Rabat, Deputy Director, TICLab) presents his paper, “Research Focus: AI and its applications,” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
In this episode, Ouassim Karrakchou walks us through the research on AI applications in the health sector currently being done by the TIC Lab at the International University of Rabat. The lab's research shows how low-resource AI models can benefit the healthcare sector, specifically through precise detection and explainability for the patient. In addition to cancer detection through 3D imaging, the lab is also working on respiratory disease monitoring and management. By using smart technology and AI, physicians can pinpoint the causes of some of the most severe asthma-related hospitalizations and effectively distill that information for the patient. This method of detection and diagnosis is the bedrock of the future of healthcare.
Episode 6 Transcript
Ouassim Karrakchou: So, hello everyone. My name is Ouassim and I head the TICLab, which is our research lab at the International University of Rabat, specialized in computer science. Our main research focus is on AI and its applications, and today I'll present some of our activities and some research projects that I'm involved in on the topic of AI applied to health. All right. So I'll go very quickly over the main research areas of our lab. We have four: one on low-resource AI, one on cybersecurity, one on cyber-physical systems, and one on complex systems, and they are all basically different ways to use AI. The first one is really about the theoretical aspects of AI, how to reduce the complexity of AI algorithms. In the second, we are more oriented towards applying AI to cybersecurity. The third, cyber-physical systems, is basically the interaction of AI with the physical world. And the fourth is, I would say, applied mathematics on the topic of AI. We have several application domains: media mining, cybersecurity, health, smart cities, and the environment. So this was just a brief presentation to show you what we are doing in the lab. In this presentation, I'll focus on the health applications of our work, which are mainly done under the main focus of low-resource AI, and you'll see why. When you are doing AI for health, you have a set of requirements that you must follow from a technical perspective in order to be able to use AI in the health sector. The first one, of course, is inclusivity. This means that you have to ensure that your AI is applicable to the whole population and does not exclude some parts of it, so it involves making sure that you don't have biases and things like that in your training data sets. The second one, which is very important, is data privacy. As you know, health data is very sensitive data. You cannot process it as you want; it has a lot of regulations around it. So you have to ensure that when you use your AI, you use it in a way that respects the privacy of your patients. And this also has implications from the technical point of view, mostly the third box, which is edge processing. Basically, as much as possible you should do your processing locally and not send anything through the internet. Most of the AI algorithms that we know today, for example ChatGPT, are accessible from the cloud; they are basically web applications that run on data centers globally, and so anything you put there, you don't know where it goes or how it's processed. So you cannot use these kinds of technologies in the context of health. You have to make sure that, as much as possible, you do your processing locally and nothing goes through the internet. And this has huge implications, because in the current technology landscape, we have a lot of computing resources in the cloud. Whenever you want to do something, you can go and rent computing power in some data center, and so you can run very computationally demanding algorithms. Usually, in the current state of technology, you can create algorithms that are very performant, but they will need a lot of computing resources.
Now, in the context of health, you want to keep this performance, but at the same time you want to run things locally. And as you know, locally we don't have huge data centers. We are limited to smaller computers, and hence the low-resource AI part. How can you take these big AI algorithms, keep them as performant and as precise as they currently are, but at the same time make them use less computing power? The second technical challenge is the explainability part. Compare with, let's say, ChatGPT: say you use ChatGPT to generate an image. You give it a prompt, "give me an image of a dog astronaut on the moon," and it will generate an image for you. You will receive an image, but the issue is that it's very hard to know how ChatGPT did it, the steps that it followed to generate this image. It's a black box, right? Now imagine you give ChatGPT an image of, I don't know, a CT scan, and you ask, can you tell me if there is a tumor in this image? Maybe ChatGPT will tell you, yes, there is a tumor here. Now the question will be: how did ChatGPT determine that there is a tumor there? And this is very, very important in healthcare, because you want to make sure that when a health decision is made, you can explain it, to say, okay, the system went from this step to this step to this step to tell you that you have a health condition. Because otherwise you can have medical errors that can be life-threatening. And the question is, can you blindly trust an AI that is a black box to tell you, this is what you should do, this is the medical procedure? So that's again why there is a technical requirement, when you use AI for health, that whenever it gives you a certain decision, you are able to explain the reasoning behind that decision. These are the main requirements that we follow when we use AI for healthcare, and that's why in our lab we mostly focus on this low-resource AI research area, with a huge explainability part. So now I will present some projects that I'm involved in at the lab that apply low-resource AI to the healthcare sector. Here is one collaboration that we have with the military hospital of Rabat, and here the need came from them. They contacted us and told us, we have an issue currently with bladder cancer. How they do it currently is: patients come, they put a little camera inside the bladder, and with the camera they look inside, and the physician detects the tumors and tells you, okay, you have a tumor here, here, and here. The problem is that often in the bladder you can have a lot of very small tumors that are spread out, at various stages of development. Some are really just starting tumors; some are bigger, like the one you see at the bottom. And usually it's not the same physician who does the cystoscopy, that's what we call the procedure with the camera in the bladder, and the detection, and who then does the surgery to remove the tumors. So what tends to happen is that when you go to surgery, the surgeon receives a report from the first physician that says, okay, you have several tumors here, here, and there.
But they may end up missing a very small tumor that was nascent and not very visible. So what they tend to do is repeat the same procedure every six months to see if another tumor has appeared. What they wanted us to do is basically to have an AI that processes the video stream of the camera, detects the tumors in real time, and creates a 3D map of the bladder that locates all the tumors that were detected. So now, instead of just a textual report going to the surgery, you have a full 3D map of the bladder with the localization of all the tumors that were detected. This helps in the follow-up surgeries, and it reduces the number of times you have to redo the same procedure, because it saves everything. So this is what we started doing. There are some technical details I won't go into too deeply, but basically we started this project with them, we are now in the middle of it, and the project has three main parts. The first part is tumor detection. We created an AI with several steps and what we call a spatial attention mechanism that is able to detect accurately where the tumor is, and is also able to tell you how it detected that tumor. So here you see it has highlighted the tumor in red on the right. We built a second mechanism called NBI generation. NBI is another tool that they use for cystoscopy: instead of using white light, they use green light, and the green light is able to highlight some tumors better. The problem is that you cannot see both of them together, and the physicians told us that they wanted both together. So what we do is generate the second modality in real time from the first one, and vice versa. But when you do this, you want to ensure consistency between the two, because the problem with AI is that it can hallucinate, it can imagine new things, and you don't want it to imagine, say, a tumor where one doesn't exist. That's why we added a mechanism to ensure this doesn't happen, using the thing on the bottom that we call vessel attention. It's a new technique that we invented that makes sure all the features of the first image are preserved when going to the second image. And the last part, which we are currently working on, is putting all of this into a 3D map (a schematic sketch of this pipeline appears below). There is another project that we're working on currently in the lab, on the topic of management of respiratory diseases, and this has several modalities, several inputs to the app, as well as a link with physicians. Here, basically, it's for people who have, for example, asthma, to be able to manage their health condition in a way that reduces the very dangerous exacerbations or crises that can sometimes happen. It has some sensors using, for example, a smartwatch, a stethoscope, etc. There is also a smart inhaler that we developed to make sure that it is used correctly, because the physicians told us that many of the dangerous cases of asthma-related hospitalizations are due to the inhaler not being used correctly, so the medication is not absorbed correctly.
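For readers who want to picture the bladder-cancer pipeline referenced just above, here is a schematic sketch of the data flow only: read the video stream frame by frame, run a detector on each frame, and accumulate detections with a camera-pose estimate so they can later be placed on a 3D map. Every function here is a hypothetical stand-in; none of this is the lab's actual model or code.

```python
# Schematic data flow for the cystoscopy pipeline described above.
# All functions are hypothetical placeholders, not the lab's methods.
import numpy as np

def read_frames(video_path: str):
    # Stand-in for a real video reader (e.g. OpenCV's VideoCapture).
    for _ in range(10):
        yield np.zeros((256, 256, 3), dtype=np.uint8)

def detect_tumors(frame: np.ndarray) -> list[tuple[int, int]]:
    # Hypothetical detector: pixel coordinates of suspected tumors.
    return []

def estimate_camera_pose(frame: np.ndarray) -> np.ndarray:
    # Hypothetical pose estimate used to place detections in 3D later.
    return np.eye(4)

# Accumulate (pose, pixel) pairs; a later step would fuse them into a
# 3D bladder map that travels with the patient to surgery.
tumor_map = []
for frame in read_frames("cystoscopy.mp4"):
    pose = estimate_camera_pose(frame)
    for xy in detect_tumors(frame):
        tumor_map.append((pose, xy))

print(f"accumulated {len(tumor_map)} detections for the 3D map")
```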
There is another project we are working on in the lab, on the management of respiratory diseases. It has several modalities, several inputs to the app, as well as a link with physicians. Basically, it is for people who have, for example, asthma, to help them manage their condition in a way that reduces the very dangerous exacerbations, or crises, that can sometimes happen. It has sensors using, for example, a smartwatch, a stethoscope, and so on. There is also a smart inhaler that we developed to make sure the inhaler is used correctly, because the physicians told us that many of the dangerous asthma-related hospitalizations are due to the inhaler not being used correctly, so the medication is not absorbed properly. We also have a link to air quality data through a platform we developed, as well as a link to the doctor, so that if something is starting, you can proactively refer the patient to a doctor and make sure they are treated in time. For this project we also developed an AI method that is low-resource. Here, I don't know if you can see the little blue element: we take the audio and apply a mathematical transformation to turn it into an image, the blue and green spectrogram. This spectrogram contains a lot of audio frequencies, and our idea is that some frequencies are not useful for detecting, say, crackles or wheezes, the asthma-related sounds, so we can remove them and focus only on the important frequencies. This also helps with explainability, because when you say a symptom is starting, you can clearly say it is based on these frequencies; you have a traceable way to do it. Then, as the technical details at the bottom show, we combine different techniques to make it low-resource, small enough to run continuously on a smartphone and detect the first signs of an asthma crisis. As I said, we also combine it with sensors that measure air pollution, indoor and outdoor. For the outdoor air we can use data coming from various government sources. But indoors is sometimes the problem, because you really have to let air flow through your room from time to time; otherwise indoor pollutants accumulate, and this can also have harmful effects for asthma. To make sure people are aware of this, we also developed indoor pollution sensors that warn patients about the early signs of pollution, all integrated into the main platform with its several modalities.
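The audio pipeline described above can be sketched compactly: transform a waveform into a mel spectrogram, then keep only a band of frequencies. The waveform below is synthetic and the band limits are invented placeholders, not the lab's validated values.

```python
# Audio-to-spectrogram with frequency-band selection (illustrative values).
import torch
import torchaudio

sr = 16000
waveform = torch.randn(1, sr * 5)       # stand-in for 5 s of lung sounds

to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sr, n_fft=1024, n_mels=64
)
spec = to_mel(waveform)                 # (channels, 64 mel bins, time frames)

# Drop bins assumed uninformative; keep a band where wheezes/crackles might
# live. A smaller input means a smaller, cheaper model (the low-resource
# idea), and a named band makes any detection easier to explain.
band = spec[:, 8:40, :]
print(tuple(spec.shape), "->", tuple(band.shape))
```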
The last piece of the puzzle in this asthma application is a set of chatbots with different targets. There is one we call Asthma FAQ (FAQ is the French equivalent of Q&A), plus Asthma Speech and Asthma Bot, and they have different levels of knowledge. Asthma FAQ is really for people with no medical background; it could be parents of children, or even children themselves, so it needs to explain the topics in a very, very simple way and say only exactly what is needed to avoid bad cases of asthma. Then we have an intermediate one that is also speech-based. These first two support the Moroccan dialect, because there is also an issue of language: as my colleague Ihsane told you before, the more technical medical terms are usually in English or French, but when you talk with patients you have to use a language that is understandable to everyone, and these two components are able to do that. The last one goes more in depth, and it is in French and English.

And here is the technical diagram of how it works, but the important part is at the bottom: at some point we use Gemini for language generation, but we don't use it by itself. On the bottom left, we make sure that everything it says is grounded in facts. These facts come from a set of documents, images, and videos that we put in the databases at the bottom, so that when the system tells you something, it can cite the source of the fact it is stating: I tell you this because it is in this document. And the part on the right, with the little icon of a person speaking, means that you can interact with it by speech. There is a whole set of translation systems so that even if you speak the Moroccan dialect, Arabic, or any other language, you can still use it: the main part at the bottom, the interaction with Gemini, is in English, but an outer layer translates and supports other languages while preserving the facts. All right, that's all from me today. I hope I have given you a little overview of what we do in the lab on the topic of AI applied to healthcare and the main challenges we face. Thank you.
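The grounding pattern described in this talk, retrieving passages from a curated document store and having the model answer only from them while citing sources, is commonly called retrieval-augmented generation (RAG). Below is a minimal sketch of that pattern; the document snippets are invented placeholders, and the call to the hosted model (Gemini, per the talk) is left abstract rather than tied to a specific SDK.

```python
# Minimal RAG sketch: retrieve, then build a grounded, citation-forcing prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {  # hypothetical curated asthma documents with IDs
    "doc1": "Rinse your mouth after using a corticosteroid inhaler.",
    "doc2": "Shake the inhaler and exhale fully before inhaling the dose.",
    "doc3": "Seek urgent care if a reliever gives no relief within minutes.",
}

vec = TfidfVectorizer().fit(docs.values())
doc_matrix = vec.transform(docs.values())

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k most similar (doc_id, text) pairs for the question."""
    sims = cosine_similarity(vec.transform([question]), doc_matrix)[0]
    ranked = sorted(zip(docs, sims), key=lambda p: p[1], reverse=True)
    return [(doc_id, docs[doc_id]) for doc_id, _ in ranked[:k]]

question = "How should I use my inhaler?"
context = "\n".join(f"[{d}] {t}" for d, t in retrieve(question))
prompt = (
    "Answer using ONLY the sources below and cite their IDs.\n"
    f"Sources:\n{context}\n\nQuestion: {question}"
)
# `prompt` would then be sent to the hosted model; forcing citations of
# [doc IDs] is what lets the system show the sources behind each answer.
print(prompt)
```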
Episode 7, Making Meaning in the Intercultural Imaginary with Anke Finger
Powered by RedCircle
Anke Finger (UConn) delivers her talk, “AI Literacy” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
Anke Finger outlines three projects she is currently working on that engage with understanding artificial intelligence through media studies, literary studies, and cultural studies. She emphasizes communication as the driving mechanism for expressing our own humanity in an increasingly data-driven, quantitative world. Being able to critically engage with generative AI and recognize its long-term cultural significance will be crucial for humanity moving forward. This need for intercultural communication, however, comes with a focus on critique and an obligation to broader public safety and cybersecurity. As Finger reminds us, we have long been collaborators and co-authors with our media, artificial intelligence included, and we always will be.
Episode 7 Transcript
Anke Finger: Thank you all so much for being here. I know there are lots and lots of other things to do. I want to thank Anna Mae especially for putting this wonderful conference together, our colleagues from the University of Rabat for joining us, which is fantastic, wonderful collaborators and co-conspirators like Arash and Ting-An for rejoining us, and another collaborator, Christina Meier, who is actually part of one of the projects I'm going to be talking about. My disciplinary lens is actually very interdisciplinary. I work from three different arenas: one is media studies and media history; another is comparative literary and cultural studies; and the third is intercultural communication. And for the work I do with AI across numerous projects, I use all three lenses to try to make sense of something that, to me, is really summarized by two concepts: context and process. So what I want to talk about in the time given to me is three different case studies, projects where I try to engage these understandings of context and process, because I'm as much a learner, user, practitioner, and critic of AI as we all are. And, just to follow Fiona's request for definitions, I go with Ouassim's definition: I'd really prefer to talk about LLMs, about pattern makers, as the black-box, algorithmically driven AI that we work with. Fortunately or unfortunately, we are now held, culturally and interculturally, to the term artificial intelligence, even though it was a fairly haphazard invention of the 1950s by John McCarthy at the Dartmouth conference, just a little bit up north of here in New Hampshire. So, the first project is a course I am currently co-teaching with Christina Meier on cyborgs, robots, and androids in the German imaginary, which I will introduce shortly. The second project is a special issue on "Intermediality, Multimodality, and AI in Contemporary Literature," for which I am writing the introductory essay, and where I'm really trying to grapple with some of the topics that have already come up: what does it mean to be an author in today's literary composition context, whether you're an academic author, a creative author, or an author in the much wider sense of simply putting out content. And the third project takes, you know, a ten-thousand-mile point of view on what I think is already emerging from the communications we've had: where are we as humans in this endeavor that is algorithmically driven, calculatory, and data-driven? I think we need to rev up how we communicate our humanities and our humanity in a data-driven, calculatory, quantitative world. And so I'm going to talk a little bit about the Center for Humanities Communication, which is Mellon-funded and which I'm one of four people trying to get off the ground.
In this duality of context and process, I often feel jostled between guidelines and literacy, and we can talk about how literacy can also be defined as competency: are we talking about literacy, about competency, about awareness? How do we actually want to define that? What I tend to go with is the Center for AI and Digital Policy (I see, oh wonderful, so we'll hear from you, I hope), who have given us universal guidelines for artificial intelligence, guidelines to keep in mind especially when we start talking about ethics, about access, about intercultural awareness of different histories and different usages of artificial intelligence and LLMs, and also, for example, number eight: a public safety obligation, a cybersecurity obligation, a prohibition on secret profiling. So there are all kinds of ethical, social, historical, and of course global aspects where AI policy and AI guidelines should be in place, and the question is how we operate as researchers, as humans, as educators within, on the one hand, prescriptive and critical use of AI, but also potentially regulatory and restrictive use of AI. The other definition that has guided me so far is from the National Association for Media Literacy Education, a US institution, whereas the Center for AI and Digital Policy is very global. I have simply rewritten their definition of media literacy: "the ability to access, assess, analyze, evaluate, create, and act, working with all forms of generative AI." That can be a working definition, something that may or may not work for you. And all I'm trying to position here, from the three disciplinary positions I'm looking at this from, is: how bound can I be, do I have to be, by ethical aspects, globally, locally, as an educator, as a human being? But then, to echo Anna Mae, how much can I also build? I want to experiment, I want to play, I don't want to just critique. So these are some aspects that are important. One of the case studies I'm presenting here today is our course, where Christina and I have a lot of fun translating the German imaginary around cyborgs, robots, and androids into experiments with AI and into 2025. Let me just read a short description of the course so you can see the positioning of the human and the non-human. "This course examines the figure of the non-human human and explores representations of artificial beings in German literature, art, and cinema. The focus is on representations of artificial beings in the German imaginary, with a focus on issues of technology, art, philosophy, and subjectivity, media history and theory, and intercultural awareness." So we look at writings by Leibniz, his monadology; La Roche, the machinery of her writing table; Goethe's The Sorcerer's Apprentice; Kleist's On the Marionette Theatre; or Hoffmann's The Sandman, which some of you may be familiar with; or Metropolis, the first movie that really plays with the idea of the automaton, the robot, in 1927.
How can we translate these imaginaries, whether they are fears, fantasies, or early sci-fi, into the kind of real-world problem-solving around today's students' tasks and challenges? I should mention that this course is also part of my being a career fellow with the Career Center, so we are trying very hard to bring in the NACE competencies around teamwork, critical thinking, technology, and leadership, and I've chosen three to go with. So when Christina and I put our heads together, we looked at designing a robot to solve a real-world problem by really translating The Sorcerer's Apprentice into student tasks: well, what happens if you have a machine that goes crazy, and the broom doesn't want to work with your prompt, because you are not the magician, you are the apprentice, the apprentice of the AI? How do you keep the machine from going wild and make the black box somewhat maneuverable, so that you can work with the machine without filling the house with water, for those of you familiar with the poem or the Disney movie? Based on On the Marionette Theatre, the text by von Kleist, we have also given the students prompts about cutting the strings, framing your competencies beyond the puppet stage. So we don't just talk about cyborgs, robots, and androids; we also extend that into homunculi, into puppets, into the kind of imaginary around this human-artificial interface, and the kinds of intercultural and cultural imaginaries wound around it. If you look at the history of puppetry, UConn is a perfect place to do that. You will see lots and lots of different formations of puppets around the world, whether on the African continent or the Asian continent. We had a colleague from LCL come in, a twelfth-generation shadow puppeteer, who talked about what puppets mean in his historical meaning-making, whether religious or cultural. So we really take this into all kinds of different arenas, but we then focus it on AI and the use of AI, to bring it into the students' everyday imaginary and practical usage for the classwork, but also for their training toward a career. And the last step was exploring creativity in AI. This relates to what was mentioned earlier about prompt writing: how do we teach our students to write prompts? It is not something that comes naturally. So in this course we are very much trying to integrate that, based on giving the students an understanding that these imaginaries are not new. Anxieties around AI, if there are such, are not new. And the experimentation with the machine-human interface also is not new. In Germany, it goes back with the homunculi into the sixteenth century and earlier, and if you look at Chinese human-machine interfaces, it goes back over a thousand years, thinking of automatons especially. The other project I'm currently involved with is, as I mentioned, the introduction to a special issue on intermediality, multimodality, and AI in literature. We are casting a wider net there, because what I would strongly advocate for is a definition of media authorship for this age, in which we are not just authoring from a human, individual, possibly, in the past, talent- or genius-based position: we are collaborators, we are co-authors, and we have been co-authors for a long time.
So when I defined media authorship in 2021, in a different article, I really wanted to call awareness to the multimodality, to the bricolage, to getting out of "the tight framework of genius or talent of skills or viable access to the market...[m]edia authorship, I argue, emerges as fragmented, fragmenting, in process, multiple, and most of all, as conspiring with contemporary media." So, where are we as authors, no matter which discipline we're in, in that context of contemporary media? Some of us will embrace trying authorship and media-authorship formats with AI and other media; others will shy away. And Anna Mae, we just talked about policies: what kinds of policies play a part in that restrictive versus creative field of possibility of actually figuring out, well, what kind of author am I? And am I cognitively offloading? I know we'll hear about that a little bit, simply because there is a fear of writing, a fear of the imaginary, a fear of the constrictive, of the formal. So this is something we can talk about. But when we think about AI and literature, about AI and authorship in flux, I think it is very interesting to start thinking about what gradations of AI usage we can attach to authorship in general. Do we have to insist, like the Authors Guild in the US, on a kind of seal of "written by humans for humans," or is that in fact too restrictive? Lastly, I would like to call attention to the Center for Humanities Communication. This is, again, a much broader circle of bringing the humanities alongside the sciences, and here, too, we are trying to bridge the gap between the two cultures that have been talked about for over a hundred years now: trying to figure out how we can communicate about the humanities the way the sciences have a field of science communication, especially at a time when the human becomes really central to AI discourse. What is human, what is not human, what is hybrid, what are androids, robots, automatons, and how do we define this kind of usage around humanity? And I want to give one example of where we should also become more information-literate in the humanities, by making you aware of the fact that data actually is a humanities term. We use data on a daily basis, we are data-driven, but in the general culture of the seventeenth and eighteenth centuries, data still evoked specialized kinds of argumentation and the special situation of argument. As the etymology of the word indicates, data is the plural of the neuter past participle of the Latin verb "dare," to give, and it meant very much an argument rather than what we today understand as data. So it is qualitative, not quantitative, if you go to the origin of the word. I want to close with a working definition of AI literacy that I have constructed for myself, because the context matters for AI literacy, and the process matters for AI literacy. So for me, the pursuit, if you are so inclined, and development of AI literacy depends on context, on access, and on the hermeneutic and heuristic capacities to both critique and build. Thank you very much.
Episode 8, AI and the New Luddism with Arash Zaghi
Powered by RedCircle
Arash Zaghi (UConn) delivers his talk, “The New Luddism: AI Fearmongering as a Modern Mechanism of Oppression” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
Arash Zaghi examines how fearmongering from elite institutions has become weaponized against disadvantaged populations who could benefit from generative AI. While Zaghi cautions that artificial intelligence itself is not free of bias, he stresses the importance of being able to use this technology right now and beyond. Anxiety over artificial intelligence leads to lower usage and lower rates of adoption, ultimately widening the gap between the haves and the have-nots. According to Zaghi, technopanic is a form of gatekeeping, operating as a functional form of oppression that curbs access for less powerful groups.
Episode 8 Transcript
Arash Zaghi: So I'm going to ask you to please follow my presentation, if you will, without any bias. I don't have any expertise in AI, computer science, or machine learning; my relevant background is just through immersion and extended use. I started using LLMs when GPT-2 was out, as part of one of my NSF-funded research projects, and I have lived the exponential curve from GPT-2 to 3, 3.5, 4, and now 5. So that is what informs my presentation, in full transparency. For the research I did for this presentation, I used Grok Heavy, GPT-5 Pro, and Gemini Pro, and I also used AI to translate it into language appropriate for this group. The case I am trying to make is that the narrative led mainly by elite institutions and billionaires, generalizing fear around AI, is functionally oppressive, and its impact is to discourage the adoption of these tools by the populations that could benefit from them most. I have my job whether I use AI or not. Half of my department, the Department of Civil and Environmental Engineering, my colleagues, professors, are proud that they don't have any AI accounts. Some are proud that they have never touched it, and their jobs are safe and secure. So I have to be very careful about this position of privilege I am in when I talk about AI and cast broad judgments that AI is biased or AI is like this, because this may discourage certain populations that could benefit from it. However, I have to be very clear that I am not suggesting there are no risks, or that AI doesn't have any biases. What I am calling fearmongering is the speculative ideas: oh, AI is going to replace us, AI is going to destroy us, the environmental impact magnified, speculative claims about energy consumption, and sweeping judgments about AI. That is what I call fearmongering. As I mentioned, I think it is now two years since there was a letter, signed by I don't know how many billionaires, saying we have to delay the release of these models. Elon Musk had a very big role in leading that movement to delay the release of these models by six months, and a lot of academics got behind it and signed that letter. And then later on, people realized that, okay, he wanted to buy some time to establish xAI and secure the funding and computational resources to be able to catch up with the competition. So we have to be very, very careful not to immediately jump on those bandwagons; there is always some underlying agenda behind the orchestrated fearmongering we hear. And it can be extremely harmful to vulnerable populations. I may say I don't care, or it may make me pause for five minutes, but a middle-school kid in Africa, say, hearing something like that may never touch AI. Even in an affluent community like Mansfield, here where we are, my daughter, who went to Mansfield Middle School and is now a sophomore in high school, is very hesitant to use AI because of all the concerns the teachers have brought up. So again, we are talking about an affluent community. And this anxiety obviously translates into lower usage and a lower rate of adoption. So the data say something very interesting.
Again, if we just follow US-centric and Western-centric news, we think that every country is as worried about AI as we are. That is not the case. Actually, a lot of countries and communities, surprisingly, communities with low access that are receiving AI-generated text through SMS in Africa, are embracing it. They are trying to use it to upskill themselves, to learn things, and they are benefiting from it on a daily basis. And there are larger-scale studies and surveys that clearly show this. It's interesting: if you follow the English-centric news outlets, you would think that everyone is as concerned about AI and the future of AI as we are. That's not the case. So we have to be mindful of that. And why do I say that fearmongering may discourage people disproportionately? Again, it is totally fine to talk about the challenges of AI in groups like this. That's what we do: as scholars, we dissect something and put it back together. But when I go out and, for example, talk to high school students, I have to be a little more aware of the position of privilege I am in when I present my case, because it may have unintended impacts on communities that are vulnerable. Let me tell you one story about what I mean by vulnerable communities. How am I doing on time? Good. As part of one of my research projects, one more related to engineering, we were doing outreach activities at different middle schools. We went to Mansfield Middle School with handheld microscopes, twelve-dollar microscopes we gave the students to take home. We set them up on the tables, the students came into the room and just started opening the boxes and playing with them, and then we started the conversation: these are the samples you want to try, these are the ways you can magnify, things like that. Two weeks later we went to Bridgeport, the same thing, the same setup. We started the session, and the students were all just sitting there. I invited them: okay, take it out of the box. And they were so hesitant to touch this twelve-dollar microscope, for fear of, oh, what if something goes wrong with it? So we have to be very, very careful: a message about the same tool may land completely differently depending on how vulnerable that population, that community, that person feels. What is the margin of error that I have? If I don't have layers of safety, I become very conservative. And if you fearmonger about something, I'm not going to touch it, because I'm not sure the system is going to protect me if something goes wrong. And again, there is data on the chilling effect of discouraging people from learning certain things; that's very important, and we have seen it over and over. Unfortunately, as Anke briefly mentioned, this may lead to a second digital divide if we continue doing what we are doing in academia. I follow a lot of news outlets around AI, and the coverage is majority negative, majority about problems, challenges, bias, and risk. Again, for internal consumption, that's important. But when we go out, we may have to adjust the tone appropriately.
So that brief formula at the end: if we continue this fear campaign, on top of the structural vulnerabilities we have built into our society and globally, it can lead to a kind of psychological permission collapse, unintended, but one that can produce a major digital divide. And it's important, because we are talking about drastically putting the next generation of our students at a disadvantage if we discourage them from using AI. For the sake of time, I just want to draw attention to the third part. This data is very recent: 66% of hiring leaders are reluctant to hire someone who has no experience with AI, and 71% prefer less experienced candidates with some AI background over more experienced people without it. That is how important this is. So it's not about whether I'm going to be more productive or more creative; it's whether I'm going to be able to find a job after graduation, after finishing high school, if I remain AI-illiterate. And that brings me back home, to AI literacy. I think our approach should be safety without exclusion. Definitely, that's the way to approach AI literacy. And I'm going to go as far as suggesting that AI literacy should be a civil right; we have to advocate for it as a civil right. This should be a very basic right for all individuals. And, as my colleagues from Rabat mentioned, low-resource models are very important when we talk about low-bandwidth internet access. People are creative, they are even using basic communication tools like SMS, but the heavy models that require a lot of bandwidth may not be as accessible. So I'm going to stop here; I hope I'm doing well with time. And I have to tell you that I intentionally tried to go a little to the extreme, fearmongering in the opposite direction. But I definitely have a case to make here, and I did it intentionally because I wanted to balance out the conversation. So I look forward to the rest of the day. Thank you.
Episode 9, Big Human and the Sociotechnical Turn in AI with Ting-an Lin
Powered by RedCircle
Ting-an Lin (UConn) delivers her talk, “Rethinking ‘AI Literacy’: Towards a Sociotechnical Conception” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
Ting-an Lin outlines how she defines AI literacy through a sociotechnical perspective, which prioritizes a people-focused vision of AI as opposed to a tech-focused one. The sociotechnical perspective emphasizes the material context in which AI exists and operates. In other words, an AI literacy that only prioritizes those in "Big Tech" will lose sight of the material conditions in which natural resources, human labor, and data are extracted. For Lin, understanding those conditions is intertwined with breaking down the global divide over AI and extending control of the technology beyond the elite and powerful. The dangers of technological determinism cannot be overstated, especially amid the AI boom.
Episode 9 Transcript
Ting-an Lin: Hello everyone, I hope you can hear me clearly. My name is Ting-an Lin. I'm an assistant professor in philosophy here at UConn. I want to begin my talk by thanking everyone who helped organize this event, and everyone who came to it, for making this conversation possible. I also want to thank the Humanities Institute here at UConn for sponsoring this project and for their generous support. As a philosopher, the overarching theme of my work is to examine the impacts of social structure on different members of society. I am hugely inspired by Iris Marion Young, a political philosopher and feminist scholar, and her understanding of social structure, which can be described as the background conditions that influence the options presented to individuals and thereby shape their actions. I think it is appropriate to describe the current background conditions in which we are situated as a sociotechnical structure, to highlight the significant and influential role that technology has been playing in shaping the social structure, together with its dynamic interactions with many other elements in that structure, from norms and schemas to material conditions and resource distributions. So in my work I have been examining AI from this sociotechnical perspective, by which I mean that we should recognize that the design, deployment, and governance of AI systems are situated in this broader sociotechnical structure and play a huge role in shaping it, in both directions. Over the past few years, I don't think I'm able to click it for now, I have published a few works from this perspective, examining issues from algorithmic bias and fairness to democratizing AI: how do we make AI more democratic? And in my talk today, I want to suggest that we can also rethink the notion of AI literacy from a sociotechnical perspective, and I will suggest some insights that can be drawn from there. Over the past few years, we can observe an increasing emphasis on the importance of AI literacy, which, as Arash mentioned, and I agree with him, is super important. However, the overarching question I want to ask, especially as a philosopher, is: how is AI literacy currently understood? And are there any issues with the current dominant notions of and approaches to AI literacy? Here is what I want to argue. First, I'll argue that the dominant approaches to AI literacy tend to view AI systems as powerful technological tools, and the goal of teaching AI literacy is understood as teaching more people the suitable skills to use these tools, perhaps wisely and responsibly. I refer to this conception as tech-focused AI literacy. I'll then raise some concerns about it and propose an alternative notion of AI literacy, which I call sociotechnical AI literacy, and which better recognizes AI systems' sociotechnical nature. So here is a definition of AI literacy that I draw from a handbook recently published through the collaborative efforts of the EU and the OECD. They define "AI literacy" as "to represent the technical knowledge, durable skills, and future ready attitudes required to thrive in a world influenced by AI. It enables learners to engage, create with, manage, and design AI, while at the same time critically evaluating its benefits, risk, and ethical implications."
So I think it's a very nice definition, and quite representative of what I want to call the dominant tech-focused conception of AI literacy. I take this as the general narrative, the rationale, behind why AI literacy is important. The idea goes something like: AI systems are super powerful, right, very helpful technological tools, and therefore we want to ensure that more people have the suitable skills to use them. So let's teach more people to use them, and to use them in a better way. Some common motivations come from, as Arash also mentioned, the AI divide, a global AI digital divide. Another really common rationale is that we need to prepare students so that they can become more competitive on the job market, right? Bosses want to hire people who are AI-literate. At the same time, we observe certain practices: people dominantly understand AI literacy as embracing this technology and incorporating it into the classroom, people suggest that if we do not embrace it, it will be an opportunity missed, and we see universities partner with Big Tech to introduce those tools and allow more students to use them. I don't want to deny the value of that. I think this is important; as Arash also pointed out, if we just leave it out, it can increase certain disparities. But at the same time, I want to raise some concerns about taking this as the only notion, or the primary notion, of AI literacy. Here are some other pictures that reinforce why this is tech-focused, which we see from different universities: they put up Bloom's Taxonomy, things like that. So let me move to some of the concerns I have if we treat this as the primary notion of AI literacy. The first is that it tends to obscure the materiality of AI. As Crawford and Joler and many other scholars have emphasized, when people talk about AI, they tend to treat it as something purely technical, something digital, things in the cloud. What this obscures is that AI systems also rely on important extractive processes: the extraction of huge amounts of natural resources, human labor, and data. So in tech-focused AI literacy, when people teach AI and emphasize how to understand AI as deep learning and how to interact with algorithms, we risk further obscuring those material conditions. And at the same time, it obscures many of the related ethical issues happening in the background that allow AI systems to work: from the natural resources consumed, to the under-recognized labor behind AI, the people labeling and annotating the data and doing the underlying infrastructure work, and their exploited working conditions, as well as ethical issues surrounding AI that go beyond bias or fairness, such as how the power of collecting and owning data is concentrated among the elite and the powerful. The second concern I have about this tech-focused notion of AI literacy is that it promotes a sense, an ideology, of techno-determinism, a mindset that treats technological development as something inevitable. As some news headlines suggest: artificial intelligence is complicated, but it is inevitable.
So we need to embrace it; otherwise we will be at risk of being left out, or our jobs will be taken by someone who knows how to interact with AI. It also contributes to a kind of AI hype, the over-excitement surrounding AI. And I think the ideology of technological determinism is definitely worth critiquing, right? Technology does not just exist there as a given. How technology is designed, and in which direction we want it to develop, is decided by humans, and what is currently happening in our society is that those decisions are made by only a small group of people. This relates to my third concern about this notion of AI literacy, which is that it works to reinforce the existing techno-centric power imbalances. So again, one rationale says that we need to teach students so that they are job-ready. But that doesn't really empower them, right? It creates another kind of fear: they fear being replaced, they fear being left out. And this only serves to reinforce the current rules of the game in the market, where the bosses define who is a competent job candidate. And when universities partner with Big Tech, who benefits? It also allows some of those companies to gain broader adoption for their tools, and, as a podcast pointed out, there is currently a race among the Big Tech companies to partner with universities and own this AI-literacy competition. So I hope, again: the tech-focused AI literacy, some of the technological skills, are important, but that should not be the only story or the only aspect we focus on when we think about AI literacy. So what would be the alternative? Recently, some scholars have urged us to reflect on the dominant notions and suggested, for example, that we also need to consider when not to use AI, or to expand the critical examination to the broader structural and political dimensions surrounding AI, usually under the label of critical AI literacy. I want to contribute to this conversation and propose what I prefer to call a sociotechnical conception of AI literacy. The important things about this notion are, first, that we should better recognize that AI systems are situated in, and closely intertwined with, many components of a sociotechnical structure; and second, that the goal of teaching AI literacy should go beyond just closing the digital divide or allowing students to get a good job. It should really empower them to examine the existing power dynamics embedded in our social structure, and also empower them to envision something different, to exercise their agency in shaping it. So I want to point out three directions in which this sociotechnical notion of AI literacy differs from the dominant one. The first is that when we teach students the nature of AI, we should go beyond the technical skills and technical components of artificial intelligence and instead enable a more comprehensive understanding of the sociotechnical nature of AI systems, including their materiality and how they are situated in the broader structure. Again, Crawford and Joler's work "Anatomy of an AI System" is a very nice example, and it should be included in a curriculum when we think about how to teach students about AI, right?
And their work can not only be found online; it has also been presented in exhibitions at many museums, including MoMA in New York, and I think it's a great way to draw the awareness of the general public and make them reflect on the nature of AI, beyond something that is just in the cloud. Another example I want to share is a project led by We and AI, a non-profit based in the UK, called "Better Images of AI." They were concerned about, and aware of, the dominant imagery of AI: if you do a Google search, it shows you a series of images of robots or human-like superintelligences, which, again, obscures what AI really is. So We and AI collaborated with designers and artists to create a different series of images that better represent and recognize the material conditions of AI, including the natural resources used and the labor behind the production of AI systems. If you go to their website, there is a gallery of all these images, which, again, I think is a wonderful resource for teaching this notion of AI. Furthermore, in terms of the content to be included in AI literacy and its goal, I think we should also empower individuals to better understand and examine the current power dynamics we are situated in. As scholars like Shoshana Zuboff point out, we are currently in a digital economy that she calls surveillance capitalism, where data is the new currency. We need to inform our students and the general public how that data is monetized, how it is collected, why this kind of power is clustered in the hands of a few, and how the current system reinforces a form of algorithmic domination, or colonialism. And lastly, by better understanding the current system, we can empower individuals to think otherwise. We should empower them to envision alternative practices, from AI design to deployment to governance. A few scholars, including the panelists, mentioned that we need different forms of design, such as participatory design, and I think we should also go beyond that to think about who is making the decisions surrounding AI. The current power of AI governance is clustered in the hands of a few, but at the same time many people in civil society in different parts of the world, including Indigenous communities, are taking initiatives to do something different, and those should get more attention and should be included in our curriculum, so that we can enable and empower people to think otherwise. So those are some of my preliminary thoughts about this different notion of AI literacy, and by sharing them with you I hope to extend an invitation for more of us to reflect and explore further. Thank you so much.
Symposium
"What Are We Talking About When We Talk About AI? An International Symposium" took place on October 9, 2025, in the Humanities Institute Conference Room. Talks from the symposium will be available on YouTube and as a podcast. Read about the symposium in UConn Today.

