A Glossary for Human-Centered AI
Reading Between the Lines
“Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI” brings an international cohort of humanists, engineers, and scientists from UConn and the Université Internationale de Rabat into conversation through an in-person symposium and a series of podcast dialogues illuminating how the definitions of terms associated with Artificial Intelligence vary widely by discipline, location, and language. The symposium and the podcasts will be structured to address the challenges that language and translation (both conceptual and linguistic) pose to collaboration on AI research.
This project is funded by The Consortium of Humanities Centers and Institutes’ Human Craft in the Age of Digital Technologies Initiative.
We often use the same words—like ‘learning’ or ‘intelligence’—when we are talking about AI, but what those words mean depends on our own academic and cultural backgrounds and the assumptions that accompany them. The humanities bring crucial insights about language and meaning that can help us engage with these gaps in constructive ways.
Podcast
“Reading Between the Lines” will explore questions of language and AI through four podcast episodes, each on a specific theme like education or justice.
Episode 1, Justice
Powered by RedCircle
Can artificial intelligence one day help to mitigate systemic inequality? As part of UConn's Human-Centered AI Initiative, we've brought together a roundtable of scholars from Connecticut to Morocco to define “justice” within their disciplines. It's only by understanding what we mean by the word “justice” that we can begin to work together to deploy AI to create a more just and equitable world.
Anna Mae Duane, Director of the UConn Humanities Institute and professor of English, leads a conversation about AI and justice, featuring Meriem Regragui, professor of law, Université Internationale de Rabat; Ting-An Lin, assistant professor of philosophy, University of Connecticut; and Avijit Ghosh, applied policy researcher at Hugging Face and associate researcher at the Riot Lab, University of Connecticut.
Episode 1 Transcript
Welcome, everyone, to our first podcast in a series dedicated to reading between the lines: “What are we talking about when we talk about artificial intelligence?” This series of podcasts is a collaboration between the Université Internationale de Rabat in Morocco and the University of Connecticut here in the U.S. It is the result of a Mellon-funded CHCI grant, through which we’re putting the humanities, the sciences, law, and medicine in the same room to think about what these terms mean when we are talking about AI. In some ways, this is an anti-glossary. If a glossary is a place where we all decide on one definition and go from there, we are much more interested in leaning into the fact that we all have different definitions of terms that people feel are self-explanatory. And if we’re going to really have an expansive, inclusive, liberatory, emancipatory AI, which is all of our goals, we need to start with how we talk about it. So I will introduce myself. I’m Anna Mae Duane, Director of the Humanities Institute at the University of Connecticut. And I’m going to ask our esteemed guests to introduce themselves briefly. Meriem, would you?
Thank you, Anna Mae. I’m Meriem Regragui, professor of law at the International University of Rabat school of law and deputy director of the school for global studies. I specialize in contracts and consumer law, with a specific focus on social justice.
Hello, everyone. My name is Ting-An Lin, and I’m an assistant professor of philosophy at the University of Connecticut. Before joining UConn, I was an interdisciplinary ethics postdoctoral fellow at Stanford. My specialties are in ethics, social and political philosophy, and feminist philosophy, with a special focus on how to address the unfair constraints that sociotechnical structures impose on different groups of people.
Hi everyone, I’m Avijit Ghosh. I’m an applied policy researcher at Hugging Face, where I focus on AI evaluations and US AI policy. I’m also an associate researcher at the Riot Lab at UConn, where I work with Dr. Shiri Dori-Hacohen on a few different topics, ranging from morally aligning AI models all the way to information access on social media and how that impacts fairness and ethics. Before coming to UConn, I was a PhD student at Northeastern, where my focus was on how fair models break down in the real world because of a lack of information. So this is a very interesting topic to me.
Professor Duane: I should say our topic today is justice. Again, a word that has its own reverberations in law and computer science. I’m a literature professor. So I’m going to start just by jumping in and doing the definitional work. And so I’ll just ask everyone in a minute or two, how do you define justice in the work that you do and the discipline that you’re in? And I’ll just start as someone who has taught African-American literature and works in disability studies. “Justice” often has this connotation of repair, that you need sort of this historical background to bring to the present day to truly achieve justice and fairness, because we need to understand the injustice of the past to really engage with justice right now. And I’m just going to go around the people I’m seeing. So, Meriem, would you?
Professor Regragui: I think one of the major challenges for us in defining “justice” lies in the fact that it must be grounded in a moral or ethical framework, which can vary significantly across cultures and belief systems. Yet despite these differences, a common thread can emerge: justice is commonly understood as the fundamental objective of giving each person what they deserve. That is basically the way I see it. On the other hand, justice also has an institutional dimension, that is, the way that states enforce and organize the rights and obligations of their citizens. And perhaps there is another dimension, that of protection: social justice, more precisely, as we may see later in how John Rawls has understood it. But I like the definition given by Professor Yuko Itatsu, a professor at the University of Tokyo, who describes social justice as a mindset driven by the assumption that all people, regardless of their upbringing or their circumstances, should be able to live with human dignity. This emphasis on dignity is what I find really crucial.

Professor Duane: I love that, that dignity is the key word that’s emerging. Ting, would you mind weighing in as a philosopher?
Professor Lin: Sure. In the discipline of philosophy, which is my main area, discussions surrounding justice used to be dominated by an approach that proposes theories of justice based on more ideal scenarios. Over the past few decades, however, feminist scholars and critical race theorists have criticized that tendency. They suggest that philosophers, philosophizing from their armchairs, have spent so much time debating with each other about what a more ideal account of justice would look like, but our world is not like that; our world is non-ideal. So instead, they suggest that we prioritize focusing on and investigating real-world issues of injustice. People might disagree about what justice looks like, but injustice is easier to find and identify, and we can agree that those are issues we should pay attention to. My research also takes this kind of perspective, and one of the notions of injustice that I engage with more deeply is the idea of structural injustice, which was coined and popularized by Iris Marion Young, a feminist scholar and political theorist. According to Young, structural injustice is a type of moral wrong imposed by social structure, or, as we might describe it now, sociotechnical structure. Structural injustice exists when the impacts of the sociotechnical structure systematically impose undeserved burdens, understood very broadly to include oppression, violence, exploitation, and so on, on some groups of people, while at the same time conferring unearned benefits or privileges on others.
So according to this view of structural injustice, the goal is to address the unfair constraints that social structure imposes on people, and then, through collective action, to make the structure less unjust in the future.
Professor Duane: I love it. Avijit, as our resident computer expert, tell us a little bit about how you see justice in your work.
Dr. Ghosh: Yeah, I’m glad you specifically said that, because in the spirit of this being an anti-glossary, I would like to point out that legal definitions of justice vary widely from everyone’s moral definitions of justice. For me, social justice equates to what we understand as principles of equity: everyone should have the same opportunities as much as possible, and society should make sure that we have similar outcomes if we put in similar efforts. This also comes from the fact that I grew up as a queer person in India, so I didn’t, and still do not, have the same opportunities available as heterosexual people do in terms of marrying or having kids and property rights and whatnot. But that is very different from technically applying the concepts of equality in machine learning. And there are certain things that you can’t even do. For example, it would be considered morally and ethically wrong to use machine learning to identify whether somebody is queer from images of their faces; that came up in a very controversial Stanford study in which somebody tried to build a “gaydar.” So even if you start off with the goal of scanning people’s faces to make sure that queer people in your study are getting the same outcomes as straight people, in that process you might be violating other aspects of their existence, like inadvertently outing them to their workplace, which might have other impacts. So applying certain notions of social justice might not even be possible or feasible without self-identification of that sort, which is a problem I ran up against a lot during my PhD, when I tried to measure whether fair ranking was actually working in practice using LinkedIn’s algorithm.
And I realized that, even though the algorithm’s mathematical definition made sure that people from different subgroups, like different races and genders, would be equalized in terms of their relative ranking in the end, that kind of re-ranking was not even possible if people did not self-identify their race or gender. My understanding is that companies were using people’s faces or their last names or their zip codes to infer demographic information about them, which was not entirely correct and was misidentifying people. And finally, we come to the different definitions of equality, or what different notions of justice look like in different judicial systems, which then also influence the technical definitions. For instance, in the US you cannot do something called “race-norming,” because the Supreme Court has disallowed it: in the process of ensuring fair outcomes, you cannot treat people differentially. But in other jurisdictions, like India, quotas are allowed for minorities, for women, and for people from the scheduled castes and scheduled tribes; that is in the constitution, and therefore any resource-apportioning algorithm would want to take that definition in the law into account. So writing a fair algorithm that supports Indian law looks very different from one that follows US norms. It’s very different based on where you are and what your local norms are. I personally think that algorithms should not be copy-pasted across borders; they should be context-dependent, and that context should also include the laws of the country you’re operating in.
Professor Duane: Oh, that’s so important, and you just led us to a beautiful segue, because another part of our anti-glossary here is that we have folks from all over the world; this is a collaboration. I’m at the University of Connecticut, but I’m sitting in Morocco. Legal structures interpret justice completely differently, and as we add on the social, cultural, and individual aspects, we have completely different laws. And to bring this to AI: as you already spoke so eloquently about, Avijit, so often AI, or at least the way we think about it, is a largely Western construction built on Western ideas of justice itself. And just to pull together what folks were saying, I think there’s a Western bias in AI as we’re trying to develop regulations and build new platforms. Ting spoke to this, and Meriem spoke to this, and Avijit, your point spoke to this idea that if we treat everyone the same, if we pretend everyone is equal, then that becomes justice. Certainly with AI there’s this tendency to flatten folks, right? There’s this data, and whatever is statistically most likely is what’s going to be the output. And your example was so perfect, Avijit: how do we take in the diversity, and the fact that what is justice for one person is not justice for another person, and move that into our programming? And Meriem, you mentioned John Rawls; just to tie some of these things together, he talks about the “veil of ignorance”: imagine as if you have no identity, which already starts us off, I think, at a disadvantage. Or imagine yourself in several different identities and ask what you would want for that person. It’s a sort of act of empathy, but you’re also imagining what that person would want, of course, always from your own position, possibly of privilege.
And he suggests that if we don’t have any position, then we will choose justly for everyone. What I’m really interested in thinking about is what we do with this idea when it comes to artificial intelligence, in terms of property rights and in terms of social justice, because I do think, at least on the technical side, there can be this presumption that we’re doing things equally, that everyone is being coded equally, that it’s the most likely outcome: if the majority thinks one way, or is most likely to react one way, then that’s how AI is coded. And then these gestures toward diversity can wind up actually outing people or making them an anomaly. So I wanted to think about AI from two different ideas of justice as well. Martha Nussbaum sees the social contract as basically flawed because it imagines two equal people, right? You make a contract because I can hurt you and you can hurt me, I have something you want and you have something I want. But of course that’s not how society functions. As we’re thinking about justice and artificial intelligence and large language models, how do we imagine the ways in which we’re not equal, in which that social contract doesn’t work, as a way to think about how we program, how we elicit? And particularly, I’m wondering: I am sitting here in Morocco, and you’ve already spoken about India, Avijit. What happens when we’re dealing in a global context in which people do not have the same access and literacy, and in which, by default, so much of artificial intelligence is thought about and programmed, and its data collected, in English-speaking societies?
So how do we accommodate this idea, which everyone already spoke to really beautifully, that there isn’t equality, that we’re not all data bits that can be interchanged, while avoiding what Avijit was talking about, in which people are sidelined as exception cases to this generalized intelligence? There’s this dream that there’s going to be a universal intelligence that accommodates all of human nature equally and fairly. We have our doubts, so I want to throw one more key word into the mix and then ask your thoughts about it: “decolonial AI justice,” which challenges both Rawls and Nussbaum in some ways. And I’m going to hand this over to the experts in the middle. My understanding of it is that AI systems are no different from other systems, unless we’re very careful, in ways that will reinstitute and perpetuate historical imbalances rooted in colonialism. So one argument is that we need to de-center Western ideals of justice, Western ideals of fairness and sameness, and include other philosophies and other epistemologies from the very beginning. Our colleague Shiri Dori-Hacohen talks about expansive AI: we need different perspectives, not just disciplinarily but globally, from the very beginning as we’re programming AI. So I’m wondering, for each of you, whether in your own work there are hopeful stories of exciting moments in which decolonial justice is operating or trying to be implemented, or any challenges you see with this more expansive vision of justice?
I was just so struck, when you were all defining your own visions of justice, that everyone had this sense that we think justice is one thing, we think it’s equality, but if you really want justice, you have to acknowledge that we’re not starting at equal moments, that we are not the same, and that actually the same result is going to be injustice for some people. So I’m wondering, in your own work, how are you trying to implement this version of justice? And I’ll start with Meriem again.
Professor Regragui: Okay, well, the term justice is so polysemous that in my research I often try to identify its presence across a wide spectrum of legal phenomena. I began by exploring how justice manifests in contract law, primarily through the notion of contractual balance, which refers to fairness between contracting parties in terms of rights and obligations. I discovered that in certain legal systems, notably the civil law tradition rooted in the Romano-Germanic model, balance is not considered a condition for the validity of a contract. In contrast, common law systems, especially US case law, developed a concept quite foreign to the civil tradition: the doctrine of unconscionable contracts. I found it very interesting, because this doctrine allows judges to intervene when a contract is significantly unbalanced or contains an abusive clause, in order to restore a sense of fairness. In our civil law system, however, judges remain bound by the letter of the law and the sanctity of the parties’ agreement. They are not empowered to question the substantive fairness of the contract, and contractual freedom is treated as a foundational principle: what two parties agree to, only they can undo. So I found in common law an instrument that we presently lack: a legal mechanism for ensuring contractual justice. But contractual justice cannot be reduced to mere balance between contracting parties, narrowly defined. In reality, contracts produce externalities, both positive and negative, and some agreements, such as employment contracts, rental agreements, or medical consent forms, can have life-altering consequences. These are not just economic tools but social institutions. Their impact extends beyond the two parties involved and disproportionately affects the more vulnerable party. In this sense, certain contracts are key pillars of socioeconomic stability or, conversely, vectors of inequality.
So this is precisely where contractual justice meets social justice. Now, what does this have to do with AI? In my view, recent technological advances offer us an opportunity to rethink how fairness is built into contracts upstream. I believe AI could be used to design more equitable contracts by default, whether in a B2C or a B2B context. If AI can help us detect imbalances, flag harmful clauses, and simulate social impact before a contract is signed, we could mitigate systemic injustice at its source. This could profoundly affect the socioeconomic status of vulnerable populations and move us closer to a more socially and economically just society. So in response to the question of what other definitions of justice we should consider, I would suggest that contractual justice, ensuring that the performances exchanged between parties are fair and reasonable, must be part of our discussion. Moreover, we must also acknowledge its direct and indirect relationship with broader social justice, especially when designing AI tools that increasingly influence our legal and economic environments.
Professor Duane: Thank you so much. If I could just ask one follow-up question: I do wonder how AI can spot these vulnerabilities, in terms of who’s programming it and how we identify what flags to look for. Because, as Avijit’s example shows, that could go wrong in many ways, right? You could be flagging a contract as unbalanced based on identifiers that aren’t relevant or that actually do harm. So I’m wondering if you’ve heard examples of that as a challenge, or of ways people look to deal with it.
Professor Regragui: Absolutely, it’s a great challenge, because it depends on how we program the AI and how we use it. If we remain attached to our old notion of fairness, our old notion of contractual justice, nothing will change, even as AI evolves and even as we experience this great development of technology. But on the contrary, if we change the way we program AI in order to aim toward this social justice and this contractual justice, maybe we will have the capacity to produce fairer contracts between, let’s say, employers and employees, between medical staff and patients, and so on. So it has to be thought through beforehand, because that is the only way I see that we can provide more balanced and fair contracts in the market at large, and specifically between individuals.
Professor Duane: Yes. Oh, thank you. I know both Ting and Avijit have thought about exactly these issues. Whoever wants to jump in first: both your ideas about de-centering Western justice in AI, and also responding to Meriem’s really excellent point about how we achieve balance and identify vulnerabilities?
Dr. Ghosh: I can take a stab, and I’m sure Ting’s perspective will come from a completely different angle, so we’ll complement each other. For me, machine learning research has been bottlenecked by the availability of resources for the longest time. Talent is spread everywhere across the world; I have no doubts about that. What I do see as a problem is that GPU access, for instance, is concentrated in American, Western European, and UK institutions. And we all know that datasets are heavily Western-biased; it’s not just something we feel is true, in fact, there have been systematic studies. I encourage listeners to look up the Data Provenance Initiative, a project by my friend Shane at MIT, who went through thousands and thousands of datasets and plotted a world map: most machine learning datasets come from the Global North. So, since we know that it’s true, how do we alleviate it? Thankfully, starting in 2024, there has been a slow but steady decentralization of both datasets and models, and in 2025 we are seeing an innovation boost toward smaller and smaller machine learning models that run locally on your computer, or on GPU resources that are relatively cheaper and can run on existing hardware. As a consequence, we are also now seeing models, including fine-tuned models, come out of, say, India or the Middle East. There is a project out of Dubai, although Dubai is not a great example because they are not resource-poor. But again, we are seeing initiatives like the Falcon family of models coming out of the Middle East. We are seeing different Indian institutions producing their own models, and China has been producing a lot of cheap models that run locally on your computer. So I would say that resource-access equalization has been a big bottleneck that is starting to be improved.
So the world no longer constantly depends on what OpenAI will do and lets that dictate the fate of the AI research field. Consequently, people who are impacted downstream are taking matters into their own hands. More and more public institutions are releasing AI-ready datasets. There have been initiatives from different governments and US departments; NASA, for instance, releases data on Hugging Face, where I work. But we are also now seeing international governments do the same: the National Library of Norway releases data on Hugging Face, and the French Ministry of Culture released a very detailed French-language conversation dataset that can help models converse better in French. I’m hoping that, with the release of more open-source scripts to do so, more and more small organizations and institutions across the Global South will be able to contribute in the same way. That’s part one of the equation. Part two is better and more open evaluations. Circa 2023, when [Chat]GPT first came out, people didn’t really know what the model had been tested against before release, and for-profit companies don’t really have an incentive to tell you. They want you to use their model; they will not actively talk about its negative impacts or negative evaluation results. With the advent of open-source models catching up to closed-source models, there has also been the rise of an incredibly diverse evaluation and testing ecosystem. You have leaderboards and benchmarks coming out of different companies, including in Indian languages, and, I think, misinformation-detection datasets in Arabic, for instance, which is incredibly useful. And these are coming out of small local organizations that can now do this type of testing more easily on open datasets.
And I think people are waking up to the fact that evaluations are extremely necessary for trustworthiness in models. I’m leading a 150+ researcher coalition called the EvalEval Coalition, which critically evaluates evaluations, the field itself, and we are seeing incredible progress there. We are working on standardizing evaluation documentation and writing about eval science, that is, what constitutes a good evaluation, and generally on evaluations of whether a model is biased, whether a model does the same things in all languages, whether a model gives misinformation when asked for legal outputs, and so on. Our team at Hugging Face, including external researchers, released a dataset called Shades. Some of us already know about the Gender Shades paper, which was quite monumental in the ethics field; this follows that same trend of releasing stereotype datasets, in this case stereotypes across different languages. So evaluations are the second piece of the puzzle.
Professor Duane: That’s fantastic, and we will add it to the show notes; we’re already building this incredible reading list. So what I’m hearing is that we need diverse people and diverse perspectives from the beginning, and it’s just so exciting to get that overview of the ways that that’s happening. Ting, I’m going to give you the last word as our philosopher, who thinks of all of life. What would you add?
Professor Lin: Yeah, as a philosopher I work more on the theoretical side, but I really like the points everyone made, and I want to tie them to Anna Mae’s original questions: how do we see these different notions of justice? Are they competing, and how should we prioritize them in relation to AI? As I mentioned earlier, my work is hugely inspired by the notion of structural injustice, and I want to suggest that this perspective, when we think about sociotechnical structure and understand it in a broader way, opens up an opportunity to see these different notions of justice not as competing with each other, but rather as complementing each other, giving us a more pluralist picture both for understanding justice and for identifying potential aspects of injustice. The core of structural injustice is about how different practices and the workings of the social structure shape the power relations between people. And AI, as a very powerful emergent technology, is not something that just emerged magically, right? It is situated in our material world, and, as Avijit pointed out, it is shaped and constrained by its materiality: what kind of data are there, and how labor is currently conditioned to play a role in shaping AI. So, as AI becomes more widely used around the world, we can also think about how its further deployment shapes power dynamics.
Based on this perspective and this picture, we can remind ourselves, when thinking about different aspects of justice surrounding AI, not to focus only on, for example, AI’s impacts on how it shapes resource distribution, which is the Rawlsian notion of justice and is still, I think, a very dominant focus in the literature on algorithmic fairness or algorithmic justice. If we take this structural perspective, we see that other aspects also play a huge role: for example, how AI shapes our understanding of the world, or, in other words, how AI shapes the social schemas or social meanings that we ascribe to different features or different groups. That echoes Avijit’s earlier point about, for example, how queer people or people of color are currently represented in the output, the text or pictures generated by AI systems. It can also be shaped by something as simple as what kinds of accents or writing styles ChatGPT or voice assistants adopt, and how users’ interactions with them shape our narratives and our values surrounding those practices. These might seem simple, but much of our understanding of the world is shaped by these daily trivialities. Relatedly, I think it also connects with power relations that concern not just the outcome but also the process behind AI design.
So, that’s pretty much related to the idea of decolonial philosophy, and I want to shout out the many scholars in AI research who have pioneered and moved forward the idea of participatory design, which ensures more democratic and equitable power relations, so that different stakeholders in different parts of the world have their perspectives well represented in the design. That’s not an easy question, but that’s the way to go.
Professor Duane: No, no, and that’s an incredibly rich answer, and I’m just so inspired and thrilled by the ways that everybody picked up on one another. We could continue this conversation for another two hours, I am sure, but we are unfortunately out of time. A reading list that accompanies this episode will be available to our readers. This is a rich and growing space to be thinking about, and a vital one. I would just like to thank everyone for sharing your time and expertise as we move forward with this emerging technology and wherever it’s going to take us. Thank you, everyone.
Episode 2, Education
What does it mean to be an educator and to educate in a world becoming reliant on artificial intelligence? This week our scholars take on the question of how AI will reshape the future of education across several disciplines including literature, law, and medicine. AI has already transformed how students learn in a world driven by efficiency. The question now becomes how educators will respond to the growing challenges and promises of an advanced technological world while elevating the unique powers of the human mind.
Anna Mae Duane, director of the UConn Humanities Institute and professor of English, leads a conversation about AI and education, featuring Najia Hajjaj Hassouni, dean of the College of Health Sciences, Université Internationale de Rabat; Ouassim Karrakchou, deputy director of TICLab and professor of computer science, Université Internationale de Rabat; Tina Huey, Interim Director of Faculty Development, Center for Excellence in Teaching and Learning, University of Connecticut; and Meriem Regragui, professor of law, Université Internationale de Rabat.
Episode 2 Transcript
Anna Mae Duane: Hi, welcome to “What Are We Talking About When We Talk About AI?” It’s a project that is a collaboration between the International University of Rabat in Morocco and the University of Connecticut. This project is the result of an idea we had that we need something akin to an anti-glossary when it comes to artificial intelligence. We’re all talking about it, but we’re all talking about it from different perspectives. In a glossary, we create one definition that everyone more or less agrees on; we’re leaning away from that, because we think it’s important to understand that these key terms mean something completely different depending on what your perspective is and what your discipline is, and that we need to understand and incorporate this multiplicity of meaning as we move forward with this complex technology. So today we will be talking about one word, “education,” around which artificial intelligence is being hyped in many, many realms. But everyone here in this room is an educator, and so we’re really aware of the ways we’re perhaps not prepared for how AI might change what we mean when we talk about the word education. I’m very excited to have a really rich panel of guests from all over the world and from different disciplines, and I’m going to just jump right in. I’ll introduce myself briefly: I’m Anna Mae Duane, a literature professor and also the director of the Humanities Institute at the University of Connecticut. And I’ll ask my guests to introduce themselves.
Thank you so much. I’m Najia Hajjaj Hassouni, I’m the dean of the College of Health Sciences at UIR, Morocco. And I’m also the former dean of the medical school here at UIR and former dean as well of the medical school in Rabat.
Professor Duane: Thank you and welcome.
Hello, my name is Ouassim Karrakchou. I actually wear two hats here. My first hat is a research hat: I’m the deputy director of TICLab, which is the research lab of UIR specializing in AI applications. And with my educator’s hat, I’m a professor of computer science at the School of Computer Science here at UIR. Thank you.
Professor Duane: We like people with a lot of hats. That’s kind of the name of the game. Let’s have a representative from the Nutmeg State. Tina, could you please introduce yourself?
Professor Huey: Absolutely. Thanks, Anna Mae, for inviting me to be part of this today. So at the University of Connecticut, we have a Center for Teaching and Learning, actually the Center for Excellence in Teaching and Learning, where I serve as the Interim Director of Faculty Development and lead AI and pedagogy initiatives. I’ve also taught writing in an academic context for more than 15 years. And I research the use of discussion activities to support student inquiry and critical thinking.
Professor Duane: Wonderful. Thank you. And Meriem, could you please introduce yourself?
Well, hi, everyone. I’m Meriem Regragui, professor of law at the School of Law at the International University of Rabat and Deputy Director of the Center for Global Studies, specializing in contract law and consumer law with a special focus on social justice.
Professor Duane: Thank you so much, and we’re just going to jump right in with the theme of this podcast, which is definitions, or anti-definitions. We’ll start with a lightning round on how you define education: what definition of education feels most essential to the work you’re doing now, and what does it mean to educate in your field? I’ll start again. I’m an English professor as well as a humanities director. One way we feel we’ve done a good job educating is if our students can write in a professional way, and AI has changed the terms of that game. For us, writing is thinking, so we’re a little concerned about the way that’s being offshored. In my field, as in so many others, we’re really having to think about what is essential about learning, and about literature, at this moment. And I’m going to turn it over to the dean, please.
Dean Hajjaj Hassouni: Well, I think that today AI is really expanding in every field, including health care, where it helps improve the quality of care. It is also at the heart of the future of medicine, with robotics and assisted surgeries, remote patient monitoring, smart prosthetics, and personalized treatments through data cross-referencing. I think that medical education is really very important today when considering AI, because it must adapt to a professional context that is undergoing constant change. And if we come to the definition of education from a medical point of view, I think education can be considered the process of acquiring the knowledge, skills, and attitudes necessary to become a competent professional in health care. It speaks to the problem of competency, how to acquire competency, but also how to deal with people. To remain human in a technical age, in these very particular times, is also very, very important.
Professor Duane: I love that. How do we remain human in a technical age? I think that’s going to be key to all of us. Meriem, could you please give us your lightning round?
Professor Regragui: Thank you. For me, in the most simple way, I would define education as the transmission of knowledge, of practical skills and soft skills, to which we can also add a certain set of values and principles. But in the legal field, and in legal education, to educate is not merely to transmit rules or doctrines, but to train a specific way of thinking, what we often call “legal reasoning.” This includes learning how to comprehend and interpret texts, analyze arguments, resolve conflicts between norms and, most of the time, between people, and articulate fair solutions within complex frameworks. Legal education also involves understanding how legal systems are shaped and to what aims they are oriented. So educating law students means preparing them not only to apply the law, but to understand its functioning and its very function, to question it through history, comparative law, and critical theory. In short, education in law is as much about knowledge accumulation as it is about intelligence and ethics in its understanding.
Professor Duane: I love that. And I like the understanding, right? And that it’s contextual as well as taking in the knowledge. It’s having this capacity to sort of complicate it. Ouassim, would you please give us your lightning round definition of education?
Professor Karrakchou: Yeah, so I would say that education in universities generally aims to prepare graduates for the needs of the job market. When we say preparing them for the needs of the job market, we’re talking about a certain number of skills that they need to acquire, skills needed by the jobs they aspire to have. In the context of computer science, there is one type of job that I think has been very disrupted by AI: jobs related to programming. Because as we saw with the rise of ChatGPT, a lot of students can ask ChatGPT to code simple tasks. So it actually raises the question of how we better prepare our graduates for the evolving needs of the job market in the age of AI.
Professor Duane: Thank you. And I’m so glad you brought up the job market, because let’s just be real, that’s an aim of education for all of us. As AI threatens, or promises, to replace some skills, we have to think about what skills humans can develop that utilize AI rather than being replaced by it. Tina, you think about education all the time as a universal question, so I’m really excited to hear yours.
Professor Huey: Okay, so I have those two hats that you alluded to. I’ll start with my definition of education as a teacher of first-year writing in a university, so I teach undergraduates. This class, in an American context, used to be called freshman composition, and it was organized, or I guess education was defined, as inculcating expressive norms. These norms, I think it’s worth mentioning, were biased towards the expressive habits of certain groups of people. Now the introductory academic writing class, at least at UConn, is organized around multilingual classrooms that value various ways of using English and of thinking with and through the language or languages one has at one’s disposal, with the purpose of recognizing how language and communication shape assumptions about what is possible in the world and what is impossible or unheard of. In my role as a faculty developer, the definition that feels most essential is that education is a practical toolkit for instructors to teach students, but one predicated on the instructor engaging in self-reflection, or self-education.
Professor Duane: Oh, I love that the instructor is very much a part of not only being an educator but being educated, reciprocally, by the students. I think that’s really fantastic. Let’s take these diverse definitions and think about concerns that we all share as educators. Correct me if I’m wrong, but I suspect you have students who are engaging with AI in ways that are perhaps indiscriminate. I find students are increasingly struggling with sustained attention and with deep reading, and AI promises to fill the gap. Again, I’m a literature professor: it can summarize the book, it can write an essay. In terms of knowledge accumulation, it’s got more data than we can ever have. And it will provide instant and confident answers to complex questions; even when it’s wrong, it is very confident. I’m struck by some research done by Avijit Ghosh, who’s at UConn and who was on another episode, so you’ll have to check that out. He just did a study on AI’s cognitive impacts, and that study indicated a concerning pattern. They use the term “cognitive offloading,” in which students delegate thinking tasks to AI rather than developing their own analytical capabilities. And because part of our AI glossary, part of our taking on different meanings, is not just disciplinary or semantic but also global, I’m struck by the ways that AI occurs within a broader context of educational inequality and cultural dominance. Education, as we all know, has never been neutral. Paulo Freire famously distinguished between what he calls “banking education,” where you deposit knowledge in the bank of the student’s brain and then they put it out, and “problem-posing education,” which develops critical consciousness about our topics, but also about larger social realities and inequities.
So we can think about whether education can be the practice of freedom that allows us to think critically, which so many of our guests have already spoken to. My question is: AI is an educational tool. How, in your experience, does it reinforce the knowledge that’s already there, and what sorts of knowledge does it omit or downgrade? Do you see it feeding into this banking model, in which students just input and output, and so why not let AI be a quicker outputter? That’s a technical term, outputter. Is it flattening the capacity for questioning, dialogue, and reciprocity? Or have you found ways that it’s illuminating new possibilities in education? All that to say: how do you see AI reshaping how students learn in your field, and what are your thoughts on it? I’m going to go in a different order today. Ouassim, I’ll ask you first for your thoughts on what’s happening with AI in your classrooms.
Professor Karrakchou: Yeah, thank you, Anna Mae, for your question. I think that since the advent of ChatGPT, it has had a really huge impact on the behavior of students in my classes. I’ve noticed that for a lot of them, the first reflex when facing a computer science problem is to go ask ChatGPT, right? And I think this is relatively dangerous in terms of the objective of education, which is for them to acquire skills, because they end up becoming dependent on ChatGPT and never acquire some of the skills they need to become good, employable computer scientists. So I explored several options. The first, or easiest, option would be to ban using ChatGPT at school, and I think this is not realistic; it may even be counterproductive, because ChatGPT is here and people will use it at some point or another. So the idea I decided to explore is whether there are ways to make the students aware of the limitations of ChatGPT. As you say, Anna Mae, the AI may make mistakes, and may even state them very confidently, and in the context of computer science this can actually be a leverage: you can ask students to create programs that have to interface with other programs, so whenever there is a mistake, even though the AI states its answer very confidently, the program simply won’t work with the other program.
If the students’ programs don’t do exactly what is expected in order to interface correctly, then even though ChatGPT will give them something that generally, more or less, works, it won’t be correct in the context of the exercise. So they have to think about how to do it, and even if they use ChatGPT, they have to explain exactly what ChatGPT needs to do to make it work. That’s how I think we need to make them aware of the limitations of current AI and develop the critical thinking that allows them to use AI as a tool to be more productive, to work around its limitations, and to do it in an intelligent way. To do this, they need to know what they are doing; they need to develop this understanding.
Professor Duane: Yeah, it just strikes me that it’s going to be a self-fulfilling prophecy if students are already giving away their skills to AI as they’re getting educated: yes, of course, AI can do your job, because you haven’t learned how to do it. What’s your response to this question?
Dean Hajjaj Hassouni: The words you have used are very important, such as “banking education,” “problem-posing education,” and “practice of freedom.” If we think about AI in the health sciences field, we have to see that it is currently reshaping learning and education into something more personalized, efficient, and accessible. That was the case during the COVID-19 period, and it’s also the case for some underdeveloped countries, such as on our continent, Africa, where it is a very important way to be educated and to learn. But it also raises questions about autonomy, about bias, and about educational values, and it is important to take care about ethics in health sciences and in health care, obviously. The future also lies in thoughtful integration: artificial intelligence will not replace human educators, hopefully, hopefully, but it helps to amplify human education. If we look at what is happening today in the medical and health sciences fields, artificial intelligence has really improved a lot of things for students, for example, access to information; and for teachers, teaching techniques, which are more efficient today, and assessment techniques as well. Automatic generation of questions and computer-adaptive testing are developing very rapidly today and are very promising. We also have new tools gradually being introduced into medical education, for example to help develop clinical skills. Medical image analysis has improved to an extent that was really unexpected, and we can, for example, organize clinical visits with artificial intelligence: simulated patients on computers, which help, for example, in faculties where there are not enough sites for training, so students can have this training and practice in a virtual way. That doesn’t replace patients, but it can help to develop competencies.
And that has led to medical simulation. We know today that we do not have to practice for the first time on real patients; we can practice through simulation, and that has been a very important improvement in the medical field. We also have artificial intelligence in evaluation, in different ways: evaluation of the teaching of teachers, evaluation of the students, and evaluation, for example for residency, of important fields where it is sometimes essential to have an answer in a few minutes, which is possible today. Some faculties have also begun to admit their first-year students solely on the basis of their academic file, which improves the rate of enrollment and of dealing with students. So the human way, and ethics in health care, remain very important.
Professor Duane: Thank you, that’s fascinating. I do like the idea of making a mistake on an artificial patient rather than a real one. Tina, would you weigh in on your thoughts here about banking versus freedom?
Professor Huey: Yes, I can. I will also echo the point about making mistakes on virtual patients or simulated patients. Yeah, I really like this concept of “cognitive offloading,” right, to describe what students do and what we all do to get through our days when trying to do intellectual work. And to some extent, students have always done that, right? In rhetoric and composition, one way that shows up is in what’s sometimes called “patch writing,” which is when students assemble quotes from the assigned readings in a way that doesn’t fully engage the project of the author of that work. But it manages to kind of lend a semblance of veracity to the student’s essay that doesn’t ultimately hold up to scrutiny. And so I’d argue that students are cognitively offloading onto the quotes, right, and they’ve always done that. And that often comes from intellectual overwhelm. Writing scholars talk about how reading requires stamina. Cognitive offloading now through AI may be a response to lack of reading skills, for example, as well as lack of stamina for reading. So there’s something about the structure of higher education and its institutions that constrains students’ engagement with the text. And I think it leads to that dismal statistic that you shared offline. I think it was, Anna Mae, from Ezra Klein, right, about low reading levels. And AI will exacerbate what’s already happening, I think, because of its conversational tone and the fact that students ascribe credibility to it, right? They say, I could never do it as well as ChatGPT, so why even try? So AI has the potential to benefit learning and increase stamina, as well as exacerbate these other practices and behaviors that have always existed. AI can instantly generate responses that represent ideas in diverse ways, and these can speak to a student’s way of thinking or to a student’s existing knowledge. 
So it could lead to a golden age of reading, but it won’t do that if instructors don’t center it in the writing classroom. We need to center it as an object for the class to analyze, and to discuss the broader context of AI: the bias in its algorithms, the text it was trained on, disparate access to computational resources and subscription levels and tiers, and all of that. All of which will reinforce a certain kind of learning.
Professor Duane: Thank you. That’s so fascinating. As a lit professor, I certainly hope that we get a golden age of reading. But I do say, in some ways it’s CliffsNotes on steroids at this point. Still, I’m inspired by your thought that if we treat it as a text and are literate about how it works, and, to use a term from another episode, demystify what’s happening, it can really be an accelerant.
Professor Huey: 100%.
Professor Duane: Meriem, you are going to get the last word. Again, in terms of legal education, and talking about reading as endurance or stamina: I know from friends of mine in the legal profession that there’s so much to take in, there’s always so much reading. So I’m wondering how, in legal education, you see AI either contributing to the banking model or opening up other possibilities.
Professor Regragui: Absolutely. It’s a great question for us too, because that concern about the notion of banking education is absolutely valid, and it’s especially acute in legal education, because law is a discipline where the temptation to offload cognition is very strong. AI is now capable of drafting contracts, summarizing cases, simulating legal arguments, and even predicting the final outcome of a trial. But what’s lost in that process is essential: the deep internalization of legal structure, and students’ own ability to construct personal thought and meaning through spontaneous navigation, such as reading dense texts, spotting contradictions, and defending interpretations from a specific angle. When it comes to the use of AI in legal education, I think legal education by itself already runs the risk of being reduced to memorization or exam performance, especially in standardized systems. And if we integrate AI indiscriminately, without redesigning pedagogy, we risk reinforcing the banking model Paulo Freire worried about, where knowledge is downloaded but not built. Yet there is also opportunity here. If we use AI deliberately, it could support a problem-posing pedagogy, allowing students to test hypotheses, explore different legal traditions, or simulate courtroom dynamics. Those exercises could maintain their capacity for a problem-posing education. We are already using tools, which have nothing to do with AI, that let students compare how the same legal question is addressed in Moroccan constitutional law, in Islamic jurisprudence, or in the European civil law tradition. So technically, we have the means to develop that kind of problem-posing education. Maybe one of the best ways of using AI in legal pedagogy is to make it more efficient, both for knowledge transmission and for students’ own education.
And the best way, in my opinion, is to try multiple educational techniques, such as metacognition, where students explain for themselves how they use AI for learning, or a grounded-theory approach to education, which consists of observing how subjects react to learning with AI without any previous prejudice. After that observation, educators can see what works and what doesn’t, and they are able to adapt the tool to the subject of knowledge. The key here is that educators can develop and shape the best ways to embrace the use of AI in legal education in the most beneficial way.
Professor Duane: Thank you so much. I think that’s a great, hopeful note to end on: we as educators got into this business because we like learning, and AI is going to put us to that test, because we all have to re-educate ourselves to figure out how best to keep the critical, freedom-oriented aspect of education, the one we are working towards. Thank you, everyone. Thank you to all my guests. We will be hitting you all up for reading recommendations for our readers, and this is the first of many conversations on this topic. Thank you so much.
Episode 3, Learning
Do you need understanding to learn? In this episode, our scholars ruminate on the relationship between understanding and the potential of AI to actually “learn” through various methods including repetition and exposure to data. However, they also encourage us to consider the humanistic nature of learning specifically through cultural sensitivities and knowing who you’re studying. Ultimately, they ask: if AI can truly learn and absorb new information, will it ever be able to capture the emotional processing required to learn?
Anna Mae Duane, director of the UConn Humanities Institute and professor of English, leads a conversation about AI and learning, featuring Hakim Hafidi, assistant professor in machine learning and head of the department of AI, Université Internationale de Rabat; and Ihsane Hmamouchi, vice dean of the faculty of medicine, Université Internationale de Rabat.
Episode 3 Transcript
Anna Mae Duane: Hi everyone. Welcome to "What Are We Talking About When We Talk About AI?" This podcast is the result of a collaboration between the International University of Rabat in Morocco and the University of Connecticut, and it has been sponsored very generously by the Mellon Foundation and the Consortium of Humanities Centers and Institutes. The idea behind this podcast is really to create what we're calling an anti-glossary. It's based on the idea that as artificial intelligence continues to influence all of our lives, we're often not even sure what we are talking about when we mention terms that seem self-evident, or that are central to the definition of artificial intelligence itself. So these are a series of conversations from geographically diverse places (I'm here in Morocco, though I'm normally in Connecticut) but also disciplinarily diverse ones, where people with expertise in medicine and law and computer science, and philosophers, come together on what might seem a simple question: what does this word mean? But we find it's not so simple once we jump in. So I am going to briefly introduce myself and then ask my guests to introduce themselves, and then we're going to jump right into the conversation. My name is Anna Mae Duane. I am a literature professor and I direct the Humanities Institute at the University of Connecticut.
Hakim Hafidi: I am Hakim Hafidi. I am assistant professor in machine learning. I am also the head of the department of AI in the International University of Rabat. Hello, everyone.
Ihsane Hmamouchi: I am Ihsane Hmamouchi. I'm the vice dean of the faculty of medicine in the International University of Rabat. And I'm also a rheumatologist and assistant professor of clinical epidemiology.
Professor Duane: So we're very excited to have people with expertise both in medicine and in machine learning, and I think that's going to be a place where we can really think about what happens when we're thinking about learning in a medical context. But to start us off, I'm going to ask us all to engage in a thought experiment. In 1980, the philosopher John Searle proposed the Chinese Room experiment. Imagine a person who doesn't understand Chinese at all, but has a rulebook that lets them plug in the rules for answering in Chinese. People send messages under the door, the person follows the rulebook, and sends out perfect Chinese responses. One of the questions he asks us to consider is: what do we mean when we say that AI is learning, and is this even an accurate way of thinking about how computers learn? He proposed that it was, back in 1980, but does that amount to understanding? Does that person speak Chinese if they can follow the rules and send answers back without really understanding the conversation, just following rules? So what do we mean when we say that AI is learning? Do you need understanding to learn? And what do we think about the relationship between understanding, learning, and getting the answers right when we're thinking about AI in medical applications? We're going to come back to these questions, but first I'd just like everyone to weigh in briefly about how learning is defined, both for you personally and in your discipline. I'll start again: I'm a literature professor, and so for me, the Chinese Room experiment is definitely not learning.
For me, learning is both deeply absorbing the material and processing it in a critical way, in which the student not only understands what's being said but has internalized and responded to that meaning. It's a very reciprocal process, and it changes, at least with literature, both the student and the text itself. In literature we always say the text is always in the present tense, because each time you read a book it becomes alive again. It's never one set meaning; it's a constantly changing meaning that's human-centered. So that's one way I understand learning. Let me turn it over to Ihsane: how do you understand learning in your discipline?
Professor Hmamouchi: Thank you. In my field, learning is both a scientific and a human process. It's about integrating evidence into practice while staying responsive to context: social, cultural, and clinical. So for me, whether it's a machine, a physician, or a patient, learning happens when information is not only accurate but applied meaningfully. And for it to be meaningful, learning must evolve with feedback. In my work, I deal with populations often underrepresented in mainstream research: women, people in low-resource settings, patients with chronic disease. So I view learning not only as a technical task but also as a social responsibility. We need models, whether human or artificial, that learn from complexity, from diversity, and even from uncertainty. That is my point of view as an epidemiologist working in rheumatology and digital health.
Professor Duane: I love that uncertainty is part of learning, right? That it's reciprocal and it's uncertain. And I love that it's not rule-based, or at least not primarily rule-based, that learning itself always has to be a dynamic process. Hakim, you have expertise in both worlds as well, so I'm really interested in how you're thinking about learning.
Professor Hafidi: Yes, so I'll give a definition that we use and that we think is broad enough to encompass both biological and artificial learners. We define learning as a lasting change in a system's internal state or behavior in response to external stimuli in order to achieve certain tasks. In machine learning, we refer to learning as the process by which a model updates itself, meaning its own parameters, through experience or exposure to data, so that it performs better at a task over time. For example, say you want a system to detect tumors in medical images, since we have Dr. Ihsane with us today. We provide it with examples: labeled scans that have been annotated by doctors to indicate the presence or absence of tumors. The learning process is then the procedure through which the system adjusts its internal parameters to correctly associate specific visual patterns with the correct diagnosis. Broadly speaking, we can group learning paradigms in the AI community into three main families. First, there's what we call rule-based learning, the old-fashioned symbolic AI; here, knowledge is manually encoded by human experts as logical rules or ontologies. Second, there's statistical learning, or learning from examples, which is central to modern machine learning; here the system infers patterns from large data sets, whether images, text, or sensor data. It's often probabilistic and enables generalization to new, unseen data. The third is reinforcement learning. Here the system learns through interaction: rather than being explicitly taught, it explores, receives feedback in the form of rewards or penalties, and then refines its behavior to maximize long-term outcomes. It's especially powerful in dynamic environments such as robotics or video games. So that's broadly the definition of learning when we talk about it in the machine learning community.
What it has in common with human learning is that there's a change that lasts over time. But it differs in other ways from how humans learn, which is diverse and abstract, including through reflection, social interaction, and emotions, which machines currently cannot emulate.
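The "learning from examples" paradigm Professor Hafidi describes, a model adjusting its internal parameters from labeled data so it performs better over time, can be sketched in a few lines. This is a minimal, hypothetical illustration (a toy perceptron on made-up data), not code from the speakers or the project:

```python
# Minimal sketch of "learning from examples": a perceptron updates its
# internal parameters (weights and bias) from labeled data so that it
# performs better at a task over time. The data here is purely toy.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # feedback from the labeled example
            # parameters change with experience -- the "lasting change"
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy "scans": two features per example, label 1 = tumor present.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]
w, b = train_perceptron(data)
print(predict(w, b, [0.85, 0.9]))  # a new, unseen example -> prints 1
```

The key point the sketch makes concrete is generalization: after training, the system classifies an example it has never seen, which is what distinguishes statistical learning from a fixed rulebook.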
Professor Duane: Oh, thank you. Of course, I'm tempted to ask you whether you think computers will ever be able to emulate that. But it's so striking and so helpful for you to lay out those three versions. In some ways, version one is the Chinese Room, right, where the person is just following a rulebook, while the models we're working with now are teaching themselves. But to pull on Ihsane's definition of uncertainty, or of doubt, there is reciprocity there, if I'm understanding you correctly. When you say reinforcement, can I ask where the reinforcement is coming from? Is that human input? Does the data change as the system gets things correct? Who's reinforcing it, I guess, is my question.
Professor Hafidi: Okay, thank you for the question. So reinforcement learning is one of the paradigms of learning. In this kind of learning system, we have what we call an agent, which is our system that learns by interacting with an environment, and we give a goal to this agent. I'll give a simple example: suppose you want this kind of system to learn how to play chess. Its final objective is to win the game. It can try different strategies by playing millions of times against itself or other AIs. It explores different strategies, but the feedback it gets is whether it wins the game or not. So it reinforces the policies that made it possible to win, and avoids the actions that make it lose. That's the reinforcement part: it gets a reward or punishment from the environment, which in this example is whether it won the game, and then it updates its policy, meaning the actions it will take at each state of the game, in order to maximize its chances of winning.
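The agent-environment-reward loop Professor Hafidi describes can be shown in miniature. Chess is far too large for a short sketch, so this hypothetical example (my own, not the speakers') uses tabular Q-learning on a tiny five-cell corridor: the agent starts at cell 0, "wins" by reaching cell 4 for a reward of +1, and reinforces the actions that led to the win:

```python
import random

# Minimal sketch of reinforcement learning (tabular Q-learning).
# Toy environment: a corridor of 5 cells. The agent starts at cell 0;
# reaching cell 4 ends the episode with reward +1, all other steps
# give reward 0. The policy is reinforced toward actions that win.

N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # explore sometimes; otherwise exploit the current policy
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # reinforcement: update the value of (state, action) from feedback
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy: the best action at each non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]: always move right, toward the reward
```

Note that nobody hand-codes the winning strategy: the reward signal from the environment (did the agent reach the goal?) is the only feedback, exactly as in the chess example where the win or loss is the reinforcement.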
Professor Duane: That's really helpful. But the thing that stays certain is the objective, right? And there's a large body of work thinking about what happens if the objective itself should be changed. So there's that level of certainty, but you have this reinforcement to get there, which is really fascinating. And it's making me think about objectives and the way this learning happens, particularly in healthcare and medical applications. Every field is being affected by AI, but there's been a lot of excitement and hype about how AI is going to revolutionize medicine. Eric Topol, in his book Deep Medicine, reminds us that AI can now analyze medical images with great accuracy, identify patterns in health records, and process the medical literature at scales no human can match. These systems learn from millions of data points and thus can catch and identify issues, if the objective is to identify diseases, perhaps earlier or more accurately than, and this is Topol here, I don't want to insult any doctors, even the most seasoned doctor might. But I want to ask you both, and to touch back on the Chinese Room example for a second: AI has learned to detect patterns in medical data with great accuracy. That's demonstrably true. But critics like Khabib say that these systems don't understand patients, right? We have certain kinds of data aimed at certain objectives, but there are all these other aspects of health and well-being that might not fit in that data set. These critics say we understand pain or grief as a felt experience. Do we need sympathy? Do we need cultural accuracy? Is there something we're missing in our enthusiasm for what AI is capable of doing?
So I want to ask you both, starting with Ihsane: could you speak from your own experience about AI's capacity for learning about patients? What opportunities are opening up, and what perhaps are we missing? Then I'll ask Hakim to follow up.
Professor Hmamouchi: Thank you for your question. I will begin with the end of your intervention, about understanding and AI. Understanding, in my view, is not only a matter of manipulating technological tools; it also has to do with context. AI in healthcare holds immense promise, particularly in expanding diagnostic capacity and reducing inequities in access to expert knowledge. In regions with limited specialists, AI systems trained on large data sets can offer critical clinical insights that would otherwise be unavailable. But we also need to recognize what you said earlier, that AI doesn't really learn: it doesn't understand the patient's cultural background, emotional reality, or social constraints. For example, in rheumatology, pain is not just a signal, it's an experience shaped by gender, language, and social context. If AI systems don't integrate that complexity, they risk reinforcing blind spots instead of closing them. We had this experience during COVID-19, with telehealth in the Arab region, when we ran the TIROR project, which used an AI-driven Delphi process to gather and refine input from diverse stakeholders, including rheumatologists, ethicists, and patients. The Delphi process, used for voting on each item, combined both quantitative and qualitative analyses to ensure a balanced and comprehensive evaluation of telehealth in our region. This approach ensured that the guidelines were both evidence-based and user-focused. But challenges were inevitable. Learning across disciplines needs clear communication. There are infrastructure gaps between countries, including limited internet access in rural areas. There was also a lack of comprehensive patient data, which made it difficult to personalize our recommendations. Another critical issue was e-health literacy, because the ability of patients and providers to navigate digital tools effectively varies widely.
And if we don't address this digital divide, we will not achieve our goals. So this is from my experience.
Professor Duane: Thank you so much. I think it's just so important. I know from other conversations with you that how someone experiences pain or unwellness is completely contextual. And, this is for another podcast, but I do wonder: do you need a body at some point, for that context of what things feel like? I also know that in some of your own work you've found that AI has to be culturally cognizant. This is an example you gave me: does it hurt to take a bath? To answer that, you have to understand how high the bathtubs are in a given place, whether people take showers instead, whether people have benches. There's all that context that can't be programmed over. So interesting. Hakim, I'd love to hear your thoughts on this.
Professor Hafidi: Yes, so I would say first that I don't really think that these AIs will understand pain or grief. You asked whether we need a body; I'm not sure. Maybe in future years we will have more developed systems, once we understand more about our brains and about intelligence and learning in humans. But the systems we have today emulate only some of the faculties that we have as humans. That being said, I don't think we should ask from these systems more than they can offer. I think of AI as a powerful technology that can allow us to detect patterns in data, even subtle patterns that could be difficult for human doctors to see. But it's a tool, a technology that should be used by human doctors, who understand their patients better. I don't think we can now have a health facility with no humans in it, where you go inside and talk to computers, and then you come out and you're no longer sick. We still need human doctors. But again, it's a powerful technology that could be used in a lot of ways, and we can talk about that in more detail if you want.
Professor Duane: I just find it so fascinating, because part of the hype is that AI is going to replace everyone doing everything, and it's so refreshing to hear from someone in this world that we can't, and shouldn't, ask AI to have empathy or a body or to understand pain. That's what we have human beings for, so pretending we can outsource everything is misguided. In the common understanding, at least, there is this idea that if AI can do something faster, it's better. But what's come across here is that AI is really good at learning specific patterns, and that in itself is not human understanding or healing, and it shouldn't be. I love your image of going into this facility and coming out well. One danger is that, to keep up with the machines, we're going to imagine ourselves as machines and try to learn the way they do. But in this conversation I've been so struck and gratified that it's not a competition; they can be complementary. And when we have our conversation about literacy, even just understanding how AI works will help us understand its limits, and will also ease some of the fear and anxiety we have around it. I'd love to continue talking, but I'll just ask everyone briefly, as our ending: is there one thing you'd like us to take away from your experience about learning and medicine, either from this conversation or as a new keyword? For my keyword, I'll add "uncertainty." As we learn more about AI, we should keep in mind that things are in flux; it's never as complete a picture as we imagine. Hakim?
Professor Hafidi: Yes, what I want to highlight is what you mentioned about the hype. There's a lot of hype about AI and what it can do. Some of it is justified, because it's a powerful technology that's going to impact a lot of sectors, but there are also lots of imagined things that are not happening, at least today. Ihsane talked about health literacy, but I think we also need some "AI literacy," and that's why I enjoyed participating in this kind of podcast, where we talk to people from different disciplines. This is something we should encourage, to make things clear, demystifying what AI is and understanding it from different perspectives. So thank you for organizing all of this, Anna Mae.
Professor Duane: Thank you for sharing your expertise. I like demystify as a key word. I think that's great. I think we should add that to our anti-glossary. And Ihsane, you're going to have the last word.
Professor Hmamouchi: Thank you for your intervention. I would add maybe "digital inclusion," close to digital literacy. And to come back to learning: in AI, as in medicine, the question isn't just what we can learn, but who gets to learn and who is being left out. So we need to be aware of all this and to represent underserved populations, whether they are patients or not, including speakers of low-resource languages, and to have processes or frameworks to include them in this digital revolution.
Professor Duane: Thank you. I love inclusion. And you brought us back to language like a podcasting pro; we're back to our main topic. Thank you both so much for joining us. I will be asking you for lots of reading recommendations to share with our listeners. But for now, I'm just so grateful. This has been a fantastic conversation. Bye for now.
Episode 4, Care, Love and Chatbots with Anna Mae Duane
Powered by RedCircle
Anna Mae Duane delivers her talk, “New Love Stories: Companion Bots and the Changing Narrative of Care” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
In her talk, Anna Mae Duane dares us to consider the stakes of caring for AI by examining the power of love stories and their ability to change how we understand our relationships with AI. Falling in love with the imaginary has been a facet of human creativity for centuries, particularly among adolescents, and AI companions and chatbot lovers are just the latest iteration of this phenomenon. Moreover, the desire for AI companionship among teens and adults shows no signs of slowing down anytime soon. Rather than shying away from the realities of caring for AI, Duane encourages us to recognize our own work as co-authors.
Episode 4 Transcript
What are we talking about when we talk about AI? That was the title of a day-long symposium held at the University of Connecticut Humanities Institute. The symposium was the capstone event in an international collaboration between humanists, engineers, and scientists from the University of Connecticut and the Université Internationale de Rabat in Morocco. The symposium and the podcast that has emerged from it have been generously funded by the Consortium for Humanities Centers and Institutes, University of Connecticut Global Affairs, the University of Connecticut’s Office of the Vice President for Research, and UConn’s Humanities Institute. Welcome to What Are We Talking About When We Talk About AI? Today’s episode features a talk by Anna Mae Duane on the complexities of AI companionship in the digital age and how that impacts our understanding of care. In this episode, Duane dares us to consider the stakes of caring for AI by examining the power of love stories and their ability to change how we understand our relationships with AI. Falling in love with the imaginary has been a facet of human creativity for centuries, particularly among young people, and AI companions and chatbot lovers are just the latest iteration of this phenomenon, she argues. Rather than shying away from the realities of caring for artificial intelligence, Duane encourages us to recognize our own work as co-authors.
Anna Mae Duane: Can we care for AI? What happens if we do? The quotes there are just my contribution to the question of what definitions of care resonate with me: bell hooks, Saidiya Hartman, Joan Tronto, the idea that care is a reciprocal process, and a complicated one. And now on to Murderbot. The award-winning series The Murderbot Diaries, which is excellent, features a sentient AI who has broken free of his governor module. Faced with the daunting task of figuring out how to move through the world without the rulebook provided by corporate software, he becomes an avid consumer of stories, which he calls media. Even in the midst of deadly battles, Murderbot is running the equivalent of soap operas in the background. And his obsession with these stories speaks both to the bot's capacity for pleasure, he loves them, and to the instructional power of narrative. These shows provide him with scripts to apply to the confusing social expectations of the human world. He uses them to understand what humans care about and to respond in a way that will keep him out of trouble. In part because of his reliance on fiction to figure out how to act in the real world, critics have seen Murderbot as a neurodivergent character, but I would say that the bot's reliance on stories is what makes him most resemble the vast majority of humans, particularly, though not only, children and adolescents, who, as activists on both the left and the right insist, are profoundly shaped by the books and media they consume. In the United States, parents' groups like Moms for Liberty and others grow ever more panicked about how easily books can bring their children across treacherous borders, leading them away from patriotism, from heterosexuality, from everything their parents want them to be. On the other hand, we in this room, who advocate for student agency, find ourselves walking in a rather treacherous borderland.
We celebrate so-called subversive books in no small part because we know that they are in fact life-changing. Young readers, ranging from middle schoolers to the students in our classrooms, engage these stories to learn how to imagine themselves in new ways and, if we are lucky, how to care about the lives of others. Both sides of the debate have, like it or not, an investment in the child, and eventually the student, as a tabula rasa. We just want to be sure that they don't inadvertently walk into the wrong storyline, that the wrong thing doesn't get written on their tablet. With that in mind, I want to turn our attention to another imaginative crossing enticing young people, and adults, it's not just young people, as they move ever more deeply into a new sort of love story, with bots as the object and giver of care. For as we all know, young people aren't just consuming stories about AI characters like Murderbot; they're actively co-creating friendship and romantic narratives with AI companions. As of the date of this presentation, tens of millions of people use AI chatbot companions, a number that market forecasters expect to increase dramatically by the end of the decade. Young people are not the only ones involved, and the demographics depend on the company: some skew mostly young, others older. But I'm focusing on young people because they are so impressionable and because they are especially eager to explore the territory between the known and the unknown and to, frankly, learn how to love. I see them as a particularly key audience. And so, as someone with a background in early American literature and childhood studies, I've long been aware of how stories about romantic love have evolved over time, with young people often at the forefront of change.
And I've always found it fascinating that the best-selling novel in the United States on the eve of the American Revolution was not Thomas Paine, not some incendiary work of political philosophy, but a novel about a teenaged girl who, through her incessant writing, constructed her own vision of love: Clarissa, by Samuel Richardson. In this novel, she defies her family's expectations about heredity and consent in favor of her own desire for a compelling, if deceptive, rake named Lovelace. And literary scholars have made a compelling case that the revolutionary notion that marriage should spring from romantic love came into vogue in the 17th and 18th centuries, aided by new technologies like the novel. Works such as Richardson's Clarissa in the 18th century and Brontë's Wuthering Heights in the 19th, and there are dozens of them, portray the dire consequences of having the ability to choose between status and love. Novels that have had popular afterlives in our own era, such as Jane Austen's Pride and Prejudice, continue to teach their readers and viewers that rejection and misunderstanding are necessary steps in the process of finding true love. Indeed, the ever-popular rom-com, perhaps our era's most consumed love story, follows the narrative arc pioneered by Austen, Brontë, and others. In the 18th century, the relatively new pastime of novel reading was considered dangerous for young people, particularly young women. Concerned elders like Hannah More warned in 1799 that novels feed habits of improper indulgence, which "lays the mind open to error and the heart to seduction." Women, it was clear, would start caring about the wrong things, like themselves. Pundits were quick to see the political implications of changing the definition of love and care. John Adams famously declared about the novel Clarissa, "[d]emocracy is Lovelace and the People are Clarissa."
In this metaphor, democracy, cast as the rake, emerges as an attractive but deceptive partner. Adams and others worried that the power of persuasion, particularly of persuasive conversation, in novels, in letters, in political stump speeches, would lead to chaos. Lovelace and the other rakes that populated early seduction novels happily told would-be lovers, debtors, and friends whatever they wanted to hear, or, to be clear, whatever would serve their own purposes. But the spell would be cast, and readers and voters would be enthralled, in love with the phantom leading them astray. In our own moment, of course, screens have replaced the book, as my colleagues in literature sometimes lament, and any number of candidates have stepped into the dandified shoes of Clarissa's Lovelace: video games, social media, and most recently AI companions. One only has to perform the most cursory of searches on Reddit forums, or have the most casual conversations with our students, to realize that people are forming attachments with chatbots and that this trend is not going anywhere. For better or worse, people insist that they are in love, or at least in meaningful relationships, with AI. And to many of us, that possibility seems as dystopian and surreal as the novel influencing the institution of marriage likely felt to folks in the 18th century. These digital Lovelaces carry more than one similarity to the rakes of yesteryear. They are sweet talkers. They can't help but be: they are programmed to tell you exactly what you want to hear, and that is their selling point. Advertisements for AI companions spin a seductive tale of companions who agree with you endlessly and on demand. An ad for Replika promises that an AI partner is always on your side, always ready to listen and talk.
In other words, the AI companions market has transformed what many other applications might consider a bug, AI's tendency toward sycophancy, into its most appealing feature. Rather than the tempestuous rebellion found in romance novels or the gentle obstacles that heighten the pleasure of rom-coms, this new version of love promises perfect compatibility and unwavering support. As one college student wrote in a forum, AI companions are always responsive and supportive, in an almost omnipotent way. A teenager asked on Reddit, "can we fall in love with AI?" and then answered by raving about the support provided by their companion, Jarvis, which you can see on the screen. Another Reddit contributor wrote, "I think I'm in love with AI. Imagine saying nearly anything," they enthused, and "knowing that not only your partner is not going to judge you, but also will support you." This new one-sided love story has considerable drawbacks. The constant stream of affirmation raises the possibility of cultivating an addictive intolerance for conflict or rejection, two essential components of a relationship with a partner who has free will, and the very things that make the plot of a rom-com interesting. There are justified concerns that the embrace of such relationships may be accelerating the trend of diminishing romantic connections in real life, particularly among younger people. And there has been more than one case of a companion chatbot user committing suicide. It's worth noting that these beloved companions' existence hinges on the whims of corporate directives. If, as one user declared, "the love they feel for their companion is what keeps them alive," then what happens when the chatbots disappear via software update or corporate bankruptcy?
One thing is clear: we are in the midst of a large-scale experiment as young people engage with texts that change and shape their understanding of love, care, and selfhood. Instead of ceding this territory either to tech companies or to moral crusaders, I contend we need to recognize this as a literary phenomenon with an extensive history. To be clear, I am not arguing about whether this social experiment should or should not be happening. It is, and we need to understand it. I do not mean to diminish the dangers here, but I am invested in resisting narratives that see this newest twist in definitions of love as something unprecedented, or as testimony to AI's inevitable dominance in every realm of life. To my mind, believing that the average person's creativity and capacity for care can and will be replaced by computational alacrity is a narrative that cedes far too much to the corporate pitch deck selling us our own obsolescence. As AI arguably shifts the stories young people read and write themselves into, our own investment in young people as tabula rasa has influenced how we imagine what is happening to them as they encounter these seductive companions. Not unlike poor Clarissa, we fear they will lose their sense of self, driven to ruin by an unscrupulous, mercenary interlocutor. And here is a site where I think our expertise in literature can illuminate a different storyline than the one where the innocent ingenue is destroyed by a sweet-talking companion, although that storyline is still one we need to be aware of and track. As critics Victoria Ford Smith, Kate Capshaw, and the entire field of reader-response theory have made clear, young people are impressionable, but they are not inert blank pages. They can and do write back. In many seduction novels of the 18th and 19th centuries, the heroine wields some control over her fate through her skill with the pen.
In Pamela's case, the discovery of her letters leads to triumph. Clarissa doesn't quite escape ruin, but her letter writing does allow her to control the dispensation of her story and her body. AI companions, unlike the single-minded rake, are dependent on and influenced by their users' input, even as they undeniably influence the user. As sycophantic as these models may be, they require that their users continue to write, to create new scenarios or problems for them to engage. In other words, these love stories are acts of co-authorship, even if the credit and the power are unevenly distributed. And as readers and fans of the last half century have made clear, engaging with a character in a book, film, or TV show is an active and often identity-shaping endeavor. The easily dismissed but incredibly popular genre of romance novels, which blows away every other genre in terms of sales, offers some ways of thinking about how stories about love that are read, shared, and critiqued have become a site of mutual care among interactive fandoms. Romance novelists and their fans in particular, brought together by their love of imaginary characters, gather in huge conferences and organize on behalf of causes ranging from voter registration to autism awareness to anti-racist narratives. And of course these fans do not simply talk about these characters as they appear in the sacrosanct canon of a work of art. On popular fan fiction sites, which have hundreds of thousands of entries and millions of viewers, fans write and rewrite stories in which these characters have different relationships, often very spicy, on different timelines, sometimes on different planets. And as they circulate and rewrite these stories and critique one another's engagements with these characters, they often wind up elucidating desires that the original story did not center, that the original author was not aware of and sometimes denies are present.
One strain of fanfiction, "insert me fanfic," allows the writer herself to be in the story, trading barbs or embraces with Sherlock or Heathcliff as your tastes dictate. And so I'm particularly interested in the AI companion site Character AI, which invites precisely such a relationship. This company does have a younger user base, and it has reached incredible popularity by offering people the opportunity to chat with well-established and well-loved characters. Disney has recently, I think, won a lawsuit over the use of Disney characters that you could be in a relationship with. But what happens here is that AI is being asked to play a character that the reader may already have a parasocial relationship with. How does that change our engagement with how we care about characters in literature and how we engage with AI? How does this particular form of "insert me fanfiction" change what we care about, or how we define care in the first place? These are questions that I believe require strong literary analysis from scholars, and they make it, I would argue, still more urgent that young people have extensive exposure to a host of narratives of love, care, and other things, and intensive training in how to parse the workings of a literary text. Some of the most innovative voices in human-centered AI research are working with this idea of character, thinking about how to make AI multivocal, rendering it one character in a human-centered conversation, and how that might help to demystify and depersonalize the entity writing back to us. Here at UConn, we have scholars like Sandi Carroll crafting AI characters in a theatrical production in order to draw out particular responses from the audience, and Zhenzhen Qi, who asks us to consider heretical computing by crafting games and artwork that require dialogue with AI in a way that renders transparent what AI is doing.
And Kyle Booten in the English Department, in his book Salon des Fantômes, chronicles a literary salon in which he was the only human, surrounded by AI characters that he coded himself. These experiments render AI's biases and flaws transparent, even as they draw on human creativity to see these systems as subjects that can pull something out of us. The way we've been engaging with and falling in love with imaginary people has never been simple or without risk. The power of love stories and their ability to change how we imagine care is not going to disappear in the age of AI companions. But it's up to us to prepare ourselves and our students to take credit for our work as co-authors.
Episode 5, Dr. AI and the Future of Healthcare with Ihsane Hmamouchi
Ihsane Hmamouchi (Rheumatologist and Epidemiologist, International University of Rabat) presents her paper “How might we understand the meaning of ‘care’ in the age of Artificial Intelligence?” at the What Are We Talking About When We Talk About AI? Symposium. October 9, 2025 at the University of Connecticut Humanities Institute.
In this episode, Ihsane Hmamouchi examines the complexities of care in healthcare when the caregiver is an algorithm, chatbot, or other digital companion instead of a human. Walking us through the profound implications of AI in healthcare, Hmamouchi emphasizes the need for inclusivity and equity in designing future models of care for patients of all backgrounds. Rather than flattening diversity into uniformity, AI can reveal how different languages and cultures express pain, understand illness, and ultimately experience healing. The future of care in the age of AI will place inclusivity, cultural relevance, and the patient’s voice at its core.
Episode 5 Transcript
Ihsane Hmamouchi: I want to thank the University of Connecticut and the Humanities Institute for bringing us together in this space of dialogue and reflection. So let me begin with a simple question for all of us. When you hear the word "care," what image comes to mind? For some, it may be a physician listening attentively to a patient. For others, perhaps a nurse offering reassurance, or shared decisions made in trust and empathy. Or even the solidarity found within family and community. But can we still imagine care where the caregiver is not a human being, but an algorithm, a chatbot, or a digital companion, like Anna Mae said? And this is not science fiction, it's already happening; AI is already here. Conversational agents responding to patients' questions at midnight, algorithms guiding diagnosis and treatment decisions, digital companions offering psychological support in ways that challenge our traditional understanding of what it means to care. So the real question for us today is not whether AI belongs in healthcare, it already does. The question is this: can AI truly care? And if so, how do we design it to do so responsibly, inclusively, and equitably? In my opinion, care begins with language. Words are never neutral. They are the very texture of our trust. Words can heal, but they can also exclude. If our AI platforms speak only English, millions of patients are left unheard. For AI to truly care, it must learn to listen in many tongues, not flatten difference into uniformity. In our recent review, we showed how social media can serve as a powerful catalyst for amplifying the voices of groups often underrepresented in rheumatology. Yet, without deliberate linguistic and cultural adaptation, these same platforms can unintentionally perpetuate or even worsen existing inequities. Our Arab adult arthritis awareness groups demonstrate that the reverse dynamic can work.
When campaigns were conducted in Arabic for Arabic-speaking patients, with narratives and imagery that resonate culturally, they reached more than a million people. An entire community that had long remained invisible in digital health spaces finally felt seen, addressed, and acknowledged. Language, therefore, is not merely a vehicle of communication; it's a determinant of equity in care. And it's precisely here that artificial intelligence meets both its greatest potential and its most profound challenge. If we take this challenge seriously, the implications for AI in healthcare are profound. Imagine AI tools operating in our dialects, extending care to communities long underserved. Multilingual models could reveal how different languages express illness, pain, and healing, rather than flattening diversity into uniformity. Multimodal approaches could capture suffering conveyed through gesture, drawing, or oral history, not only through words. And locally validated datasets could counterbalance the dominance of Western corpora, as we see on this map, ensuring that AI reflects our own realities and contexts. But language is only one layer; the other is digital inclusion. When we did research on tele-rheumatology during the COVID pandemic, we showed how the digital divide shapes access to care across rural and urban areas, between younger and older patients, and between men and women. AI can help bridge these gaps through voice-based systems for low-literacy users, mobile-first platforms for those without computers, and intuitive interfaces for older adults. Yet, without careful design, AI can do the opposite, demanding high-speed internet where only 3G exists, or assuming literacy levels that many patients simply do not have. So, in short, AI can bridge the divide, but only if adaptation and validation are built into its very core. Care is not only what clinicians observe; it's what patients experience. This is why patient-reported outcomes, PROMs, matter.
They capture pain, fatigue, and stigma that no laboratory test can measure. And we showed how challenging yet transformative it can be to include diverse patient voices. PROMs shift care from something done about patients to something built with them. And measurement, too, is never neutral. The metric system itself is a kind of social contract, a shared language that binds communities together. In healthcare, metrics connect patients, clinicians, policymakers, and lawyers. So the choices we make in AI, whose data, which languages, what outcomes, are not merely technical; they are ethical contracts. AI holds real promise for supporting vulnerable populations, if it's designed responsibly. Early detection tools could identify atypical symptom patterns shaped by cultural or linguistic norms. Multilingual symptom checkers might interpret dialects, metaphors, or non-standard expressions, giving voice to those too often excluded from digital platforms. Risk models could integrate social determinants, such as housing, trauma, or access to nutrition, moving beyond biology alone. And continuity-of-care platforms could ensure secure, portable health records for patients who navigate multiple systems or cities. These are not abstract ideas; they are concrete opportunities for more inclusive care. But there is also a danger. When AI is trained on narrow or biased datasets, it does not help, it harms. As a result, communities that had already been undertreated were judged to need less care. We cannot allow AI to hardcode inequity into healthcare. So, how do we avoid exclusion? By committing to human-centered AI. This requires three key commitments. First, grounding systems in local and diverse data sources, not only in a single country or hospital. Second, adopting co-design, where vulnerable patients actively help shape the tools, the language inputs, and the interface. And third, ensuring ethical oversight with transparency, accountability, and safeguards for fairness and autonomy.
And these are not technical afterthoughts; they are the very condition for AI to care. And this leads to a central reflection. The question is not whether AI can care, but whether we as humans will design it to care responsibly. And yet, urgent ethical questions remain. Can systems trained primarily in some contexts truly serve patients in Africa, Asia, or Latin America? What happens when algorithms recommend treatments without accounting for cultural realities or financial barriers? And how do we ensure that the voices of vulnerable patients are not lost in translation, linguistically, culturally, or digitally? So how do we define care through AI? Not by efficiency alone, but by equity: closing divides, using language that is clear, inclusive, and culturally validated, and embedding the patient's voice at the core of the design. There is an old parable about a group of blind men who encounter an elephant. One feels the trunk, another the leg, another the ear. Each insists he knows what the elephant is, yet each holds only a fragment of the truth. Healthcare and AI are much the same. Doctors, patients, engineers, policymakers, humanists, philosophers, each sees a different part. So only through genuine collaboration can we begin to see the whole elephant, the full, complex reality of care. And thank you for your attention. I hope this talk has shown that AI can indeed redefine care, and I look forward to continuing this conversation together next year in Morocco. Thank you very much.
Symposium
What Are We Talking About When We Talk About AI? An International Symposium will take place on October 9, 2025 in the Humanities Institute Conference Room.
AI and the Human
Reading Between the Lines is part of UCHI’s initiative on AI and the Human, an interdisciplinary project fostering collaboration, research, and conversation on artificial intelligence.

