A Glossary for Human-Centered AI

Reading Between the Lines

“Reading Between the Lines: An Interdisciplinary Glossary for Human-Centered AI” brings an international cohort of humanists, engineers, and scientists from UConn and the Université Internationale de Rabat into conversation through an in-person symposium and a series of podcast dialogues illuminating how the definitions of terms associated with Artificial Intelligence vary widely by discipline, location, and language. The symposium and the podcasts will be structured to address the challenges that language and translation (both conceptual and linguistic) pose to collaboration on AI research.

This project is funded by The Consortium of Humanities Centers and Institutes’ Human Craft in the Age of Digital Technologies Initiative.

We often use the same words—like ‘learning’ or ‘intelligence’—when we are talking about AI, but what those words mean depends on our own academic and cultural background and the assumptions that accompany them. The humanities bring crucial insights about language and meaning that can help us engage these gaps in constructive ways.

Anna Mae Duane, Director, UCHI

Podcast

“Reading Between the Lines” will explore questions of language and AI through four podcast episodes, each on a specific theme like education or justice.

Episode 1, Justice

Can artificial intelligence one day help to mitigate systemic inequality? As part of UConn's Human-Centered AI Initiative, we've brought together a roundtable of scholars from Connecticut to Morocco to define “justice” within their disciplines. It's only by understanding what we mean by the word justice that we can begin to work together to deploy AI to create a more just and equitable world.

Anna Mae Duane, director of the UConn Humanities Institute and professor of English, leads a conversation about AI and justice, featuring Meriem Regragui, professor of law, Université Internationale de Rabat; Ting-An Lin, assistant professor of philosophy, University of Connecticut; and Avijit Ghosh, applied policy researcher at Hugging Face and associate researcher at the Riot Lab, University of Connecticut.

Episode 1 Transcript

Welcome, everyone, to our first podcast in a series dedicated to reading between the lines: “What are we talking about when we talk about artificial intelligence?” This series of podcasts is a collaboration between the Université Internationale de Rabat in Morocco and the University of Connecticut here in the U.S. It is the result of a Mellon-funded CHCI grant, through which we’re putting the humanities and the sciences and law and medicine in the same room to think about what these terms mean when we are thinking about AI. In some ways, this is an anti-glossary. If a glossary is a place where we all decide on one definition and go from there, we are much more interested in leaning into the fact that we all have different definitions of terms that people feel are self-explanatory. And if we’re going to really have an expansive, inclusive, liberatory, emancipatory AI, which is all of our goals, we need to start with how we talk about it. So I will introduce myself. I’m Anna Mae Duane, Director of the Humanities Institute at the University of Connecticut. And I’m going to ask our esteemed guests to introduce themselves briefly. Meriem, would you?

Thank you, Anna Mae. I’m Meriem Regragui, professor of law at the International University of Rabat, at the school of law, and deputy director of the school for global studies. I specialize in contract and consumer law, with a specific focus on social justice.

Hello everyone, my name is Ting-An Lin. I’m an assistant professor of philosophy at the University of Connecticut. Before joining UConn, I was an interdisciplinary ethics postdoctoral fellow at Stanford. My specialties are in ethics, social and political philosophy, and feminist philosophy, with a special focus on how to address the unfair constraints that sociotechnical structures impose on different groups of people.

Hi everyone, I’m Avijit Ghosh. I’m an applied policy researcher at Hugging Face, where I focus on AI evaluations and US AI policy. And I’m also an associate researcher at the Riot Lab at UConn, where I work with Dr. Shiri Dori-Hacohen on a few different topics, ranging from morally aligning AI models to information access on social media and how that impacts fairness and ethics. Before coming to UConn, I was a PhD student at Northeastern, where my focus was on how fair models break down in the real world because of a lack of information. So this is a very interesting topic to me.

Professor Duane: I should say our topic today is justice. Again, a word that has its own reverberations in law and computer science. I’m a literature professor. So I’m going to start just by jumping in and doing the definitional work. And so I’ll just ask everyone in a minute or two, how do you define justice in the work that you do and the discipline that you’re in? And I’ll just start as someone who has taught African-American literature and works in disability studies. “Justice” often has this connotation of repair, that you need sort of this historical background to bring to the present day to truly achieve justice and fairness, because we need to understand the injustice of the past to really engage with justice right now. And I’m just going to go around the people I’m seeing. So, Meriem, would you?

Professor Regragui: I think one of the major challenges for us in defining “justice” lies in the fact that it must be grounded in a moral or an ethical framework, which can vary significantly across cultures and belief systems. Yet despite these differences, a common thread can emerge: justice is commonly understood as the fundamental objective of giving each person what they deserve; that is basically the way I see it. On the other hand, we can also see an institutional dimension, that is, the way states enforce and organize the rights and obligations of their citizens. And there is perhaps another dimension, that of protection, or social justice more precisely, as we may see later in how John Rawls saw it. But I like the definition offered by Professor Yuko Itatsu, a professor at the University of Tokyo, who describes social justice as a mindset driven by the assumption that all people, regardless of their upbringing or their circumstances, should be able to live with human dignity. This emphasis on dignity is something I find really crucial.

Professor Duane: I love that, that dignity is this key word that’s emerging. Um, Ting, would you mind weighing in as a philosopher?

Professor Lin: Sure. In the discipline of philosophy, which is my main area, the discussions surrounding justice used to be dominated by an approach that proposes theories of justice based on more ideal scenarios. Over the past few decades, however, feminist scholars and critical race theorists have criticized that tendency. The suggestion is that philosophers, philosophizing from their armchairs, have spent so much time debating with each other about what a more ideal account of justice would look like, but our world is not like that; our world is non-ideal. So instead, they suggest that we prioritize focusing on and investigating real-world issues of injustice. People might disagree about what justice looks like, but injustice is easier to find and identify, and we can agree that those are issues we should pay attention to. My research also takes this kind of perspective, and one of the notions of injustice that I engage with more deeply is the idea of structural injustice, which was coined and popularized by Iris Marion Young, a feminist scholar and political theorist. According to Young, structural injustice is a type of moral wrong imposed by social structures, or, as we might describe them now, sociotechnical structures. Structural injustice exists when the impacts of the sociotechnical structure systemically impose undeserved burdens (and we can understand burdens very broadly, including oppression, violence, or exploitation) on some groups of people, while at the same time conferring unearned benefits or privileges on others. So according to this view of structural injustice, the goal is to address those unfair constraints that social structures impose on people and then, through collective action, to make things less unjust in the future.

Professor Duane: I love it. Avijit, as our resident computer expert, tell us a little bit about how you see justice in your work.

Dr. Ghosh: Yeah, I’m glad you specifically said that, because in the spirit of this being an anti-glossary, I would like to point out that legal definitions of justice can vary widely from everyone’s moral definitions of justice. For me, social justice equates to what we understand to be principles of equity: everyone should have the same opportunities as much as possible, and society should make sure that we have similar outcomes if we put in similar efforts. This also comes from the fact that I grew up as a queer person in India, so I didn’t, and still do not, have the same opportunities available as, you know, heterosexual people do in terms of marrying or having kids and property rights and whatnot. But that is very different from technically applying the concepts of equality in machine learning. And there are certain things that you can’t even do. For example, it would be considered morally and ethically wrong to use machine learning to identify whether somebody is queer from images of their face. That came up in a very controversial Stanford study in which somebody tried to build a “gaydar.” So even if you start off with a goal of scanning people’s faces to make sure that queer people in your study are getting the same outcomes as straight people, in that process you might be violating other aspects of their existence, like inadvertently outing them to their workplace, which might have other impacts. So applying certain notions of social justice might not even be possible or feasible without self-identification of that sort, which is a problem I ran up against a lot during my PhD. For example, I tried to measure whether fair ranking was actually working in practice using LinkedIn’s algorithm. And I realized that, even though the algorithm’s mathematical definition made sure that people from different subgroups, like different races and genders, would be equalized in terms of their relative ranking in the end, that kind of re-ranking was not even possible if people did not self-identify their race or gender. So my understanding is that companies were using people’s faces or their last names or their zip codes to infer demographic information about them, which was not entirely correct and was misidentifying people. And finally, we come to the different definitions of equality, or what different notions of justice look like in different judicial systems, which then also influence the technical definitions. For instance, in the US you can’t do something called “race-norming,” because the Supreme Court has disallowed it: in the process of ensuring fair outcomes, you cannot treat people differentially. But in other jurisdictions, like India, quotas are allowed for minorities, for women, for people from the scheduled castes and scheduled tribes, and that is in the constitution. Therefore, any sort of resource-apportioning algorithm would want to take that definition in the law into account. And so writing a fair algorithm that supports Indian law looks very different from one that follows US norms. It’s very different based on where you are and what your local norms are.
I personally think that algorithms should not be, you know, copy-pasted across borders. They should be context-dependent, and that context should also include the laws of the country that you’re operating in.
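To make the fair-ranking discussion above concrete, here is a minimal sketch of demographic-parity re-ranking in Python. It is purely illustrative, not LinkedIn’s actual algorithm: the Candidate structure, the fair_rerank function, and all names, scores, and group labels are hypothetical. Note that the whole scheme depends on the self-identified group field; as Dr. Ghosh points out, when that label is missing, or is inferred from faces, surnames, or zip codes, the guarantee breaks down.

```python
# Illustrative sketch only -- not LinkedIn's algorithm. All names, scores,
# and group labels are hypothetical, and `group` is assumed self-identified.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: float  # relevance score from the base ranker
    group: str    # self-identified demographic group

def fair_rerank(candidates, targets):
    """Greedily build a ranking whose every prefix keeps each group's
    share close to the proportions given in `targets` (summing to 1)."""
    # Per-group pools, each sorted by relevance, best first.
    pools = {g: sorted((c for c in candidates if c.group == g),
                       key=lambda c: c.score, reverse=True)
             for g in targets}
    ranking, counts = [], {g: 0 for g in targets}
    while any(pools.values()):
        k = len(ranking) + 1
        # Pick the non-exhausted group currently furthest below its target.
        g = min((g for g in targets if pools[g]),
                key=lambda g: counts[g] / k - targets[g])
        ranking.append(pools[g].pop(0))
        counts[g] += 1
    return ranking

candidates = [
    Candidate("c1", 0.95, "A"), Candidate("c2", 0.90, "A"),
    Candidate("c3", 0.88, "A"), Candidate("c4", 0.70, "B"),
    Candidate("c5", 0.65, "B"),
]
for rank, c in enumerate(fair_rerank(candidates, {"A": 0.5, "B": 0.5}), 1):
    print(rank, c.name, c.group)  # interleaves A and B despite A's higher scores
```

If the group field is missing or mislabeled for even a few candidates, the prefix shares drift, which is exactly the self-identification problem described above.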

Professor Duane: Oh, that’s so important, and you just led us to a beautiful segue, because another part of our anti-glossary here is that we have folks from all over the world; this is a collaboration. I’m at the University of Connecticut, but I’m sitting here in Morocco, and the legal structures interpret justice completely differently. As we add on the social, cultural, and individual aspects, we have completely different laws. And to bring this to AI, as you already spoke so eloquently about, Avijit: so often AI, or at least the way we think about it, is a largely Western construction, built on Western ideas of justice itself. And just to pull together what folks were saying, I think there’s a Western bias in AI as we’re trying to develop regulations and build new platforms. Ting spoke to this and Meriem spoke to this, and Avijit, your point about this idea that if we treat everyone the same, if we pretend everyone is equal, then that becomes justice. Certainly with AI there’s this tendency to flatten folks, right? There’s this data, and what’s statistically most likely is what’s going to be the output. And your example was so perfect, Avijit: how do we take in the diversity, and the fact that what is justice for one person is not justice for another person, and move that into our programming? And Meriem, you mentioned John Rawls, and just to tie some of these things together, he talks about the “veil of ignorance”: imagine as if you have no identity, which already starts us off, I think, at a disadvantage. Or imagine yourself in several different identities and ask what you would want for that person. It’s sort of this act of empathy, but you’re also imagining what that person would want, of course always from your own position, possibly of privilege. And he suggests that if we don’t have any position, then we will choose justly for everyone. What I’m really interested in thinking about is what we do with this idea when it comes to artificial intelligence, in terms of property rights, in terms of social justice. Because I do think, at least on the technical side, there can be this presumption that we’re doing things equally, that everyone is being coded equally, that it’s the most likely outcome: if we have the majority thinking one way, or most likely to react one way, then that’s how AI is coded. And then these gestures toward diversity can wind up actually outing people or making them an anomaly. So I wanted to think about AI from two different ideas of justice as well. Martha Nussbaum thinks about the social contract as basically flawed because it imagines two equal people, right? You make a contract because I can hurt you, you can hurt me, I have something you want, you have something I want. But of course that’s not how society functions. And as we’re thinking about justice and artificial intelligence and large language models, how do we imagine the ways in which we’re not equal, in which that social contract doesn’t work, as a way to think about how we program, how we elicit? And I’m wondering particularly, I am sitting here in Morocco, and you’ve already spoken about India, Avijit.
What happens when we’re dealing in a global context in which people do not have the same aspects of access, literacy, and then, of course, the default that so much of artificial intelligence is thought about and programmed, and the data comes from English-speaking societies. So how do we sort of accommodate this, which everyone already kind of spoke to really beautifully, this idea of we need to realize that there isn’t equality, that we’re not all data bits that can be, you know, interchanged while sort of avoiding what Avijit was talking about, in which people are being sort of either sidelined, sort of an exception case to this generalized intelligence. There’s this dream that there’s going to be this universal intelligence that’s going to accommodate all of human nature equally and fairly. We have our doubts, and so I want to just throw one more key word into the mix and then sort of ask your thoughts about it. “Decolonial AI justice,” which challenges, you know, both Rawls and Nussbaum in some ways. And I’m going to hand this over to the experts in the middle. My understanding of it is that AI systems are no different, unless we’re very careful about it, than other systems in ways that are going to just reinstitute, perpetuate historical imbalances that were rooted in colonialism. And so one argument is that we need to de-center Western ideals of justice, Western ideals of fairness and sameness, and include other philosophies, other epistemologies. From the very beginning, our colleague, Shiri Dori-Hacohen, talks about expansive AI, that we need different perspectives, not just disciplinarily, but globally from the very beginning as we’re programming AI. So I’m wondering for each of you, how in your own work, are there hopeful stories of ways in which you see really exciting moments in which decolonial justice is operating, or trying to be implemented, or any challenges you see with this sort of more expansive vision of justice? I was just so struck by the way when you were all defining your own visions of justice, everyone had this, we think it’s one thing, we think it’s equality, but if you really want justice, you have to acknowledge that we’re not starting at equal moments, that we are not the same, and that actually the same result is going to be injustice for some people. So I’m wondering in your own work, how are you trying to implement this version of justice? And I’ll start with Meriem again.

Professor Regragui: Okay, well, since the term justice is so polysemous, in my research I often try to identify its presence across a wide spectrum of legal phenomena. I began by exploring how justice manifests in contract law, primarily through the notion of contractual balance, which refers to fairness between the contracting parties in terms of rights and obligations. And I discovered that in certain legal systems, notably the civil law tradition rooted in the Romano-Germanic model, balance is not considered a condition for the validity of a contract. In contrast, common law systems, especially US case law, developed a concept quite foreign to the civil tradition: the doctrine of unconscionable contracts. I found it very interesting because this doctrine allows judges to intervene when a contract is significantly unbalanced or contains an abusive clause, in order to restore a sense of fairness. In our civil law system, however, judges remain bound by the letter of the law and the sanctity of the parties’ agreement. They are not empowered to question the substantive fairness of the contract, and contractual freedom is treated as a foundational principle: what two parties agree to, only they can undo. So I found in common law an instrument that we presently lack, a legal mechanism for ensuring contractual justice. But contractual justice cannot be reduced to mere balance between contracting parties narrowly defined. In reality, contracts produce externalities, both positive and negative, and some agreements, such as employment contracts, rental agreements, or medical consent forms, can have life-altering consequences. These are not just economic tools but social institutions. Their impact extends beyond the two parties involved and disproportionately affects the more vulnerable party. In this sense, certain contracts are key pillars of socioeconomic stability or, conversely, vectors of inequality. This is precisely where contractual justice meets social justice. Now, what does this have to do with AI? In my view, recent technological advances offer us an opportunity to rethink how fairness is built into contracts upstream. I believe AI could be used to design more equitable contracts by default, whether in a B2C or B2B context. If AI can help us detect imbalances, flag harmful clauses, and simulate social impact before a contract is signed, we could mitigate systemic injustice at its source. This could profoundly affect the socioeconomic status of vulnerable populations and move us closer to a more socially and economically just society. So in response to the question of what other definitions of justice we should consider, I would suggest that contractual justice, that is, ensuring that the performances exchanged between parties are fair and reasonable, must be part of our discussion. Moreover, we must also acknowledge its direct and indirect relationship with broader social justice, especially when designing AI tools that increasingly influence our legal and economic environments.
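As a toy illustration of the clause-flagging idea Professor Regragui describes, here is a deliberately naive Python sketch. The regex patterns, the stated reasons, and the sample clauses are all invented, and a keyword heuristic stands in for whatever model a real system would use; nothing here represents an actual legal-tech tool.

```python
# Naive, hypothetical sketch of "flag harmful clauses" -- a keyword
# heuristic standing in for a real model; patterns and sample are invented.
import re

UNBALANCED_PATTERNS = [
    (r"sole discretion", "grants one party unilateral decision power"),
    (r"waives? (any|all) right", "strips a party of legal recourse"),
    (r"at any time without notice", "permits unilateral change or termination"),
    (r"non-?refundable", "shifts all financial risk to one party"),
]

def flag_clauses(contract_text: str):
    """Return (clause, reason) pairs for sentences matching any pattern."""
    findings = []
    for clause in contract_text.split("."):
        for pattern, reason in UNBALANCED_PATTERNS:
            if re.search(pattern, clause, re.IGNORECASE):
                findings.append((clause.strip(), reason))
    return findings

sample = ("The Provider may modify fees at any time without notice. "
          "The Customer waives any right to a jury trial.")
for clause, reason in flag_clauses(sample):
    print(f"- {clause!r}: {reason}")
```

A production system would need far more than keywords, and, as the next exchange notes, the choice of flags is itself a place where bias can enter.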

Professor Duane: Thank you so much. If I could just ask one follow up question. I do wonder how AI can spot these vulnerabilities in terms of who’s programming it and sort of what, how do we sort of identify what flags to look for? Because, I mean Avijit’s example is like, that could go wrong in many ways, right, that you’re flagging that this is unbalanced based on you know identifiers that maybe aren’t relevant or maybe actually do harm. So I’m wondering if you’ve heard examples of that as a challenge or ways people look to deal with it.

Professor Regragui: Absolutely. Absolutely, it’s a great challenge, because it depends on how we program the AI and how we use it. If we remain attached to our old notion of fairness, our old notion of contractual justice, nothing will change, even as AI evolves, even as we experience this great development of technology. But on the contrary, if we change the way we program AI so that it aims toward this social justice and this contractual justice, maybe we will have the capacity to produce fairer contracts between, let’s say, employers and employees, between medical staff and patients, and so on. So it has to be thought through beforehand, because that’s the only way I see that we can provide more balanced and fair contracts in the market at large, and specifically between private parties.

Professor Duane: Yes. Oh, thank you. I know both Ting and Avijit have thought about exactly these issues. Whoever wants to jump in first: both your ideas about de-centering Western justice in AI, and also your response to Meriem’s really excellent point about how we achieve balance and identify vulnerabilities?

Dr. Ghosh: I mean, I can take a stab, and I’m sure Ting’s perspective will come from a completely different angle, so we’ll complement each other. For me, machine learning research has been bottlenecked by the availability of resources for the longest time. Talent is spread everywhere across the world; I have no doubts about that. What I do see as a problem is that GPU access, for instance, is concentrated in American and some Western European and UK institutions. And we all know that data sets are heavily Western-biased; it’s not just something we feel is true, in fact there have been systematic studies. I encourage listeners to look up the Data Provenance Initiative. It’s a project by my friend Shane at MIT, whose team went through thousands and thousands of data sets and plotted a world map, and most machine learning data sets are coming from the Global North. So when we know that this is true, how do we alleviate it? Thankfully, from 2024 onwards there has been a slow but steady move toward decentralization of both data sets and models, and in fact in 2025 we are seeing an innovation boost toward smaller and smaller machine learning models that run locally on your computer, or that run on GPU resources that are relatively cheaper and can run on existing hardware. As a consequence, we are also now seeing models, or fine-tuned models, come out of, say, India or the Middle East. There is a project out of Dubai, although Dubai is not a great example because they are not resource-poor. But again, we are seeing initiatives like the Falcon family of models coming out of the Middle East. We are seeing different Indian institutions producing their own models. China has been producing a lot of cheap models that run locally on your computer. So I would say that resource-access equalization has been a big bottleneck that is starting to be improved. The world now doesn’t constantly depend on what OpenAI will do and let that dictate the fate of the AI research field, and consequently anyone who is impacted downstream is taking matters into their own hands. More and more public institutions are releasing AI-ready data sets. There have been initiatives from different governments and US departments; NASA releases data on Hugging Face, where I work, for instance. But now we are also seeing this tendency among governments internationally. The National Library of Norway releases data on Hugging Face. You have the French Ministry of Culture, which released a very detailed French-language conversation data set that can help models converse better in French. And I’m hoping that with the release of more open-source scripts to do so, more and more small organizations and institutions across the Global South will now be able to contribute in the same way. That’s part one of the equation. Part two is better and more open evaluations. Circa 2023, when [Chat]GPT first came out, people didn’t really know what the model had been tested against before release, and for-profit companies don’t really have an incentive to tell you. They want you to use their model; they will not actively talk about the fact that it has negative impacts or negative evaluation results. With the advent of open-source models catching up to closed-source models, there has also been a rise of an incredibly diverse evaluation and testing ecosystem.
You have leaderboards, you have benchmarks coming out of different companies, including in Indian languages, including, say, misinformation-detecting data sets in Arabic, for instance, which is incredibly useful. And these are coming out of small local organizations that can now do this type of testing more easily on open data sets. I think people are waking up to the fact that evaluations are extremely necessary for trustworthiness in models. I’m leading a 150+ researcher coalition called the EvalEval Coalition, which is critically evaluating evaluations themselves, the field itself, and we are seeing incredible progress there. We are working on standardizing evaluation documentation and writing about eval science, like what constitutes a good evaluation, and generally these types of evaluations: whether a model is biased, whether a model does the same things in all languages, whether the model gives misinformation when asked for legal outputs, and whatnot. Our team at Hugging Face, including external researchers, released a data set called Shades. Some of us already know about the Gender Shades paper, which was quite monumental in the ethics field. This follows that same trend of releasing stereotype data sets, so we’re releasing stereotypes across different languages. So evaluations are the second piece of the puzzle.
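For readers who want to see what this open-evaluation workflow looks like in practice, here is a minimal Python sketch using the Hugging Face datasets library. The dataset ID, split, and column names below are placeholders rather than the actual Shades release (check the Hugging Face Hub for real IDs and schemas), and the model call is a stub.

```python
# pip install datasets
# The dataset ID and column names below are placeholders, not a real release.
from datasets import load_dataset

ds = load_dataset("some-org/multilingual-stereotypes", split="test")

def model_endorses(statement: str) -> bool:
    """Stub for the model under evaluation; replace with a real model call."""
    return False  # a real harness would query the model here

# Count how often the model endorses a stereotyped statement, per language.
rates = {}
for row in ds:
    lang = row["language"]  # placeholder column name
    hits, total = rates.get(lang, (0, 0))
    rates[lang] = (hits + model_endorses(row["statement"]), total + 1)

for lang, (hits, total) in sorted(rates.items()):
    print(f"{lang}: {hits}/{total} stereotyped statements endorsed")
```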

Professor Duane: That’s fantastic, and we will add it to the show notes. We’re already building this incredible reading list. So what I’m hearing is that we need diverse people, diverse perspectives from the beginning and that’s just so exciting to sort of get that overview of the ways that that’s happening. Ting, I’m going to give you the last word as our philosopher, who thinks of all of life. So what would you add?

Professor Lin: Yeah, as a philosopher I work more on the theoretical side, but I really like the points that every one of you made, and I want to tie them to Anna Mae’s original question about these different notions of justice: how do we see them? Are they competing, and how should we prioritize them in relation to AI? As I mentioned earlier, my work is hugely inspired by the notion of structural injustice, and I really want to suggest that this perspective, when we think about sociotechnical structure and understand it in a broader way, opens up an opportunity to see these different notions of justice not as competing with each other but as complementing each other, giving us a more pluralist picture both for understanding justice and for identifying potential aspects of injustice. The core of structural injustice is about how different practices and the workings of the social structure shape the power relations between people. And AI, as a very powerful emergent technology, is not something that just emerged magically, right? It is situated in our material world and, as Avijit pointed out, it is shaped and constrained by its materiality: what kinds of data are there, and how labor is currently conditioned to play a role in shaping AI. As AI becomes more widely used around the globe, we can also think about how its further deployment shapes power dynamics. Based on this perspective, we can remind ourselves, when thinking about different aspects of justice surrounding AI, not to focus only on AI’s impacts on resource distribution, which is the Rawlsian notion of justice and has been, I think, a very dominant focus in the literature on algorithmic fairness or algorithmic justice. If we have this structural perspective, we know that other aspects matter too: for example, how AI shapes our understanding of the world, or in other words how AI shapes the social schemas or social meanings that we ascribe to different features or different groups. That also plays a huge role, and it echoes Avijit’s earlier point about, for example, how queer people or people of color are currently represented in the output, the text or pictures generated by AI systems. It can also be shaped, very simply, by what kinds of accents or writing styles, say, ChatGPT or voice assistants adopt, and by how users interacting with them shape our narratives and our values surrounding those practices. These might seem simple, but much of our understanding of the world is shaped by these daily trivialities. Relatedly, I think it also connects with power relations that concern not just the outcome but also the process behind AI design.
That is closely related to the idea of decolonial philosophy, and I want to point it out and shout out the many scholars in AI research who have pioneered and moved forward the idea of participatory design, which aims to ensure more democratic and equitable power relations, so that stakeholders in different parts of the world have their perspectives well represented in design. That’s not an easy task, but it’s the way to go.

Professor Duane: No, no, that’s an incredibly rich answer, and I’m just so inspired and thrilled by the ways that everybody picked up on one another. We could continue this conversation for another two hours, I’m sure, but we are unfortunately out of time. We will have a reading list that accompanies this episode, which will be available to our listeners. This is a rich and growing space to be thinking about, and a vital one. I would just like to thank everyone for sharing your time and expertise as we move forward with this emerging technology and wherever it’s going to take us. Thank you, everyone.

Episode 2, Education



Episode 2 Transcript

Anna Mae Duane: Hi, welcome to “What Are We Talking About When We Talk About AI?” It’s a project that is a collaboration between the International University of Rabat in Morocco and the University of Connecticut. This project is the result of an idea we had that we need something akin to an anti-glossary when it comes to artificial intelligence. We’re all talking about it, but we’re all talking about it from different perspectives. Whereas with a glossary we create one definition that everyone sort of agrees on, we’re leaning away from that, because we think it’s important to understand that these key terms mean something completely different depending on what your perspective is and what your discipline is, and that we need to understand and incorporate this multiplicity of meaning as we move forward with this complex technology that’s taking us forward. And so today we will be talking about one word, “education,” a realm in which artificial intelligence is being hyped in many, many ways. But everyone here in this room is an educator, and so we’re really aware of the ways we’re perhaps not prepared for how AI might change what we mean when we talk about the word education. I’m very excited to have a really rich panel of guests from all over the world and from different disciplines, and I’m going to just jump right in. I’ll introduce myself briefly. I’m Anna Mae Duane. I’m a literature professor and also the director of the Humanities Institute at the University of Connecticut. And I’ll ask my guests to introduce themselves.

Thank you so much. I’m Najia Hajjaj Hassouni, I’m the dean of the College of Health Sciences at UIR, Morocco. And I’m also the former dean of the medical school here at UIR and former dean as well of the medical school in Rabat.

Professor Duane: Thank you and welcome.

Hello, my name is Ouassim Karrakchou. I actually have two hats here. My first hat is a research hat: I’m the deputy director of TICLab, which is the research lab of UIR specializing in AI applications. And as an educator, I’m a professor of computer science at the School of Computer Science here at UIR. Thank you.

Professor Duane: We like people with a lot of hats. That’s kind of the name of the game. Let’s have a representative from the Nutmeg State. Tina, could you please introduce yourself?

Professor Huey: Absolutely. Thanks, Anna Mae, for inviting me to be part of this today. So at the University of Connecticut, we have a Center for Teaching and Learning, actually the Center for Excellence in Teaching and Learning, where I serve as the Interim Director of Faculty Development and lead AI and pedagogy initiatives. I’ve also taught writing in an academic context for more than 15 years. And I research the use of discussion activities to support student inquiry and critical thinking.

Professor Duane: Wonderful. Thank you. And Meriem, could you please introduce yourself?

Well, hi, everyone. I’m Meriem Regragui, professor of law at the School of Law at the International University of Rabat and Deputy Director of the Center for Global Studies, specializing in contract law and consumer law with a special focus on social justice.

Professor Duane: Thank you so much, and we’re just going to jump right in with the theme of this podcast, which is definitions, or anti-definitions. We’re going to do a lightning round to start us off: how do you define education? What definition of education feels most essential to the work that you’re doing now; what does it mean to educate in your field? And I’ll start again. I’m an English professor as well as a humanities director. One way we sort of feel we’ve done a good job educating is if our students can write in a professional way, and AI has changed the terms of that game. For us, writing is thinking, so we’re a little concerned about the way that’s being offshored. In my field, as in so many others, we’re really having to think about what is essential about learning, about literature, at this moment. And I’m going to turn it over to the dean, please.

Dean Hajjaj Hassouni: Well, I think that today AI is really expanding in every field, including health care. It helps improve the quality of care, and it is also at the heart of the future of medicine, with robotics and assisted surgeries, remote patient monitoring, smart prosthetics, and personalized treatments through data cross-referencing. I think that medical education is very important to consider today in relation to AI, because it must adapt to a professional context that is undergoing constant change. If we come to the definition of education from a medical point of view, education can be considered as the process of acquiring the knowledge, skills, and attitudes necessary to become a competent professional in health care. It comes down to the problem of competency, how to acquire competency, but also how to deal with people. To remain human in a technical age, in these very particular times, is also very, very important.

Professor Duane: I love that. How do we remain human in a technical age? I think that’s going to be key to all of us. Meriem, could you please give us your lightning round?

Professor Regragui: Thank you. For me, in the most simple way, I would define education as the transmission of knowledge, of practical skills and soft skills, to which we can also add a certain set of values and principles. But in legal education, to educate is not merely to transmit rules or doctrines, but to train a specific way of thinking, what we often call “legal reasoning.” This includes learning how to comprehend and interpret texts, analyze arguments, resolve conflicts between norms, and most of the time between people, and articulate fair solutions within complex frameworks. Legal education also involves understanding how legal systems are shaped and to what aims they are oriented. So educating law students means preparing them not only to apply the law, but to understand its functioning and its very function, and to question it through history, comparative law, and critical theory. In short, legal education is as much about knowledge accumulation as it is about intelligence and ethics in its understanding.

Professor Duane: I love that. And I like the understanding, right? And that it’s contextual as well as taking in the knowledge. It’s having this capacity to sort of complicate it. Ouassim, would you please give us your lightning round definition of education?

Professor Karrakchou: Yeah, so I would say that, in general, education in universities aims to prepare graduates for the needs of the job market. When we say preparing them for the needs of the job market, we are talking about a certain number of skills that they need to acquire, skills needed by the jobs that they aspire to have. In the context of computer science, there is one type of job that I think was heavily disrupted by AI: jobs related to programming. Because, as we saw with the rise of ChatGPT, a lot of students can ask ChatGPT to code simple tasks. And so it actually raises the question of how we better prepare our graduates for the evolving needs of the job market in the age of AI.

Professor Duane: Thank you. And I’m so glad you brought up the job market, because let’s just be real, that’s part of the aim of education for all of us. As AI threatens, or promises, to replace some skills, we have to think about what skills humans can offer that utilize AI rather than being replaced by it. Tina, you think about education all the time as a universal question, so I’m really excited to hear yours.

Professor Huey: Okay, so I have those two hats that you alluded to. I’ll start with my definition of education as a teacher of first-year writing in a university, so I teach undergraduates. This class, in an American context, used to be called freshman composition, and it was organized around, or I guess education was defined as, inculcating expressive norms. These norms, I think it’s worth mentioning, were biased toward the expressive habits of certain groups of people. Now the introductory academic writing class, at least at UConn, is organized around multilingual classrooms that value various ways of using English and thinking with and through the language or languages that one has at one’s disposal, with the purpose of recognizing how language and communication shape our assumptions about what is possible in the world and what is impossible or unheard of. In my role as a faculty developer, I’ll say the definition that feels most essential is a practical toolkit for instructors to teach students, but one predicated on the instructor engaging in self-reflection, or self-education.

Professor Duane: Oh, I love that the instructor is also very much a part of not only being an educator but being educated, reciprocally, by the students. I think that’s really fantastic. So let’s take these diverse definitions and think about the concerns that we all share as educators. Correct me if I’m wrong, but I think you all have students who are engaging with AI in ways that are perhaps indiscriminate. I find students are increasingly struggling with sustained attention and with deep reading, and AI promises to fill the gap. Again, I’m a literature professor: it can summarize the book, it can write an essay. In terms of knowledge accumulation, it’s got more data than we can ever have. And it will provide instant and confident answers to complex questions, even when it’s wrong; it is always very confident. I’m struck by some research discussed by Avijit Ghosh, who’s at UConn and who was on another podcast, so you’ll have to check that out. A study on AI’s cognitive impacts indicated a concerning pattern the researchers call “cognitive offloading,” in which students delegate thinking tasks to AI rather than developing their own analytical capabilities. And because part of our AI glossary, part of our taking on of different meanings, is not just disciplinary or semantic but also global, I’m struck by the ways that AI occurs within a broader context of educational inequality and cultural dominance. Education, as we all know, has never been neutral. Paulo Freire famously distinguished between what he calls “banking education,” where you deposit knowledge in the bank of the student’s brain and then they put it back out, and “problem-posing education,” which develops critical consciousness about our topics but also about larger social realities and inequities. So if education can be the practice of freedom, allowing us to think critically, which so many of our guests have already spoken to, my question is this: AI is an educational tool. How, in your experience, does it reinforce the knowledge that’s already there, and what sorts of knowledge does it omit or downgrade? I’d love to get your response from your experience as an educator. Do you see it as feeding into this banking model, in which students just input and output, and so why not let AI be a quicker outputter? That’s a technical term, outputter. Is it flattening the capacity for questioning, dialogue, and reciprocity? Or have you found ways that it’s really illuminating new possibilities in education? All that to say: how do you see AI reshaping how students learn in your field, and what are your thoughts on it? I’m going to go in a different order today and ask Ouassim first for your thoughts on what’s happening with AI in your classrooms.

Professor Karrakchou: Yeah, thank you, Anna Mae, for your question. I think that since the advent of ChatGPT, it has had a really huge impact on the behavior of students in my classes. I noticed that for a lot of them, the first reflex when they are facing a computer science problem is to go ask ChatGPT, right? And I think this is actually relatively dangerous in terms of the objective of education, which is for them to acquire skills, because they end up becoming dependent on ChatGPT and never acquire some of the skills they need to become good, employable computer scientists. So I explored several options. The first, or easiest, option would be to ban using ChatGPT at school, and I think this is not realistic; it may even be counterproductive, because ChatGPT is here and people will use it at some point or another. So the idea I decided to explore is whether there are ways to make students aware of the limitations of ChatGPT. As you say, Anna Mae, the AI may make mistakes and state them very confidently, and in the context of computer science this can actually be a point of leverage, because you can ask students to create programs that have to interface with other programs. Whenever there is a mistake, even though the AI states its answer very confidently, the program simply won’t work with the other program. So if you ask the students, okay, give me something that has to interface with, to communicate with, another program, and they don’t do exactly what is expected in order to interface correctly, then even though ChatGPT will give them something that generally, more or less, works, it won’t be correct in the context of this exercise. They have to think about how to do it, and even if they use ChatGPT, they have to explain exactly what ChatGPT needs to do to make this work. That’s how I think we need to make them aware of the limitations of current AI and develop the critical thinking that allows them to use AI as a tool to be more productive, to work around its limitations, and to do so in an intelligent way. To do this, they need to know what they are doing. They need to develop this understanding.
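As a toy illustration of the exercise Professor Karrakchou describes (a hypothetical assignment, not his actual course material), here is a Python sketch in which a student-written encoder must interoperate with a fixed counterpart program. A plausible-looking but non-conforming chatbot answer fails the round-trip check even if it runs on its own.

```python
# Hypothetical exercise sketch: the student's encoder must satisfy a fixed
# receiver, so a "confidently wrong" chatbot answer fails the interface check.

def encode_message(payload: str) -> bytes:
    """Student-implemented: frame `payload` as b'<byte length>:<payload>'."""
    data = payload.encode("utf-8")
    return str(len(data)).encode("ascii") + b":" + data

def receiver(frame: bytes) -> str:
    """Fixed counterpart: parses '<byte length>:<payload>' frames and
    rejects anything that does not match the contract exactly."""
    head, sep, rest = frame.partition(b":")
    if not sep or int(head) != len(rest):
        raise ValueError("frame violates the interface contract")
    return rest.decode("utf-8")

# The check the students' code must pass: a round trip through the receiver.
for msg in ["hello", "héllo", ""]:
    assert receiver(encode_message(msg)) == msg
print("interface contract satisfied")
```

A chatbot answer that, say, counts characters instead of UTF-8 bytes would produce frames that look right but make the receiver raise an error, which is exactly the kind of confident mistake the exercise surfaces.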

Professor Duane: Yeah, it just strikes me that it’s going to be a self-fulfilling prophecy if students are already giving away their skills to AI as they’re getting educated. Yes, of course, AI can do your job because you haven’t learned how to do it. What’s your response to this question?

Dean Hajjaj Hassouni: The words you have used are very important, such as “banking education,” “problem-posing education,” and “practice of freedom.” If we think about AI in the health sciences field, we have to recognize that it is currently reshaping learning and education into something more personalized, efficient, and accessible. That was the case during the COVID-19 period, and it is also the case for some underdeveloped countries, such as on our continent, Africa, where it is a very important way to be educated and to learn. But it also raises questions about autonomy, about bias, and about educational values, and it is important to take care with ethics in the health sciences and in health care, obviously. The future also lies in thoughtful integration. Artificial intelligence will not replace human educators, hopefully, hopefully, but it helps to amplify human education. And if we look at what is happening today in the medical or health sciences field, artificial intelligence has improved a lot of things: for students, access to information; for teachers, teaching techniques, which are more efficient today, and assessment techniques as well. The automatic generation of questions and computer-adaptive testing are developing very rapidly today and are very promising. We also have new tools that are gradually being introduced into medical education, for example to help develop clinical skills. We have medical image analysis, which has improved to an extent that was really unexpected, as well as, for example, organizing clinical visits with artificial intelligence and simulated patients on computers, which helps in faculties where there are not enough sites for training, so that students can have this training and practice in a virtual way. That doesn’t replace patients, but it can help develop competencies, and it has led to medical simulation. We know today that we do not have to practice for the first time on real patients; we can practice through simulation first, and that has been a very important improvement in the medical field. We also have artificial intelligence in evaluation, in different forms: evaluation of the teaching of teachers, evaluation of students, and evaluation, for example for residency, where it is sometimes very important to have an answer in a few minutes, and that is possible today. Some faculties have also begun to admit new students solely on the basis of their educational file, which improves the rate of processing applications and of dealing with students. So the human dimension and ethics in health care remain very important.

Professor Duane: Thank you, that’s fascinating. I do like the idea of making a mistake on an artificial patient rather than a real one. Tina, would you weigh in on your thoughts here about banking versus freedom?

Professor Huey: Yes, I can. I will also echo the point about making mistakes on virtual patients or simulated patients. Yeah, I really like this concept of “cognitive offloading,” right, to describe what students do and what we all do to get through our days when trying to do intellectual work. And to some extent, students have always done that, right? In rhetoric and composition, one way that shows up is in what’s sometimes called “patch writing,” which is when students assemble quotes from the assigned readings in a way that doesn’t fully engage the project of the author of that work. But it manages to kind of lend a semblance of veracity to the student’s essay that doesn’t ultimately hold up to scrutiny. And so I’d argue that students are cognitively offloading onto the quotes, right, and they’ve always done that. And that often comes from intellectual overwhelm. Writing scholars talk about how reading requires stamina. Cognitive offloading now through AI may be a response to lack of reading skills, for example, as well as lack of stamina for reading. So there’s something about the structure of higher education and its institutions that constrains students’ engagement with the text. And I think it leads to that dismal statistic that you shared offline. I think it was, Anna Mae, from Ezra Klein, right, about low reading levels. And AI will exacerbate what’s already happening, I think, because of its conversational tone and the fact that students ascribe credibility to it, right? They say, I could never do it as well as ChatGPT, so why even try? So AI has the potential to benefit learning and increase stamina, as well as exacerbate these other practices and behaviors that have always existed. AI can instantly generate responses that represent ideas in diverse ways, and these can speak to a student’s way of thinking or to a student’s existing knowledge. So it could lead to a golden age of reading, but it won’t do that if instructors don’t, I think, center it in the writing classroom. We need to center it as an object for the class to analyze and discuss the broader context of AI, the bias in its algorithms, the text it was trained on, disparate access to computational resources and subscription levels and tiers and all of that. All of which will reinforce a kind of learning, a certain kind of learning.

Professor Duane: Thank you. That’s so fascinating. As a lit professor, I certainly hope that we get a golden age of reading. But I will say, in some ways it’s Cliff’s Notes on steroids at this point. Still, I’m inspired by your thought that if we treat it as a text, and if we’re literate about how it works, to use a term from another podcast, to demystify what’s happening, it can really be an accelerant.

Professor Huey: 100%.

Professor Duane: Meriem, you are going to get the last word. And I’m thinking, again, about legal education, and about reading as endurance, as stamina. I know from friends of mine in the legal profession that there’s so much to take in; there’s always so much reading. So I’m wondering, in legal education, how do you see AI either contributing to the banking model or opening up other possibilities?

Professor Regragui: Absolutely. It’s a great problem for us too, because I think that concern about the notion of banking education is absolutely valid. It’s especially acute in legal education, because law is a discipline where the temptation to offload cognition is very strong. AI is now capable of drafting contracts, summarizing cases, simulating legal arguments, and even predicting the final outcome of a trial. But what’s lost in that process is essential: the deep internalization of legal structure, and students’ own ability to construct personal thought and meaning through spontaneous navigation, such as reading dense texts, spotting contradictions, and defending interpretations from a specific angle. When it comes to the use of AI in legal education, I think legal education by itself already runs the risk of being reduced to memorization or exam performance, especially in standardized systems. If we indiscriminately integrate AI without redesigning pedagogy, we risk reinforcing the banking model Paulo Freire worried about, where knowledge is downloaded but not built. Yet there is also opportunity here. If we use AI deliberately, we could support a problem-posing pedagogy, allowing students to test hypotheses, explore different legal traditions, or simulate courtroom dynamics. Those exercises could maintain their capacity for a problem-posing education. We are already using tools, which have nothing to do with AI, that let students compare how the same legal question is addressed in Moroccan constitutional law, Islamic jurisprudence, or the European civil law tradition. So technically, we have the means to develop that kind of problem-posing education. Maybe the best way of using AI in legal pedagogy is to make it more efficient, both for knowledge transmission and for student-useful education. And the best way, in my opinion, is to try multiple educational techniques, such as metacognition, where students explain for themselves how they use AI for learning, or grounded-theory approaches to education, which consist of observing how subjects react to learning with AI without any previous prejudice. After that observation, educators can see what works and what doesn’t, and they are able to adapt the tool to the subject of knowledge. The key here is that educators can develop and shape the best ways to embrace AI in legal education in the most beneficial way.

Professor Duane: Thank you so much. I think that’s a great, hopeful note to end on: that we as educators got into this business because we like learning, and AI is going to put us to that test, because we all have to re-educate ourselves to figure out how best to keep the critical, freedom-oriented aspect of education, the one we are working toward. Thank you, everyone. Thank you to all my guests. We will be hitting you all up for reading recommendations for our listeners. And this is the first of many conversations on this topic. Thank you so much.

Symposium

“What Are We Talking About When We Talk About AI?: An International Symposium” will take place on October 9, 2025, in the Humanities Institute Conference Room.

Register to attend

AI and the Human

Reading Between the Lines is part of UCHI’s initiative on AI and the Human, an interdisciplinary project fostering collaboration, research, and conversation on artificial intelligence.

Learn more