AIhub.org
podcast
 

New voices in AI: ethical AI with Oumaima Hajri

14 June 2022




Welcome to episode 6 of New voices in AI

In this episode, Oumaima Hajri shares her work and journey in ethical AI, and her own web series AI Better World.

You can find out more about Oumaima’s latest work on Twitter, and more about AI Better World here.

All episodes of New voices in AI are available here.

The music used is ‘Wholesome’ by Kevin MacLeod, Licensed under Creative Commons



transcript

Daly: Hello and welcome to New voices in AI, the series from AIhub where we celebrate the voices of Masters and PhD students, early career researchers, and those with a new perspective on AI. I am Joe Daly, engagement manager for AIhub, and this week I am talking to Oumaima Hajri about all of the things that she is working on. Without any further ado, let’s begin.

First of all, thank you so much for joining us today and taking some time to talk to us for new voices in AI, and if you could just introduce yourself, who are you? Where are you? Where are you kind of working at the moment?

Hajri: Yeah, of course, thank you for having me. My name is Oumaima Hajri, I’m from Amsterdam, the Netherlands, with Moroccan roots – very important detail. At the moment I work as a researcher at Rotterdam University of Applied Sciences and I focus mainly on research within the sphere of responsible AI. At the same time I’m also doing a Master of Studies at the University of Cambridge in AI, Ethics and Society, and it’s basically the first cohort of this degree. It’s a multidisciplinary degree that focuses on responsible and ethical AI from different disciplines and perspectives. So yeah, that’s me.

Daly: So it’s like a brand, brand new cohort doing this work.

Hajri: Yeah it is, yeah, very exciting.

Daly: Brilliant, we kind of mentioned a little bit about your kind of general area, what kind of things are you working on at the moment?

Hajri: Yeah, so at the moment I’m working on a project with media organizations here in the Netherlands. They are willing to apply certain AI applications within the work that they do, and basically what we’re doing is trying to help them implement these applications in an ethical and responsible way. So we’re trying to help them through the whole process, from the idea generation until the application, specifically from an ethical perspective. So even the questions that you ask yourselves, to also generate some kind of awareness, because there is always this engineering perspective of, you know, if we can make it, and we also have the right to make it when we’re looking at jurisdiction, etc., then engineers are very happy and they start building these things. But I think another question which is important to ask is: do we need this, and is this the best solution for the problem that we have? Or is there maybe another solution to this problem? Or does the problem even exist, or are we trying to create the problem so that we can implement AI? So it’s really interesting.

Daly: Absolutely.

Hajri: So yeah, it’s something where it’s really hard to convey the message. But yeah, another project that I work on focuses more on the intersection between digitalization and the art sector. Basically I’m trying to connect both worlds, so linking projects that work more from an art perspective with ideas when it comes to AI opportunities, so that they can also somehow have an impact in a positive way.
I’m also part of focus groups that are trying to understand how to give the intersection of gender and tech, but also responsible tech, a more prominent role in education curricula.

Daly: Oh wow, that’s a lot of things going on, it’s amazing. Yeah, there’s definitely that question of: just because we can, should we? And there’s the mention about arts as well. I think so often we see art and science as these very separate things, so to actually have something that brings them together, that’s kind of really brilliant. I mean, it sounds like you have loads of different interests and areas in AI. How did you get started? How did you get into AI?

Hajri: Uh, yeah, so that has been quite a long journey. It started in 2016 with my undergrad in international relations. Somehow I really – this sounds really cliché, but I really wanted to change the world. And I think that as a political scientist you don’t only want to change the world for your own sake, but you also want others to understand society better so that it becomes a collective burden. And so, you know, you’re exposed to different viewpoints, experiences, cultures, which somehow gives you a broader field of vision on the world. And as the world is becoming more globalized and interconnected, I felt like technology’s role is becoming more and more evident, and it isn’t only helping this interconnectedness, but it’s also widening certain gaps that already exist within society. In the last year of my undergrad, I went on an exchange to Beijing, China, and that is basically where my passion started to develop for the intersection between human rights and technology, obviously because there you’re subjected to oppression from this technological perspective, and yeah, I really wanted to understand the technical perspective as well. Like, OK, how does this algorithmic architecture work? What does this field mean? And that’s why I did my postgraduate degree in data science, focusing more on subjects such as machine learning and data mining, but also on things such as policymaking. So how do you try to convey difficult, technical messages to policymakers? And, how can I frame that, it was quite a technical degree obviously, but I also felt like they really delivered it in such a way of: OK, data is the new gold, you guys are the data scientists, so you are going to rule the world. And it somehow didn’t focus on ethical perspectives, like, OK, obviously we have to build models, we have to work with data, but what about bias?
What about why we are even using these kinds of models? Aren’t there other solutions? You know, the thing that I just mentioned at the beginning.
So I dedicated my thesis to understanding more about the role of AI and its intersection with accountability, obviously now with the GDPR, but also with the new AI Act coming up. And that is when I realized that I really have to learn more about this field from an ethical perspective, and that’s why I started with this degree in AI, Ethics and Society at Cambridge. So it has been quite a long journey, but yeah, I’m really happy in the end.

Daly: Yeah, it’s interesting to hear you say that with the data science degree there wasn’t, by the sounds of it, much in the way of focus on that ethical side. Then, with the creation of courses like this, it definitely seems like it’s becoming a really active area, and a really important area as well.

Hajri: Yeah, yeah, obviously there was this ethics washing, so we had like two courses of two weeks talking about these ethical dilemmas, and obviously we talk about the famous trolley problem and that’s it. Like, OK, we check the box, ethics is done, and now we go further to the datasets and the models, etc., which obviously makes sense for a data science degree. But at the same time I feel like as a data scientist you have such a great responsibility within contemporary society, so the least that one should expect is that you also have a profound knowledge of the ethical side and the potential negative impact it could have.

Daly: Absolutely, yeah. And as we have said, with hopefully more people taking these kinds of ethical approaches, what do you think might be some of the biggest changes that we see in AI, I guess in general, in the next 5 to 10 years?

Hajri: So I think maybe the more far-reaching and continuous mechanization of the worldview that we have. I feel like that worldview leads to the erosion of the distinction between what it means to be human and what it means to be a machine. So we’re already talking about machine consciousness, machine morality, robot rights, which in my opinion doesn’t make any sense: whilst there are still people who don’t even have basic human rights, it doesn’t make sense to talk about robot rights. So that on the one hand, but on the other I feel like with this mechanization of the worldview there’s this steady decline in what it actually means to be human. I feel like we’re trying to approach everything as a machine, everything as an automaton, and I really think that now, specifically with things such as the metaverse, but also other developments within this field, it’s only going to get worse, I think. Yeah, I just can’t imagine a world where I’m standing in front of a judge with a robot standing next to me. It may sound like something for the far future, but I really think that it’s going to be near.

Daly: Yeah, I think it is interesting that there are these visions of the future with very sophisticated robots working alongside us, and I wonder if there’s maybe a sense of almost trying to rush towards that future before the technology is necessarily ready for it, or even people as well.

Hajri: And I mean, everyone wants to hop on that AI train somehow, but I feel like not everyone understands what AI is or what the potential impact can be, like the negative impact. But also, why do we want to hop on that train? Obviously it has a lot of opportunities, it makes certain things more efficient, it helps us as a society, but there are also negative impacts that I feel are being overshadowed by the potential of AI. Which, obviously, is again further widening the gap of existing inequalities in our societies, and obviously we see that AI targets the most marginalized people within our communities. So we really should be more critical, taking a step back and reflecting on these questions from an ethical perspective, but also from a human dignity perspective.

Daly: Absolutely, yeah, it’s definitely about protecting those communities as much as possible, due to the historic and ongoing inequalities that continue to exist.

Hajri: Exactly, and in the end it’s an intersectional, you know, struggle that we are facing at the moment. So it’s also linked to climate change, it’s also linked to problems within other communities – queer communities, religious communities. That’s why a lot of people think that this is all interconnected with each other, but in the end it really is an intersectional struggle that we’re fighting for.

Daly: Absolutely so, a little little bit of a change of direction, we’ve kind of talked a little bit about kind of rushing towards these kind of futures. And I suspect maybe a bit of that is kind of linked to these kind of myths that kind of surround AI and all the sort of prosperity, and whatever that is going to bring. So what do you think are some of the biggest AI myths that you’ve kind of come across?

Hajri: Uhm, so I would say there are maybe two. The first one is that AI is more intelligent than humans. That doesn’t make any sense, because in the end we are the ones coding these AIs and we are the ones building these AIs, and an AI will never reach the intelligence that is linked to a human being. And in the end, what does intelligence even mean? It’s a subjective concept. And I think another myth is that AI is not physical. When we talk about AI, we tend to think about this thing in the sky, you know, the cloud. Most people don’t realize that it’s actually machine power, it’s infrastructure, it’s data centers. So AI has a lot of infrastructure, but I feel like it should be demystified, in the sense that people think it’s everywhere but at the same time nowhere, which isn’t the case.

Daly: Yeah, yeah. It’s like seeing those massive kind of data centers, it’s a huge amount of physical resources that we just don’t really see.

Hajri: Exactly, yeah.

Daly: And finally, well, semi-finally, you’re still studying at the moment and covering all these kinds of things, so the question from the previous interviewee, Nicolò Brandizzi, was: would you rather go into academia or industry?

Hajri: So at the moment I somehow work semi in the academic world, as a researcher at the university; I’m not doing a PhD yet. And I have to say that I’ve been giving this a lot of thought. At the moment I think that there is more research needed within this field, especially when we’re talking about ethical AI, responsible AI, but also other components that are connected with these terms, because when we talk about “ethics” or “responsible” there are certain connotations, and it’s such a broad field that, obviously, a lot of research has been done, but more can be done. But at the same time I feel like politics, the political arena, is the place where you can have impact, because decisions are being delegated to these people. They are the ones who are navigating our course and our direction, and I think eventually I would really want to work within that sphere, after having gained some knowledge and maybe after doing a PhD. Not only to be that political representative for the sake of being a political representative, but also having this baggage of research and understanding the things that I’m talking about, to somehow try and steer society towards a better world. But it is something that I’m still giving a lot of thought, especially now with the masters that I’m still doing and this new research role that I’m working in. At the moment I’m really enjoying all the literature and all the critical reflections, but I think that eventually I would want to move towards that arena.

Daly: Yeah, I enjoyed that. So often we see this dichotomy of academia or industry, and actually no, there are plenty of other paths.

Hajri: Yeah, I mean, we need the experts there. That is the problem, that experts aren’t given a stage within that arena. I mean, what is research done for then, you know?

Daly: Yeah, that’s actually a really good point: for what purpose are we doing research, unless we can implement it in some way? And I guess, what would you like to ask the next interviewee in the series?

Hajri: I’m really curious what their main motivation is within this field, especially when trying to convey the message of why responsible and ethical AI is important. I feel like that is something that I still struggle with, because a lot of people want to do AI. As I said, a lot of people want to jump on that train, but when it comes to ethics and the responsible part, it just sticks to that checklist and that’s it. So I’m really curious how the next person engages with these types of questions and tries to convey the message in such a way that it convinces the other party.

Daly: That’s a great question, and I’ll be really intrigued to see how they respond.

Hajri: Very value laden.

Daly: And very finally, where can people find out a little bit more about some of your work and the things you’re working on?

Hajri: Yeah, yeah, so I am on Twitter, and also recently my friend and I started a platform called AI Better World, where we try to provide people with more knowledge about AI, especially to try and demystify AI to a certain extent. And we do that with video podcasts, and we also provide literature recommendations or commentaries, books, etc. So yeah, feel free to hit the like button. It’s called AI Better World.

Daly: Brilliant, and like I said, we always have links to everything down below, so if you want to check any of that out we will have it ready and waiting. Yeah, thank you so, so much for joining us today.

Hajri: Thank you.

Daly: And finally, thank you for joining us today. If you would like to find out more about the series or anything mentioned here, do join us on aihub.org and until next time goodbye for now.






Joe Daly Engagement Manager for AIhub












©2024 - Association for the Understanding of Artificial Intelligence


 











