New voices in AI: Maria De-Arteaga

by Joe Daly
20 April 2022




Welcome to episode 4 of New voices in AI.

In this episode, Maria De-Arteaga shares her work and journey into algorithmic fairness and human-algorithm collaboration.

You can find out more on Maria’s website and follow her on Twitter, @mariadearteaga.

See also the paper mentioned in the interview here.

All episodes of New voices in AI are available here.

Transcript

Daly: Hello and welcome to New Voices in AI, the series from AIhub where we celebrate the voices of Masters and PhD students, early career researchers and those with a new perspective on AI. I am Joe Daly, engagement manager for AIhub, and this week I am talking to Maria De-Arteaga about some of her research. And without any further ado, let’s begin.
First up, a big welcome to our 3rd guest, nope, sorry, 4th guest, to New Voices in AI. Could you possibly introduce yourself and tell us where you are?

De-Arteaga: Thank you so much for having me. I am Maria De-Arteaga. I am Colombian and I moved to the US to do my PhD at CMU, where I got a joint PhD in machine learning and public policy, and now I am an Assistant Professor at UT Austin in the Information, Risk and Operations Management Department, where I am also a core faculty member of the Machine Learning Laboratory.

Daly: Nice, and what kind of things are you working on currently?

De-Arteaga: My research focuses on human-centered machine learning, and within that there are two main streams I would say that I work on. The first one is algorithmic fairness, the second one is human-algorithm collaboration. I can tell you as much or as little as you want about all these things.

Daly: I guess yeah, if you want to just expand like a little bit more on what that means, that would be great.

De-Arteaga: Yeah, absolutely. So on the algorithmic fairness side, a lot of my work has focused on characterizing the risk of bias. So when we say we worry about being unfair, what does that mean? What are the types of harms that may arise? And in that context, I’ve worked on characterizing risks of bias, so some of my earlier work was looking at the risks of compounding imbalances and compounding injustices. In headlines we often see that machine learning may replicate the biases in the world; in that work what we show is that it’s not just about replicating them, but also about compounding them. Some of my most recent work along the lines of characterizing harms is work that I’m very excited about with collaborators from Microsoft Research, looking at what we term ‘social norm bias’. There we are looking at some of the residual unfairness that remains when we apply algorithms to mitigate bias. In that case, what we find is that while you may be mitigating the bias that you measure at a group level, you may still be penalizing folks for adhering to stereotypes of a group that is disadvantaged. And so the algorithm may look, at the group level, like it is doing fine, but who the individuals are that are benefited or harmed is still associated with stereotypes and previous injustices.
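
To make that group-versus-individual distinction concrete, here is a minimal synthetic sketch in Python. It is only an illustration under invented assumptions (the features, numbers and the "mitigation" rule are all made up, and it is not the method from the paper): a blanket group-level correction can close a selection-rate gap while, inside the disadvantaged group, who gets selected still tracks how much an individual adheres to the group stereotype.

```python
# Minimal synthetic sketch: group-level metric looks fine, individual
# outcomes inside the disadvantaged group still track stereotype adherence.
# All feature names and numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                  # 0 = advantaged, 1 = disadvantaged
stereotype_adherence = rng.uniform(0, 1, n)    # hypothetical "how stereotypical" feature
skill = rng.normal(0, 1, n)

# Hypothetical score that penalizes stereotype adherence in group 1, plus a
# blanket group "correction" so the two groups get roughly equal average
# scores (and hence similar selection rates).
raw_score = skill - 1.5 * stereotype_adherence * (group == 1)
score = raw_score + (group == 1) * 1.5 * stereotype_adherence.mean()

selected = score > np.quantile(score, 0.7)

# Group-level metric (selection-rate gap) looks nearly closed...
for g in (0, 1):
    print(f"group {g}: selection rate = {selected[group == g].mean():.3f}")

# ...but within the disadvantaged group, selection still depends strongly on
# how much an individual adheres to the group stereotype.
in_g1 = group == 1
print("mean adherence of selected vs rejected in group 1:",
      round(stereotype_adherence[in_g1 & selected].mean(), 3),
      round(stereotype_adherence[in_g1 & ~selected].mean(), 3))
```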

Daly: That makes a lot of sense. It can definitely be such a complex issue to unpack, separating the global from the specific.

De-Arteaga: Exactly, exactly, and so that’s what a lot of my work on algorithmic fairness does, and then from there I also work on ‘OK, now that we understand the harms, where should we go from there?’ And so in some cases that may be designing algorithms to mitigate those risks. In some other cases, it’s recommendations such as ‘this is not a type of algorithm that we should use to inform these types of decisions’. So sometimes the solution is more technical and in other cases the solution is more at a policy recommendation level.

Daly: And in terms of, I mean, this is a big challenge, but I guess in more general terms, what are some of the challenges you think there are for AI?

De-Arteaga: So I think that, at a high level, the biggest challenge that we have in AI is the lack of sobriety and specificity that sometimes exists about what the technologies are actually doing. I think that’s a big risk in terms of how the technologies can be misused and how they may be misunderstood, and so I think that we have a big responsibility to communicate very clearly what it is that the technologies we develop can do, and I don’t think that we’re living up to that responsibility. I think it is too often the case that the kind of AI myths and misunderstandings that we see when we look at the general media are not a situation in which things got lost in translation. It is a situation in which you can trace them back to claims, mischaracterizations and exaggerations that are being made by the AI community, either on the research side or on the industry development side.

Daly: Yeah, for sure. I think certainly one of AIhub’s main goals is trying to communicate a little bit more effectively, and maybe not more truthfully, because I don’t think people are necessarily intending to be untruthful in their science communication, but it is, for sure, a huge challenge. And actually this links a little: I guess part of the exaggeration can come from excitement about AI. So what are perhaps some of the things that do excite you about AI, without sneaking into hyperbole?

De-Arteaga: Yeah, absolutely. So first of all, I totally agree with you that a lot of that exaggeration comes from mixing our hopes with what we’ve achieved. We start out with these ambitious goals, and then there’s this extra step that needs to happen, of clearly communicating what it is that we achieved, and I don’t think that in AI we have developed very clear guidelines around how to do this. In healthcare, you can imagine you start with very ambitious goals, but you don’t end up getting a drug out and saying ‘well, this cures cancer’ because that was your initial hope; actually it does something that is much more specific and does not cure cancer yet. And so I think that that’s where, in AI, we need to develop that practice of bridging the gap between what we hope these systems will do and what they actually do, and communicating it very clearly.

Yeah, but anyway, that is not what you asked me; you asked me about what it is that I am excited about. And I think that what I’m most excited about takes me a little bit into how I got into this field. I did my undergrad in math, and I worked as an investigative journalist at the same time, and so that led to a lot of time spent just trying to look at anomalies in an Excel sheet and trying to understand, is it the case that the money that they claimed was going to be spent on a highway was actually spent there? That mix of settings was what led me to start getting into data science and machine learning, because, doing an undergrad in math, I was like “well, there has to be a better way of discovering these patterns than me staring at these Excel sheets”. And I think that I’m still most excited about the possibility and the ability to find patterns in the data and leverage them. I think initially I came at it from the investigative journalism perspective, and right now I’m more focused on the decision support context. So how can we assist experts in decision making? Yeah, that’s what I’m most excited about. It’s not super sci-fi or anything, but I think that it is where there are both a lot of opportunities and a lot of hard questions in terms of how we actually successfully integrate machine learning for decision support in a way that leads to better decision making.

Daly: And this actually leads on really nicely. So you talked a little bit about getting help with decision making, and I guess what are some of the implications that you’re hoping will come out of your research, and what makes that kind of interesting?

De-Arteaga: Yeah, so I talked a little bit earlier about my work on algorithmic fairness. The other side of my research is looking at how we can effectively use machine learning for decision support, and that has different components. One portion of it is understanding how humans integrate recommendations into their decision making. So, for example, in one of our recent papers in this space, which was published at CHI now two years ago (we’re already in 2022, I’ve lost track of time), we looked at a deployment in the child welfare setting where they deployed a tool for decision support, and we looked at how call workers were making use of recommendations. In particular, there was a glitch during deployment that went unnoticed, which allowed us to study whether they were indiscriminately adhering to recommendations, or whether they were most often overriding mis-estimated recommendations. In the follow-up to that work, which we’re completing now, we’re also looking at fairness questions. That is, when you look at the algorithm by itself and you compare it to the human-algorithm collaboration, how do disparities compare? And what we find in this case is that, even though the humans by themselves exhibited a higher screening rate for Black children, and so you would expect, well, if the algorithm exhibits that and the human exhibits that, combining them should be the worst of all worlds, we find that that’s not the case: actually, the human in the loop mitigates disparities with respect to the algorithm by itself. And so I work on these questions of understanding how humans are making use of recommendations and then, from there, how we should be designing machine learning algorithms that better assist humans in their decisions. On that side I have done more technical work on learning to defer and learning who you should defer to, and I think I’m very excited about the possibility of finding ways in which machine learning can improve the quality and fairness of expert decision making.
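
As a rough illustration of the "learning to defer" idea mentioned above, here is a minimal Python sketch of the simplest possible deferral rule, a confidence threshold. It is a sketch under invented assumptions (the simulated model, the simulated human and all numbers are hypothetical, and it is not the method from the papers discussed): the point is only that routing low-confidence cases to a human who uses other information can beat either the model or the human alone.

```python
# Minimal sketch of a confidence-threshold deferral rule on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
y = rng.integers(0, 2, n)                      # true outcome

# Simulated model: well calibrated on most cases, unreliable on a "hard" slice.
hard_slice = rng.uniform(0, 1, n) < 0.2
model_prob = np.where(y == 1, 0.8, 0.2) + rng.normal(0, 0.05, n)
model_prob[hard_slice] = rng.uniform(0.3, 0.7, hard_slice.sum())  # noisy, low confidence
model_prob = model_prob.clip(0.01, 0.99)
model_pred = (model_prob > 0.5).astype(int)

# Simulated human: has extra context, so more accurate on the hard slice.
human_correct = rng.uniform(0, 1, n) < np.where(hard_slice, 0.85, 0.7)
human_pred = np.where(human_correct, y, 1 - y)

# Defer to the human whenever the model's confidence is low.
confidence = np.abs(model_prob - 0.5)
defer = confidence < 0.2
combined_pred = np.where(defer, human_pred, model_pred)

for name, pred in [("model alone", model_pred),
                   ("human alone", human_pred),
                   ("model + defer to human", combined_pred)]:
    print(f"{name:>24}: accuracy = {(pred == y).mean():.3f}")
```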

Daly: Those are some really, really fascinating findings, the humans, maybe consciously, mitigating these things. And I imagine there’s a huge amount of psychology that goes into this as well.

De-Arteaga: Yeah, so there we were able to look at, well, why is that happening, right? Is it that they’re treating scores differently conditioned on race, for example? And we find that that’s actually not the case. What happens is that the experts are relying on other information, beyond the risk score alone, and so when they integrate these external sources of information, that is what leads to them mitigating the disparities.

Daly: That makes a lot of sense. And so I guess, what’s next? What would you hope to work on next?

De-Arteaga: Yeah, one of the lines of work that I’m very, very excited about is looking at the intersection of diversity and machine learning. So we had a tutorial at FAccT last year, and we have a paper coming out; the preprint is already out and the journal publication will be out soon. It looks at, when we think about diversity in the machine learning pipeline, how should we be thinking about it, right? What do we mean by diversity? What are the pathways through which it can have an impact? In that work, which is joint with Sina Fazelpour, we brought in frameworks from philosophy of science, and also built on the literature on organizational science and sociology, and looked at, OK, how should we be thinking about diversity in machine learning in a way that builds on the huge body of work and knowledge that has studied diversity before? And so this is a line of research that I’m very excited about; this is the first work that has come out of it, but there’s a lot more that I’m working on, also at the intersection with those two elements that I mentioned before, algorithmic fairness and human-AI collaboration.

Daly: It’s all very interesting and very exciting, and it’s very important, for sure. And a sort of penultimate question, from our previous interviewee, who asked: what aspects of your daily life inspire your AI research?

De-Arteaga: Oh, that’s a good question, this is a very good question. I think one of the elements that is very core to my research is the gap between what we think we’re asking the algorithm and what the algorithm is actually answering. Decision support tools are often framed as “well, this algorithm is giving you the probability that a candidate will succeed at this job”, when that is absolutely not the question that it is answering. It is answering the question “based on the previous data, which of these candidates would have been most likely to be hired?” And when you frame the question like that, you’re like, well, I can see all the other things these predictions are going to be telling me, right? And I think that gap between the questions that we often start from and the questions that are actually being answered is one that is very informed by daily experiences: observations from just inhabiting our world, and then understanding the gaps between the questions we wish we were answering and the questions we’re actually answering when we’re identifying patterns in data.
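
To illustrate that gap numerically, here is a small synthetic Python sketch (entirely invented data and weights, not drawn from any real hiring system): a score that recovers the historical "was hired" label almost perfectly can still say relatively little about who would actually succeed, because the label encodes past hiring practice rather than on-the-job success.

```python
# Synthetic sketch of the proxy-label gap: "was hired" is not "would succeed".
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
skill = rng.normal(0, 1, n)      # drives actual success on the job (hypothetical)
network = rng.normal(0, 1, n)    # drives who got hired in the past, not success

would_succeed = (skill + rng.normal(0, 0.5, n)) > 0
was_hired = (0.3 * skill + 1.0 * network + rng.normal(0, 0.5, n)) > 0

# A score that recovers the historical hiring label very well...
hiring_score = 0.3 * skill + 1.0 * network
print("accuracy at predicting 'was hired':",
      round(((hiring_score > 0) == was_hired).mean(), 3))

# ...still ranks candidates largely by 'network', so its top picks succeed
# only somewhat more often than average.
top = hiring_score > np.quantile(hiring_score, 0.8)
print("success rate among top-scored candidates:", round(would_succeed[top].mean(), 3))
print("success rate overall:", round(would_succeed.mean(), 3))
```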

Daly: That makes a lot of sense. Yeah, quite often, and I certainly know this from doing PhD research, you think you’re doing one thing, and then when you actually look at your measures, you’re looking at something completely different. And I think, yeah, for an area of research like this it’s really, really important to know what you’re actually aiming for.

De-Arteaga: Yeah, exactly, and so, to give one concrete example, one of our recent works was on predictive policing, looking at bias in predictions when you’re using victim data to train your models. So basically, there has been earlier work that looked at how, if you use policing data, then you’re going to be replicating the bias from police practices, and a lot of the developers of these systems said, “well, yeah, absolutely, but do not worry, that’s not what we’re using; we’re using victim reports, so it’s not police-generated data, it’s victim-generated data, so you don’t have to worry about that”. And there, my own experience of trying to report being robbed to the police, and really failing miserably at those attempts, informed my thinking around, well, what are the gaps that may exist when relying on victim data? That work was looking at the Bogotá case, where they’re also deploying predictive policing systems. Based on that we started digging up the data that we could use to study this in a systematic manner, and from all of this the research came out. But I think that’s an instance in which the research intuition and the research question were very grounded in lived experience, where, given the claims that were being made and my actual experience trying to report crimes, I was like, well, this is absolutely not how I would expect this to pan out.
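
As a tiny numerical illustration of that concern, here is a hedged Python sketch with made-up numbers (two hypothetical districts, not real data from Bogotá or anywhere else): if two areas have the same true amount of crime but very different reporting rates, a hotspot estimate built from victim reports will point mostly at the area where people report more.

```python
# Synthetic sketch: identical true crime rates, very different reporting rates.
import numpy as np

rng = np.random.default_rng(3)
true_crime_rate = {"district A": 50, "district B": 50}   # identical weekly true rates
reporting_rate = {"district A": 0.8, "district B": 0.3}  # very different reporting

# One year of weekly incidents, each reported with the district's probability.
reports = {d: rng.binomial(true_crime_rate[d] * 52, reporting_rate[d])
           for d in true_crime_rate}

total = sum(reports.values())
for d, r in reports.items():
    print(f"{d}: share of reported incidents = {r / total:.2f} "
          f"(true share of crime = 0.50)")
```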

Daly: Yeah, I can definitely imagine how those could be quite different things. And, well, I said penultimate question previously; the actual penultimate question is: what would your question be for the next interviewee in the series?

De-Arteaga: I think my question would be what would they like to see changing about the culture in the AI community?

Daly: That’s a really interesting question; I’m going to be intrigued to see what they come up with. And very finally, if people want to keep up with your work and find out a little bit more, where can they find you online?

De-Arteaga: Yeah, so there are two main places: I’m on Twitter, @MariaDeArteaga, and my website (https://mariadearteaga.com/). I imagine the link will be there, but it’s also just my name, Maria De-Arteaga dot com. Both of them are pretty straightforward.

Daly: Excellent, yeah, and yes, we will have both the Twitter and the website linked on AIhub. And finally, thank you so much for joining us for this episode. Excellent, excellent answers and really interesting research. And yeah, thank you.

De-Arteaga: Thank you so much for the invitation, it’s really a pleasure to be here.

Daly: And finally thank you for joining us today. If you would like to find out more about the series or anything mentioned here, do join us on AIhub.org and until next time, goodbye for now.





Joe Daly, Engagement Manager for AIhub



