Interview with Frida Hartman: Studying bias in AI-based recruitment tools


by Lucy Smith
02 December 2025




Frida Hartman. Photo credit: Svenska kulturfonden / Frida Lönnroos.

In a new series of interviews, we’re meeting some of the PhD students that were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI-2025). In the second interview of the series, we caught up with Frida Hartman to find out how her PhD is going so far, and plans for the next steps in her investigations. Frida, along with co-authors Mario Mirabile and Michele Dusi, was also the winner of the ECAI-2025 Diversity & Inclusion Competition, for work entitled “The Last 25 Years of Gender Distribution of Authorship in ECAI Proceedings”. This award was presented at the closing ceremony of the conference.

Could you start by giving us a quick introduction to yourself and the topic that you’re working on?

My name is Frida Hartman and I’m researching bias in AI-based recruitment tools. For the first part of my PhD, I have been working on a systematic literature review on how we detect bias in different platforms. The next part of my PhD will investigate how much AI is being used within recruitment contexts. There’s been some research on the topic but not as much in the Nordics, specifically, and I’m curious to know to what extent AI is used in this context and what kind of AI is used – it’s probably LLMs [large language models], but I’m curious to see if there’s something else. I’m also interested in the reasons recruiters are using AI. Aside from efficiency, I think some recruiters also believe that AI might be more fair or objective than human recruiters. I mean, there is bias from human recruiters as well, so there might actually be something there that AI could help with.

How did you go about carrying out the literature review?

Well, to start with, I wanted to just cover how we detect bias in AI, but that turned out to be a huge topic and it’s really difficult to cover everything. And so I focused more on articles that are critiquing the current technical approaches that we have. At the moment we’re using a lot of fairness metrics to detect if there’s some kind of bias going on. But fairness metrics can lack robustness: their results shift from one run to the next, and there’s no way to know which tool to use when. So anyone can use anything and claim that it’s fair just because they’ve run it through a fairness metric system. And so the systematic literature review is now on articles that are proposing a different solution or that are questioning and critiquing the current solutions.
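To give a flavour of what such a metric looks like in practice, here is a minimal sketch of one of the simplest, the demographic parity difference, which compares positive-outcome rates between two groups. The function names and toy data below are our own illustration, not taken from Frida’s review, which covers a much wider range of metrics.

```python
# A minimal, illustrative sketch of one common fairness metric:
# the demographic parity difference, i.e. the gap in positive-outcome
# rates between two groups. All data below is invented.

def selection_rate(decisions):
    """Fraction of candidates receiving a positive decision (1 = shortlisted)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates; 0.0 is perfect parity on this metric."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical shortlisting decisions from an AI screening tool.
group_a = [1, 0, 1, 1, 0, 1, 1, 0]   # selection rate 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

print(demographic_parity_difference(group_a, group_b))  # 0.25
```

A model can score well on this metric while failing others (equalised odds, calibration, and so on), which is part of the problem Frida describes: no single number settles whether a system is fair.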

Was there anything that particularly stood out to you whilst carrying out the literature review?

There were articles discussing a concept called fairness hacking, and I found that aspect really interesting. Fairness hacking is where developers use a particular fairness metric, run their model through it, and claim that it’s fair. And this can also be used maliciously. If you don’t use the fairness metric with the best of intentions, then you can create a very unfair model and either pick whichever metric flatters it, or choose one metric and run it multiple times, because there’s an issue with robustness. So you can run it multiple times and just choose the run that showed the most fairness and go with that.
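As a rough illustration of this cherry-picking pattern (our own sketch, not taken from the papers in Frida’s review), suppose a fairness evaluation fluctuates with the random seed; a developer can then simply re-run it and report only the most flattering score.

```python
# Illustrative sketch of "fairness hacking" via cherry-picking runs.
# The evaluation is simulated with random noise purely for illustration;
# in reality the variation would come from data splits, initialisation, etc.
import random

def noisy_fairness_score(seed):
    """Stand-in for a full train/evaluate cycle whose fairness metric
    (lower = fairer) varies from run to run."""
    rng = random.Random(seed)
    return max(0.0, 0.15 + rng.gauss(0, 0.05))  # true disparity ~0.15

scores = [noisy_fairness_score(seed) for seed in range(10)]
print("all runs:      ", [round(s, 3) for s in scores])
print("cherry-picked: ", round(min(scores), 3))                 # what a bad actor reports
print("honest average:", round(sum(scores) / len(scores), 3))   # closer to the truth
```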

What’s the next step for your PhD – what are you going to start working on next?

I’m going to reach out to recruitment companies in the Nordics. I need to focus the research somewhere, so the Nordics for me, since I live there, is an easy way to do that. I’m going to narrow it down to recruitment companies because those are the ones focusing on recruitment, rather than companies who are also recruiting. And so the plan is to try to map out the situation of how much AI is being used. I heard from a Swedish professor who had done something similar, and he had been asking companies if they were using AI in their recruitment tools. And some of them said they weren’t, but then he asked them to describe the tool that they were using, and it turns out there was a lot of AI in it. So there are some companies that don’t know what they are using, because the technologies have progressed so quickly.

I wonder what percentage of companies are using AI tools for recruitment.

Yeah, I’m super curious to know. And, this is just speculation, but I think a lot of companies are also using LLMs to write their recruitment posts. And probably a lot of people looking for jobs are writing resumes with LLMs. So this becomes a very interesting discussion of whether it is AIs employing AIs in a sense.

Really, the reason I’m interested in the recruitment context is that it’s not as black and white as many other decision-making scenarios where I don’t think AI should be used. But in recruitment, there is the issue of very biased human recruiters. I don’t know if that’s being solved by AI yet, but I think there’s something there that we need to research and understand: how much, and which aspects, could be solved by AI. And if AI can’t help with these problems, then maybe we should discard it for now and come back to it later when models have improved.

Have you got any idea what the biggest challenges might be for the next steps of your PhD?

So, I’m planning to send out surveys to recruitment companies, and I think one major challenge will be getting them to answer. I chose surveys because they’re a lower threshold for getting answers than interviews, which would take more of the companies’ time. There is no real incentive for them to respond, other than knowing that they are contributing to science. I mean, a lot of science is collecting data, and that’s one of the major challenges for me in this work.

Could you talk about the interdisciplinary nature of your work, and was this something that particularly interested you about the topic?

It’s super interesting doing something interdisciplinary. I really enjoy it because I have a very technical background. I’ve done some courses in topics like feminism and social science, and that’s what got me interested in this interdisciplinary perspective. That’s why I wanted to pursue this type of research looking at technologies in society and how they can help or how they are harmful.

I’ve got two PhD supervisors; one is from the more technical side and one from social sciences. And so it’s been really interesting to have these two perspectives when I do my research. I really enjoy understanding this concept from a societal perspective, that it’s not just a technology that you can remove from society.

I think there are some areas in which the technical community is lagging behind the research coming out of the social sciences. We’re still talking about fairness and equality in the technical sciences, but the social sciences have moved beyond that and are talking about equity and justice. Computer scientists are often concerned about their technologies complying with the law, but the law isn’t flawless, so we also need to focus on making models that go beyond legal requirements and make more equitable choices.

How did you find the doctoral consortium experience at ECAI?

I think it was really good. The most interesting thing, with ECAI generally, is that there are so many different people, with so many different backgrounds, doing so many different things in AI. And so hearing about these different things, finding the small sub-community where my research fits in, and finding someone who’s also researching bias in AI, was really interesting. I got a lot of nice discussions from that. We also get to have lunch with a EurAI Fellow, where you get to meet someone in the same field as you who’s a lot further on in their career, and ask them questions. So that’s something I’m looking forward to.

Congratulations on winning the Diversity and Inclusion competition award at ECAI! Could you tell us about the work that won the award and give us a bit of background about the competition?

Thank you! The competition was announced some time before the conference and was open to all participants at the Doctoral Consortium. It was optional to participate; we could freely form teams and then create and present an artefact that dealt with the theme of diversity and inclusion at ECAI or in the broader AI community. We only had a couple of days to work on this, and we chose to look at the gender distribution of authors who have published work at previous ECAI conferences. Thanks to a previous project I’ve been working on, we had access to a gender estimation tool and could run the author names through it. We could see that over the last 25 years of ECAI the proportion of women has increased, which is nice to see. We also used a simple linear regression model and estimated that the gender gap would close in the year 2089. But this is probably not entirely accurate, since there are a lot of other factors that we haven’t taken into consideration in our simple model. It was fun working on this side project with Mario and Michele during the conference, and we’re happy our effort paid off.
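For the curious, the extrapolation works roughly like the sketch below: fit a straight line to the proportion of women authors per year and solve for the year that line reaches 0.5. The data points here are invented for illustration; the 2089 estimate comes from the team’s analysis of the actual ECAI proceedings.

```python
# Illustrative sketch of the linear extrapolation described above.
# The yearly proportions below are invented; they are NOT the ECAI data.
import numpy as np

years = np.array([2000, 2005, 2010, 2015, 2020, 2025])
women_share = np.array([0.10, 0.12, 0.15, 0.17, 0.20, 0.23])

# Fit proportion = slope * year + intercept, then solve for proportion = 0.5.
slope, intercept = np.polyfit(years, women_share, deg=1)
parity_year = (0.5 - intercept) / slope
print(f"estimated year of gender parity: {parity_year:.0f}")
```

As Frida notes, a straight-line fit ignores almost everything that actually drives authorship trends, so such an estimate is best read as a conversation starter rather than a forecast.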

Final question, could you tell us an interesting (non AI-related) fact about you?

I practice HEMA (historical European martial arts), and in my case this consists mostly of longsword fencing. That’s something I got into two or three years ago and I really, really enjoy it. Although it’s different from “regular” fencing in some ways, the main thing is still being able to read your opponent to be able to counter their move. And what I really enjoy with it is actually that you can’t think about anything else. You have to be really present to be able to be successful, you have to put your whole body and mind into this whole thing, so it’s kind of a mindfulness thing.

About Frida Hartman

I am a PhD candidate at the University of Helsinki in Finland. My research focuses on bias in AI-based recruitment tools. I want to bridge the gap between computer science and social science in terms of AI research. I believe AI models are sociotechnical systems, and that we need to find methods beyond the technical sphere to solve issues of AI bias.





Lucy Smith is Senior Managing Editor for AIhub.