AIhub.org
 

Datalike: Interview with Mariza Ferro

27 March 2024




Mariza Ferro is a professor at the Federal Fluminense University and a visiting professor at Bordeaux University. She has been working in the field of AI since 2002. Her research focuses on AI for good, including human-centric AI, ethical and trustworthy AI, green and sustainable AI, and AI for the sustainable development goals, guided by the principle that AI must benefit humankind. She also works on public outreach, making science accessible to all. She is part of the Brazilian consortium of AI researchers, head of the Consortium of Ethics for Public Policies on Artificial Intelligence for Latin America and the Caribbean, and the creator and head of the Ethical and Trustworthy AI group in Rio de Janeiro.

AI has gained a lot of traction in recent years, but before the current AI renaissance, the field must have been quite different. How was the field twenty, or fifteen, years ago compared to now? How has this impacted your work?

Twenty years ago, it was much more difficult to work with AI. Computational resources were hard to access and expensive. For those working with machine learning (ML), getting datasets for training was very difficult, and even when we had the data to train the algorithms, we often lacked the computational capacity to store or process them.

In the first research laboratory where I worked, the space available for storing my datasets was less than that of the 1GB USB stick we carry in our pockets (I know this is outdated; my students laugh when I mention a USB stick with 1GB of capacity). And the processing capacity was far less than that of a simple smartphone today. Training a very simple neural network would take many weeks.

However, many of the theoretical bases for what we see today were already being developed at that time, or even long before; many of the techniques I worked on are still used successfully today. What has really changed is the ease of access to data, the available computational capacity, and the free tools and libraries. Furthermore, AI results are now accessible to the public and no longer restricted to research centers.

Moreover, today people have no doubt that AI can be useful for solving many challenges in our daily lives. Approximately 15 years ago, this was not the reality. This impacted my work, as having a master’s degree or even years of experience in the field did not bring much prominence. At a certain point in my career, I needed to develop new specialties; I started to work on high-performance computing (HPC) and computational modeling, since AI was not interesting to many stakeholders. It was difficult for me, but at the same time it made me a more multi- and interdisciplinary researcher, with the knowledge and skills to understand the impact of AI on computation and energy consumption, and especially to propose new ways to achieve a more sustainable AI.

One thing that has seen little change over the years is how women are seen in the role of AI researcher. Since I started in the field, during my undergraduate scholarship, work on AI was seen as something “suitable” for women in computing, as people said AI was not very technical. Today I still hear that working with AI is not a hard science, that it is not a technical area.

Can you elaborate on where these comments come from? We found them quite surprising.

Perhaps I have never reflected deeply on these remarks, which I still hear quite frequently, and I never felt encouraged to question whoever said them about the real meaning behind them.

However, thinking back to the period when I was still an undergraduate student, these comments came from some colleagues and other students on the Computer Science bachelor’s course. AI was seen by them as something easier, with a lot of theory and little programming, coding and so on. And you know, even today STEM is not seen as something for girls (obviously, I absolutely disagree with that). I believe this demonstrated a lack of knowledge about the AI area, which was covered only superficially during the undergraduate course, and about the complexity of the area and its countless theories and sub-areas. For ML models in particular, the mathematical basis behind them can be highly complex, requiring a lot of knowledge of linear algebra, calculus and statistics. Today, in my ML courses, my students study all these mathematical models before applying them in any domain. I believe the complexity of these ML models is already clear today, especially with the deep learning hype.

However, I keep hearing this today from colleagues, especially since I started working in the ethics field. Again, working with ethics applied to AI is said to be “non-technical”. I suspect that professionals in STEM areas have difficulty understanding qualitative research, and perhaps they believe that only quantitative research is “hard science” and truly requires a technical background. It is difficult to understand.

Do you have any advice on how to handle these not-so-helpful comments?

When I was an undergraduate student in 1995, we did not have the openness we have today about this subject (women and girls in STEM), and I saw it as if it “was normal”. I lived through difficult times. One day, after a very bad grade in electronic circuits, a male professor told me that I should drop out of the course. I cried a lot that day and locked myself in the university bathroom. But I told myself I was capable and moved on (just for the record, I got a 100% grade on the next test). When I started working with AI at the end of my degree, I didn’t really care about the comments that AI was for women, as I was already so passionate about research and the field that I did not even think of it as something bad.

In fact, let me confess that only after many years did I become aware of the differences between being a woman and a man in the STEM area. That professor wouldn’t have made that comment if I were a man, because there were boys who got grades as bad as mine and they weren’t encouraged to give up.

Today I have much more maturity, but sometimes it is still difficult to hear these kinds of comments and process them internally. However, the certainty of my goals and my dreams of a better world through my research is stronger than any of that. Another thing that helps a lot in this type of situation is the groups that exist all over the world, such as Women in Science, Women in AI, Meninas Digitais (Brazil), Feminists in AI, and movements like this one, with this series of interviews, and many others that allow you to engage with the subject.

Knowing other stories, participating in events with lectures on this topic or in the groups I mentioned, being aware of impostor syndrome and, if necessary, seeking professional help is what I recommend for women today. These help a lot in keeping our strength intact.

There are many successful milestones in your career path: getting a PhD, creating a research group, becoming a university professor, and awards at relevant conferences. Are there any particular milestones that are more meaningful to you? Was there ever a turning point?

Certainly becoming a university professor was one of the most meaningful milestones for me, and now being a visiting professor in Bordeaux as well. These have always been dreams of mine. Communicating ideas, teaching, and learning are very important to me.

However, one of the milestones that I see as a turning point in my career was my postdoctoral position on a project run in partnership between Brazil and Europe, part of the Horizon 2020 program. This stage changed my view of research and helped me build a great network, and as it was finishing, I began my return to AI and started to concern myself with ethical AI and regulation.

What are the projects you will be focusing on in the future?

No doubt I will continue focusing my research on AI for social good, to change the world in a positive way, making it a more inclusive, equitable and fair place for all.

In the near future I will be focusing my research on AI for sustainability with the project “AI for extreme weather prediction in urban areas”.

I will also continue working on sustainable AI, investigating ways to make AI and HPC greener. For this research, I hope to continue the joint work with the Inria center at Université de Bordeaux and to strengthen this collaboration with Brazil.

Finally, in the near future I would like to go further with the projects for AI literacy in Brazil, for the students and teachers of public schools.




Ndane Ndazhaga is a Data Scientist who loves using data to improve businesses and help make decisions.

Isabella Bicalho-Frazeto is an all-things machine learning person who advocates for democratizing machine learning.





            AIhub is supported by:






©2024 - Association for the Understanding of Artificial Intelligence