
Interview with Francesca Rossi – talking sustainable development goals, AI regulation, and AI ethics

by Andrea Rafai
28 March 2024




At the International Joint Conference on Artificial Intelligence (IJCAI) I was lucky enough to catch up with Francesca Rossi, IBM Fellow and AI Ethics Global Leader, and president of AAAI. There were so many questions I wanted to ask, and we covered some pressing topics in AI today.

Andrea Rafai: My first question concerns the UN Sustainable Development Goals (SDGs). It seems that there is a lot of potential for using AI to help work towards the 17 goals. What is your view on these goals and the long-term outlook? Can these goals be accomplished by 2030? What types of initiatives are currently ongoing in relation to the goals?

Francesca Rossi: I’m not sure if it’s possible to reach all of the sustainable development goals by the year 2030, because there are so many factors that need to be taken into account. For example, technology can be of assistance, but it also has the potential to impact these objectives in a number of different ways. Additionally, political willpower is required in order to create and implement policies that will help achieve the targets. However, there are already many efforts using AI to help achieve or get closer to the sustainable development goals.

In my line of work, I find the Sustainable Development Goals helpful because they give an idea of what the role of AI research should be: not just to advance AI because it is scientifically interesting, but also because it can help humanity move in the direction of the picture of tomorrow that the SDGs provide.

At IJCAI there was a specific track on artificial intelligence for social good, which links AI initiatives to the SDGs. In 2022, I co-chaired the first edition of this track. This is a very nice way for these societal themes to be included in such a large, mostly technical AI conference. To that end, I’m also co-chairing a project with the Global Partnership on AI, which is geared at assisting AI teams working in different parts of the globe in improving and scaling the responsible AI dimensions of their projects. This includes working towards some of the Sustainable Development Goals. Whether it’s dealing with deforestation, an initiative about healthcare in developing countries, or analyzing a technology that is being used to ensure it is consistent with responsible AI ideals, there is a lot that needs to be done. There are several projects, but I think the key purpose is to provide a method to be proactive with AI, not only to repair technological concerns. It is not enough to simply react when problems arise and say “Oh okay, I see some issues with the technology, let’s put a patch on it to mitigate that issue.” A patch can ameliorate a problem, but the Sustainable Development Goals empower and encourage those who build and use technology to take a more proactive approach.

Andrea: I’m aware that there aren’t many legally enforceable laws in the field of artificial intelligence. There are policies and regulations, but they aren’t formally part of the sustainable development goals. To put it another way, an organization isn’t strictly liable for supporting these kinds of developments.

Francesca: Regulation is obviously essential, and new rules are constantly being proposed, such as those in Europe, the United States, and Asia. However, I do not believe that regulation will fix all of the problems. First and foremost, it is extremely slow compared to the pace of the technology. Technology advances significantly faster than governments can enact actual laws, leaving a patchwork of distinct and varied measures to ensure that technology is created and applied properly. Therefore, we need governments to put regulations in place, but we also need businesses to create internal rules to ensure that the technology they produce, use, and provide to their clients is built correctly and used correctly. Standards, associations that develop internationally recognized, shared, and agreed-upon policies for businesses, certifications, audits, and many other elements all play an important supporting role. Regulation is one of them, and a very essential one, but it will not solve the problems on its own.

Andrea: What role can AI play in mitigating the effects of global warming?

Francesca: Climate research can definitely benefit from AI, because understanding what to do about climate change requires deep analysis of a lot of data, and AI can help with that. However, the newest AI systems, such as ChatGPT and others, use a lot of energy in the training phase, which in the future may have an impact on the resources of our planet. Because of this, many AI researchers are trying to understand how to build systems with similar performance but lower energy consumption. Therefore, there are two sides to this: the environmental impact of training large models, versus the potential to better recognize and respond to climate change challenges.
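To put the training-phase energy point in perspective, here is a back-of-envelope estimate; every number below is a hypothetical placeholder rather than a figure for any real system:

```python
# Back-of-envelope training energy estimate.
# All numbers are hypothetical placeholders, not figures for any real model.
n_gpus = 1000              # accelerators used for training (assumed)
gpu_power_kw = 0.4         # average draw per accelerator, in kW (assumed)
training_hours = 30 * 24   # one month of continuous training (assumed)

energy_kwh = n_gpus * gpu_power_kw * training_hours
print(f"{energy_kwh:,.0f} kWh")  # 288,000 kWh for this hypothetical run
```

Even with these made-up numbers, the scale suggests why researchers are looking for architectures that deliver similar performance at lower energy cost.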

Andrea: Do you think that the COVID pandemic, during which many of us were confined to our homes, accelerated developments in AI? I’ve been to sessions at the conference, but this topic hasn’t been brought up with respect to how AI progressed during that time.

Francesca: The two years of the COVID experience accelerated the digitalization of society: we had to use technology to connect with each other for work, for students to continue their school activities, and so on. Everybody, even people who did not usually use technology much, began to do so. This means that digital technology has permeated every aspect of modern life. Now that we are able to go back to doing things in person, such as here at the conference, we do so with the awareness that there are a great many more things we can accomplish with the assistance of technology. However, although digital methods of connection have their advantages, I believe that relying on them to the exclusion of in-person human experience would ultimately be counterproductive to society as a whole and to the creativity it entails. Thus, there is still a lot of room for debate on the most effective strategy for integrating these two forms of interaction and collaboration.

Andrea: Could you talk a bit about the concept of fairness related to AI?

Francesca: Fairness is a very important value: everybody should be treated equally and have the same opportunities. So we need to make sure that AI systems also understand how essential it is and behave accordingly. There are many definitions of fairness, and certain definitions may be more suitable in a given scenario than others, which makes this challenging: it is not just a technical issue but a socio-technical one. This means that, in order to identify the concept of fairness to embed into a machine, so that it makes judgements that are fair in that sense, a large number of stakeholders must be consulted. This is a crucial point. Fairness immediately demonstrates that developing technology cannot be separated from consulting the communities it will affect: we must understand what it means to be fair and just in that specific community, in that specific decision scenario, in order to build balanced decision-making systems. Technology, and AI in particular, is no longer just a scientific field; it is also a social-scientific one, because you need to work out what values to instill in the technology together with people who may not know much about technology but will be affected by it.
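As a concrete illustration of how definitions of fairness can pull in different directions, here is a minimal sketch (my own toy example, not something from the interview; the data and function names are made up) of two common formal criteria, demographic parity and equal opportunity:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (recall) between two groups."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy data: the same predictions can satisfy one definition
# of fairness while violating the other.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_gap(y_pred, group))         # 0.0 -> "fair"
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.5 -> "unfair"
```

Here both groups receive positive predictions at the same rate (demographic parity holds), yet qualified members of one group are recognized half as often (equal opportunity is violated). Deciding which gap matters in a given decision scenario is exactly the socio-technical question that requires consulting stakeholders.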

Andrea: Moving on to talk about AAAI, I was wondering about the recent policy on the use of generative AI tools in publications?

Francesca: AAAI is the global association of AI researchers, of which I am currently the president. We recently released a policy stating that generative AI tools, such as ChatGPT and others, cannot be authors on a paper. They can, however, be used as tools, and authors should declare if they used them in writing their paper, whether for technical terms or facts. They can’t be authors alongside people because, for example, they can’t be held responsible for everything that is written in the paper, as an author should be. Therefore AAAI, as well as other associations and publishing groups, has chosen not to allow AI tools or AI systems to be authors in their publications.

Andrea: Could you talk about an interesting project that you’re involved with at the moment?

Francesca: The primary AI project I’m leading at the moment is one where I collaborate with colleagues from IBM, the company where I work, as well as researchers from several universities. In this project, we draw inspiration from cognitive theories of how humans make decisions in order to develop AI systems and architectures that can make decisions in a similar manner, while also taking into account the very significant differences between an AI system and a human being. In particular, we draw on the theory that when people make decisions, they often employ one of two modes. One is what we call “thinking fast”: a very reactive, intuitive, and almost unconscious way of deciding, which we use most of the time and which makes day-to-day decisions easy. The other is “thinking slow”: more deliberate, conscious, and requiring a lot of attention to how we make choices. Over time, we figure out when to use one or the other for each problem we need to solve, increasing the use of the fast mode as we become more familiar with a problem. Learning to drive, for example, is quite tough at first, so we think slowly and carefully; after a while, we do it almost without thinking. We wanted to investigate what this kind of gradual, incremental learning might look like within an AI architecture.

As a result, we designed an architecture with some problem-solving engines comparable to the human thinking-fast modality, which solve problems using only past experience, like machine learning or data-driven approaches, and other solving engines that instead reason about the problem to be solved, in a way similar to human thinking-slow, and can be supported by symbolic, logic-based AI approaches. These solving engines are paired with a metacognitive component that determines which mode to employ, depending on the nature of the problem at hand and on how capable the fast-thinking solvers are of solving it.

Our research team aims to determine whether these architectures exhibit emergent behavior similar to a human’s, transitioning over time from thinking slow to thinking fast, and whether the machine makes better decisions than the thinking-fast solvers or the thinking-slow solvers alone. The current experimental results are very promising in both these dimensions, and they show the usefulness of an AI architecture where existing solvers can be plugged in in a very modular fashion. We are also studying the use of our architecture in a human-AI collaborative decision framework, where the AI can nudge humans to use their own slow thinking, introducing friction to avoid overtrusting the AI. The project is also very interesting in its multi-disciplinarity, since it involves researchers from various disciplines besides AI, including philosophy and cognitive science, which I strongly believe is increasingly important in AI research.
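As a purely illustrative sketch (a toy, not IBM’s actual system; the class names FastSolver, SlowSolver, and MetaCognition are hypothetical), the arbitration Rossi describes might be organized like this, with the metacognitive component routing each problem and feeding slow-mode solutions back into fast-mode experience:

```python
from dataclasses import dataclass, field

@dataclass
class FastSolver:
    """'Thinking fast': answers instantly from past experience,
    standing in for a learned, data-driven model."""
    experience: dict = field(default_factory=dict)  # problem -> solution

    def solve(self, problem):
        return self.experience.get(problem)

    def confidence(self, problem):
        # Confident only on problems it has already seen.
        return 1.0 if problem in self.experience else 0.0

class SlowSolver:
    """'Thinking slow': deliberate reasoning (e.g. search or logic),
    general but expensive."""
    def solve(self, problem):
        return f"reasoned({problem})"  # stand-in for costly reasoning

class MetaCognition:
    """Routes each problem to a mode based on the fast solver's
    competence, and lets slow-mode results become fast-mode experience."""
    def __init__(self, fast, slow, threshold=0.5):
        self.fast, self.slow, self.threshold = fast, slow, threshold

    def solve(self, problem):
        if self.fast.confidence(problem) >= self.threshold:
            return self.fast.solve(problem), "fast"
        solution = self.slow.solve(problem)
        self.fast.experience[problem] = solution  # incremental learning
        return solution, "slow"

meta = MetaCognition(FastSolver(), SlowSolver())
print(meta.solve("route A->B"))  # ('reasoned(route A->B)', 'slow') -- first encounter
print(meta.solve("route A->B"))  # ('reasoned(route A->B)', 'fast') -- now familiar
```

Repeated exposure shifts problems from the slow path to the fast path, mirroring the driving example; in a real system the confidence estimate and the solvers themselves would of course be far richer.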

About Francesca Rossi

Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is based at the T.J. Watson IBM Research Lab, New York, USA, where she leads research projects and co-chairs the IBM AI Ethics board. Her research interests centre on artificial intelligence, with special focus on constraint reasoning, preferences, multi-agent systems, computational social choice, neuro-symbolic AI, cognitive architectures, and value alignment. On these topics, she has published over 220 scientific articles in journals, conference proceedings, and book chapters. She is a fellow of both the worldwide association of AI (AAAI) and the European one (EurAI). She has been president of IJCAI (the International Joint Conference on AI) and is the current president of AAAI. She is a member of the board of the Partnership on AI and co-chairs the Responsible AI working group of the Global Partnership on AI. She also co-chairs the OECD Expert Group on AI Futures and has been a member of the European Commission High-Level Expert Group on AI.





Andrea Rafai is a Ph.D. candidate at the School of Politics and International Relations, East China Normal University.



