AAAI 2021 Spring Symposium on Implementing AI Ethics

by Marija Slavkovik
23 April 2021




Once upon a time, there was the AAAI 2005 Fall Symposium on Machine Ethics. It is perhaps the one event I wish I could have attended; otherwise I do not entertain desires of being older. Much of the work in machine ethics can be traced back to the discussions at that symposium, but those discussions also seem to have spurred many questions that we still find interesting. When the AAAI 2021 Spring Symposium on Implementing AI Ethics was announced, it was not difficult to clear the schedule for “a deeper discussion on how intelligence, agency, and ethics may intermingle in organizations and in software implementations”. The 2021 symposium is not by definition the spiritual descendant of the 2005 one, but judging by the composition of participants and the structure of the discussions, I expect great ideas will be developed after this one as well.

Let us take it from the top. What is machine ethics? No. No! Walk your mind away from the trolley problems!

Machine ethics, per that 2005 symposium, “is concerned with the behavior of machines towards human users and other machines”. Machines here include both software and hardware. The definition of machine ethics gets pulled in two directions. Pulled to the philosophy side, machine ethics studies which theories of ethics should be implemented by and applied to machines, if any. Pulled to the computer science side, machine ethics studies how to automate moral reasoning.
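To make “automating moral reasoning” a little more concrete, here is a minimal sketch of what such a component could look like in code. Everything in it (the action names, the duty flag, the benefit numbers, the choose function) is a hypothetical illustration of mine, not anything presented at the symposium; it layers a hard deontological filter on top of a crude consequentialist ranking.

```python
# Toy sketch of automated moral reasoning: discard actions that violate
# a hard duty, then rank the rest by expected benefit. All names and
# numbers below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Action:
    name: str
    violates_duty: bool       # e.g. deceives the user, breaks a promise
    expected_benefit: float   # crude utility estimate for stakeholders


def choose(actions: list[Action]) -> Action | None:
    """Filter out impermissible actions, then maximise expected benefit."""
    permissible = [a for a in actions if not a.violates_duty]
    if not permissible:
        return None  # no permissible action: defer to a human
    return max(permissible, key=lambda a: a.expected_benefit)


if __name__ == "__main__":
    options = [
        Action("nudge user with a dark pattern", violates_duty=True, expected_benefit=0.9),
        Action("show a plain reminder", violates_duty=False, expected_benefit=0.6),
        Action("do nothing", violates_duty=False, expected_benefit=0.1),
    ]
    print(choose(options))  # -> the plain reminder
```

Even this toy makes the philosophical choices visible: which duties count as hard constraints, and how benefit is measured, are exactly the questions the two directions of machine ethics argue over.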

The AAAI 2021 symposium was on the topic of implementing AI ethics. AI ethics is a larger discipline that has emerged since 2005 and subsumes machine ethics (according to some, but not all). Pinpointing a definition of AI ethics is a bit trickier. AI ethics can be considered an umbrella term that consolidates research on how to ensure that AI has a nonnegative ethical footprint on society. There are two general directions in AI ethics. One consolidates efforts to ensure that AI research and applications are under adequate human oversight. The second consolidates efforts to ensure that AI is built with the capability to behave in a way that respects user and societal values. We can recognise machine ethics in this second direction. The first direction subsumes research in accountability, trust, transparency, explainability, fairness, privacy and responsible AI.

The AAAI 2021 symposium asked the hard question: how do we implement AI ethics? The symposium was organised by people who have been working on that same question for at least a decade now. They recognise very well that this is a seriously interdisciplinary field; hence, the 70+ participants were greeted with an “educate yourself” list of resources. Symposia are not conferences: there is no talking at people, only talking with people. This is what we have dearly missed in this zoomscape that is our life now: thoughtful, constructive discussions about science. The discussion sessions spanned three days and were organised around the topics of machine ethics, the values that AI ethics should be concerned with, and policy to direct the operationalisation of AI ethics. Sadly, some sessions followed the well-intentioned idea of parallel sessions, one for the American time zones and one for the European, which meant that all the participants rarely met as one group.

The symposium opened with a very strong insight from Joanna Bryson: we apply terms such as trust and responsibility to machines, but in reality these are human terms that can only apply to humans. We trust the sell-by date on the milk carton to be correct information, but that trust is placed in a system of certification, customer protection and quality control, not in the carton itself. AI suffers from being too easy to anthropomorphize.

We concluded the symposium with summaries on three tracks: the corporate agenda, the public policy agenda and the research agenda. For each of these agendas, the participants identified actionable suggestions, insights and open questions. I will highlight some open questions from each agenda.

In the corporate agenda, the focus is on how to empower organisations of all sizes to implement AI principles and guidelines. There is a disconnect between agreeing on guidelines and operationalising that agreement, both at all levels of the organisation and in the procurement of AI systems. There is also a demand for global implementation of guidelines while there is no global authority to enforce such implementation.

In the policy agenda, issues of standardisation and certification took much of the focus. It is becoming more and more clear that implementing AI ethics requires a network of systems, tools, policies, regulations and good will. With ethical impact, it is not only hard to ameliorate it; it is hard even to recognise that some technology has an impact in the first place. How can we identify the undesirable behaviour of software systems? Who is able to do this task, and what support should be put in place to help them do something about it? The issues discussed in the policy agenda highlight once again that we lack the right concepts and taxonomies to even describe and analyse issues of the ethical impact of AI. For example, the term transparency is used as a value that an algorithm should embody and as a value that we want in our society, but it also describes the amount of information about an AI system that is available for inspection by different regulators. Moving forward requires awareness of the different concepts hidden behind the same name.

In the research agenda we tried to pinpoint what we still do not know, and what we still do not have, in order to advance AI ethics and specifically machine ethics. Automating moral reasoning forces new rigour on theories from the humanities and social sciences. Where our human common sense has been enough to make the right choice, we now need to formalise and measure many features of options, situations and trade-offs, as the sketch below tries to illustrate.
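Here is a small sketch of what “formalising and measuring trade-offs” forces us to write down explicitly. The features, scores and weights are invented for illustration; choosing them well is precisely the open research problem, not something this snippet solves.

```python
# Toy sketch: collapsing an ethical trade-off into numbers. Every
# quantity below is a value judgement that common sense never had to
# state explicitly; the features, options and weights are invented.

FEATURES = ("privacy", "accuracy", "fairness")

# Each option is scored on each feature in [0, 1].
OPTIONS = {
    "collect detailed user data": {"privacy": 0.2, "accuracy": 0.9, "fairness": 0.5},
    "use aggregate data only": {"privacy": 0.9, "accuracy": 0.6, "fairness": 0.7},
}


def trade_off(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Collapse feature scores into a single number via a weighted sum."""
    return sum(weights[f] * scores[f] for f in FEATURES)


weights = {"privacy": 0.5, "accuracy": 0.3, "fairness": 0.2}  # a value judgement
for name, scores in OPTIONS.items():
    print(f"{name}: {trade_off(scores, weights):.2f}")
```

The point of the toy is not the weighted sum (a serious moral theory might refuse to aggregate at all) but that a machine needs every such judgement spelled out before it can reason.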

AI ethics can be a Lewis Carroll type of rabbit hole: a discussion can quickly fall apart into surrealist confusion. This is because AI is a mirror of our society. When discussing AI ethics it is easy to slip into discussing how to fix what is wrong with humanity instead of understanding the mirror itself. Some of us are habitual disciplinary trespassers, and we need the patience and vigilance to understand the language and accomplishments of other disciplines. The AAAI 2021 Spring Symposium on Implementing AI Ethics gave us a nice virtual space for finding a common language. Perhaps to advance the state of AI ethics we need to think of a different kind of research group: one that exists across faculties and has its door open to policy makers.

Look for a report on this event in AAAI's AI Magazine and a special issue of extended abstracts from the participants of the symposium.



Marija Slavkovik is an associate professor in AI at the University of Bergen



