Congratulations to the winners of the #AIES2024 best paper awards

28 October 2024




The Seventh AAAI/ACM Conference on AI, Ethics, and Society (AIES-24) was held in San Jose, California, from 21 to 23 October 2024. The best paper award winners were announced during the opening session of the conference. They are as follows:

Best paper

Red-Teaming for Generative AI: Silver Bullet or Security Theater?
Michael Feffer, Anusha Sinha, Wesley H. Deng, Zachary C. Lipton, Hoda Heidari

Abstract: In response to rising concerns surrounding the safety, security, and trustworthiness of Generative AI (GenAI) models, practitioners and regulators alike have pointed to AI red-teaming as a key component of their strategies for identifying and mitigating these risks. However, despite AI red-teaming’s central role in policy discussions and corporate messaging, significant questions remain about what precisely it means, what role it can play in regulation, and how it relates to conventional red-teaming practices as originally conceived in the field of cybersecurity. In this work, we identify recent cases of red-teaming activities in the AI industry and conduct an extensive survey of relevant research literature to characterize the scope, structure, and criteria for AI red-teaming practices. Our analysis reveals that prior methods and practices of AI red-teaming diverge along several axes, including the purpose of the activity (which is often vague), the artifact under evaluation, the setting in which the activity is conducted (e.g., actors, resources, and methods), and the resulting decisions it informs (e.g., reporting, disclosure, and mitigation). In light of our findings, we argue that while red-teaming may be a valuable big-tent idea for characterizing GenAI harm mitigations, and that industry may effectively apply red-teaming and other strategies behind closed doors to safeguard AI, gestures towards red-teaming (based on public definitions) as a panacea for every possible risk verge on security theater. To move toward a more robust toolbox of evaluations for generative AI, we synthesize our recommendations into a question bank meant to guide and scaffold future AI red-teaming practices.

Read the paper in full here.


Best paper runner-up

The Code That Binds Us: Navigating the Appropriateness of Human-AI Assistant Relationships
Arianna Manzini, Geoff Keeling, Lize Alberts, Shannon Vallor, Meredith Ringel Morris, Iason Gabriel

Abstract: The development of increasingly agentic and human-like AI assistants, capable of performing a wide range of tasks on users’ behalf over time, has sparked heightened interest in the nature and bounds of human interactions with AI. Such systems may indeed ground a transition from task-oriented interactions with AI, at discrete time intervals, to ongoing relationships — where users develop a deeper sense of connection with and attachment to the technology. This paper investigates what it means for relationships between users and advanced AI assistants to be appropriate and proposes a new framework to evaluate both users’ relationships with AI and developers’ design choices. We first provide an account of advanced AI assistants, motivating the question of appropriate relationships by exploring several distinctive features of this technology. These include anthropomorphic cues and the longevity of interactions with users, increased AI agency, generality and context ambiguity, and the forms and depth of dependence the relationship could engender. Drawing upon various ethical traditions, we then consider a series of values, including benefit, flourishing, autonomy and care, that characterise appropriate human interpersonal relationships. These values guide our analysis of how the distinctive features of AI assistants may give rise to inappropriate relationships with users. Specifically, we discuss a set of concrete risks arising from user–AI assistant relationships that: (1) cause direct emotional or physical harm to users, (2) limit opportunities for user personal development, (3) exploit user emotional dependence, and (4) generate material dependencies without adequate commitment to user needs. We conclude with a set of recommendations to address these risks.

Read the paper in full here.


Best student paper

Automate or Assist? The Role of Computational Models in Identifying Gendered Discourse in US Capital Trial Transcripts
Andrea W Wen-Yi, Kathryn Adamson, Nathalie Greenfield, Rachel Goldberg, Sandra Babcock, David Mimno, Allison Koenecke

Abstract: The language used by US courtroom actors in criminal trials has long been studied for biases. However, systematic studies for bias in high-stakes court trials have been difficult, due to the nuanced nature of bias and the legal expertise required. Large language models offer the possibility to automate annotation. But validating the computational approach requires both an understanding of how automated methods fit in existing annotation workflows and what they really offer. We present a case study of adding a computational model to a complex and high-stakes problem: identifying gender-biased language in US capital trials for women defendants. Our team of experienced death-penalty lawyers and NLP technologists pursue a three-phase study: first annotating manually, then training and evaluating computational models, and finally comparing expert annotations to model predictions. Unlike many typical NLP tasks, annotating for gender bias in months-long capital trials is complicated, with many individual judgment calls. Contrary to standard arguments for automation that are based on efficiency and scalability, legal experts find the computational models most useful in providing opportunities to reflect on their own bias in annotation and to build consensus on annotation rules. This experience suggests that seeking to replace experts with computational models for complex annotation is both unrealistic and undesirable. Rather, computational models offer valuable opportunities to assist the legal experts in annotation-based studies.

Read the paper in full here.


Best student paper runner-up

You Still See Me: How Data Protection Supports the Architecture of AI Surveillance
Rui-Jie Yew, Lucy Qin, Suresh Venkatasubramanian

Abstract: Data forms the backbone of artificial intelligence (AI). Privacy and data protection laws thus have strong bearing on AI systems. Shielded by the rhetoric of compliance with data protection and privacy regulations, privacy-preserving techniques have enabled the extraction of more and new forms of data. We illustrate how the application of privacy-preserving techniques in the development of AI systems–from private set intersection as part of dataset curation to homomorphic encryption and federated learning as part of model computation–can further support surveillance infrastructure under the guise of regulatory permissibility. Finally, we propose technology and policy strategies to evaluate privacy-preserving techniques in light of the protections they actually confer. We conclude by highlighting the role that technologists could play in devising policies that combat surveillance AI technologies.

Read the paper in full here.


The open-access proceedings from the conference are available here.








