AIhub.org
 

#IJCAI2019 mini-interviews – Claus Aranha from University of Tsukuba


by Rahul Divekar
14 August 2019




Meet Claus Aranha, Assistant Professor at the University of Tsukuba, Center of Artificial Intelligence Research (C-AIR).

What are you presenting at IJCAI?
I am organizing the werewolf track of the ANAC competition.


Can you tell me more about ANAC?
ANAC stands for the Automated Negotiating Agents Competition<sup>1</sup>. It is a competition in which AI agents negotiate with each other and/or with humans. We have six leagues this year – Supply Chain Management, Werewolf (a social party game), Diplomacy (a board game), the Human-Agent League, the Agent-Agent League (agent negotiation with partial preferences), and the GENIUS league.


What is the real world impact of agents negotiating with each other?
Take for example the Supply Chain Management (SCM) league. The SCM industry involves many stakeholders – from suppliers who sell the raw materials, to factories that produce goods, to shops that sell them. Each one has preferences about what they wish to get out of a deal and what they are ready to compromise on. They have to coordinate (i.e. negotiate) with each other all the time over timelines, prices, quantities, and so on. It is a cumbersome and cognitively heavy job!

Imagine a group of AI agents, one representing each stakeholder, doing this for you. Wouldn’t things become easier?
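To make the idea concrete, here is a minimal sketch of an automated bilateral price negotiation between a hypothetical buyer and seller, each conceding toward its own limit over a fixed number of rounds. All names and numbers here are illustrative assumptions, not part of the ANAC SCM league itself, whose agents and protocols are far richer.

```python
# Toy alternating-concession price negotiation (illustrative only).
# The buyer starts low and raises its bid; the seller starts high and
# lowers its ask; a deal closes when the bid meets the ask.

def negotiate(buyer_limit, seller_limit, rounds=10):
    """Each side concedes linearly toward its limit.
    Returns the agreed price, or None if the deadline passes."""
    buyer_start = buyer_limit * 0.5     # buyer opens below its limit
    seller_start = seller_limit * 1.5   # seller opens above its limit
    for t in range(rounds + 1):
        frac = t / rounds
        bid = buyer_start + (buyer_limit - buyer_start) * frac    # buyer raises
        ask = seller_start + (seller_limit - seller_start) * frac # seller lowers
        if bid >= ask:
            return round((bid + ask) / 2, 2)  # agree at the midpoint
    return None  # no overlap before the deadline: no deal

price = negotiate(buyer_limit=100, seller_limit=80)  # → 89.0
```

When the buyer’s limit is below the seller’s (say `negotiate(50, 200)`), the offers never cross and the function returns `None` – capturing, in miniature, why deadline pressure and concession strategy matter in automated negotiation.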


You said something about the werewolf track. Could you say more?
It is a social party game in which agents must work out who the werewolf is<sup>2</sup>, in a setting where agents lie to each other and/or hide the truth about themselves. How can an agent deal with other agents who behave this way, while the agent itself is also deceiving others? It is a hard challenge!

We had 90 teams this year, out of which 70 sent an agent to the competition. We have 15 finalists, and the top three will be discussed tomorrow (August 15)! Keep an eye out 🙂
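A tiny sketch of the kind of reasoning a werewolf agent might start from (this is purely illustrative and not the AIWolf platform’s API): an observer tallies who gets accused across rounds of discussion. The sketch also shows the catch – a deceiving werewolf can skew the tally simply by accusing villagers, which is why the real problem is hard.

```python
# Toy suspicion tally for a werewolf-style game (illustrative only).
from collections import Counter

def suspicion(accusations):
    """accusations: list of (accuser, accused) pairs.
    Returns agents ranked from most- to least-accused."""
    tally = Counter(accused for _, accused in accusations)
    return tally.most_common()

# Hypothetical round of talk: A, B, and D accuse C; C accuses A back.
votes = [("A", "C"), ("B", "C"), ("C", "A"), ("D", "C")]
ranking = suspicion(votes)  # → [("C", 3), ("A", 1)]
```

A real agent would have to weigh each accusation by the accuser’s own credibility, since the werewolf votes too – a simple count like this is exactly what deception exploits.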


How can I get involved?
Check out the AI Wolf project page<sup>3</sup>. You can also get some sample code here: https://github.com/caranha/AIWolfCompo


<sup>1</sup> http://web.tuat.ac.jp/~katfuji/ANAC2019/

<sup>2</sup> https://en.wikipedia.org/wiki/Mafia_(party_game)

<sup>3</sup> http://aiwolf.org/en/




Rahul Divekar is a PhD Candidate at the Department of Computer Science at Rensselaer Polytechnic Institute.