
#IJCAI2019 main conference in tweets


by Nedjma Ousidhoum
13 August 2019





The main IJCAI2019 conference started on August 13th. The organizers gave the opening remarks, presented the conference statistics, and announced this year's award winners.

The Opening Ceremony

IJCAI2019 numbers

Special track
https://twitter.com/JGSchaeffer/status/1161258357706989569
Some of the IJCAI2019 Awards

Talks
“Doing for robots what Evolution did for us” by Leslie Kaelbling.

“Human-level intelligence or animal-like abilities?” by Adnan Darwiche.

Diversity in AI panel discussion

Demos and booths
Company demos and booths were set up alongside the poster sessions.

Paper presentation sessions ran in parallel in other rooms.

Robot challenge

 




Nedjma Ousidhoum is a postdoc at the University of Cambridge.