AIhub.org
 

Springing into conferences (at the AAAI spring symposium)


by Laura Gemmell
01 June 2019




In March of this year I was lucky enough to travel to my first academic conference (thanks very much to EPSRC RISE and AAAI). Feeling a little nervous, extremely jet-lagged, and completely in awe of Stanford’s gorgeous campus, I attended the symposium on “Interpretable AI for Well-Being: Understanding Cognitive Bias and Social Embeddedness”. This was just one of nine symposia run during the three-day event.

My PhD research aims to create an educational framework to teach people about AI, particularly those who tend to be missed by other initiatives such as mainstream education and workplace retraining. I chose this symposium based on a KPMG report finding that people in the UK are most willing to share their personal data with the public health service, including for AI use to improve care, and I was hoping to come away with a list of studies and ideas that could be used in my research on AI education. Although the talks weren’t quite what I had in mind (my expectations were more mainstream: chatbots for depression and fitness trackers), they were interesting and inspiring throughout the two and a half days, and provided lots of food for thought.

I was extremely thankful for the introduction by the symposium organiser, Takashi Kido. He provided useful definitions and rich examples which were paramount to my understanding of the key concepts. Some useful terms that were used throughout the symposium:
Interpretable AI is AI whose actions can be easily understood by humans, rather than being the stereotypical ‘black-box’ neural network.
Well-being AI is AI that aims to promote psychological well-being (happiness) and maximise human potential.
Social embeddedness is a term taken from economic theory to describe how closely related the economy and social issues are. In AI, it refers to how much of a role AI will play in future economics.
Cognitive bias is a systematic error in thinking that affects the decisions and judgements that people make.

Cognitive bias is a very hot topic in the AI world at the moment, and I found the Cognitive Bias Codex (below) an insightful illustration of just how many types of cognitive bias exist. One newer type caught my attention: the Google effect, the tendency (one I share myself) to value forming good, keyword-rich search queries over remembering the information itself.

[Image: the Cognitive Bias Codex. Source: Wikipedia]

A few of the conference highlights for me were:
A wonderful talk on whether AI can be used in health care, using the example of AI predicting sleep apnea syndrome. For AI to be trusted in healthcare it needs to overcome four problems: hard-to-interpret models, inappropriate data, cognitive bias, and overly general outcomes. The talk discussed how all of these can be addressed. My favourite points were how social embeddedness can overcome generality by using sensors to provide individualised data, and how using decision trees rather than neural networks can make AI more interpretable.
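The interpretability point can be illustrated with a toy sketch. The features, thresholds and risk labels below are entirely invented (they are not from the talk); the example only shows why decision trees are often called interpretable: every prediction comes with a human-readable chain of if/then rules, unlike the opaque weights of a neural network.

```python
# A toy, hand-written decision tree for an imaginary apnea screening task.
# All features and thresholds here are illustrative assumptions, not a
# real clinical model.

def predict_apnea_risk(bmi, snoring_events_per_hour, age):
    """Return a risk label plus the rule path that produced it."""
    path = []
    if snoring_events_per_hour > 15:
        path.append("snoring_events_per_hour > 15")
        if bmi > 30:
            path.append("bmi > 30")
            return "high risk", path
        path.append("bmi <= 30")
        return "moderate risk", path
    path.append("snoring_events_per_hour <= 15")
    if age > 60:
        path.append("age > 60")
        return "moderate risk", path
    path.append("age <= 60")
    return "low risk", path

label, path = predict_apnea_risk(bmi=32, snoring_events_per_hour=20, age=45)
print(label)               # high risk
print(" AND ".join(path))  # the full, human-readable explanation
```

A learned tree (for example, one fitted with scikit-learn and printed via `export_text`) has the same property: the whole model can be read as a list of such rules, which is what makes it easier to audit than a neural network.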

A lively discussion on the concept of Universal Basic Property, under which every human would be given a property or some land on which to live. The concept would be dependent on strict population controls, which spurred debate about humans needing to move to Mars! The talk also covered some thought-provoking points on ‘robot taxes’ and Universal Basic Income.

Meeting someone I follow on Twitter in real life. A Developer Advocate for Kaggle was at the conference speaking at another symposium. After a quick online exchange, we met for a coffee which was lovely (and actually turned out to be useful for my PhD research!).

After this I was actually keen to present at a conference myself: being delightfully introverted, I find small talk makes me anxious, so having something to talk about would make the networking part of conferences easier for me. I have since submitted to, and will be presenting at, the Doctoral Consortium at the 20th International Conference on Artificial Intelligence in Education in just a couple of weeks.





Laura Gemmell is a PhD student at Bristol Robotics Laboratory working on AI education for those left behind, and founder of Taught By Humans.




            AIhub is supported by:












©2024 - Association for the Understanding of Artificial Intelligence


 











