AIhub.org
 

How to benefit from AI without losing your human self – a fireside chat from IEEE Computational Intelligence Society


02 December 2024




The image is a very detailed, black-and-white sketch-like illustration featuring a complex scene of interconnected figures and technology. The artwork portrays various individuals in different environments to represent the relationship between technology and humans. 

In the foreground, multiple people are surrounded by computer screens filled with data visualisations, charts, and technical information. A woman seated in an armchair appears deep in thought, surrounded by data-filled monitors. Beside her, a man leans over, using a tablet to assist with their inspection of a plant or tree. In the centre, a figure holds a large frame or screen displaying anatomical illustrations, representing the use of AI to analyse medical imagery. To the left, another person is intently observing a computer screen, while a second figure nearby is deeply immersed in analysing data. A woman dominates the right side of the composition, gazing upwards as if in contemplation or envisioning something beyond the immediate scene. The background features more people, including a family holding hands, and other abstract representations of data.

Ariyana Ahmad & The Bigger Picture / Better Images of AI / AI is Everywhere / Licenced by CC-BY 4.0

In this fireside chat from IEEE Computational Intelligence Society, Tayo Obafemi-Ajayi (Missouri State University) asks Hava T. Siegelmann (University of Massachusetts, Amherst) about how to benefit from AI without losing your human self.

You can watch the chat in full below:


AIhub is supported by:




©2025 - Association for the Understanding of Artificial Intelligence