AIhub.org

“I am here to assist you today” – how we respond to chatbots


05 July 2021



Photo by Volodymyr Hryshchenko on Unsplash

As online users we are increasingly exposed to chatbots, one form of AI-enabled media technology. Stand-alone chatbots are often used for product or service recommendations, for example when shopping online or making financial or health-related decisions. What is the persuasive potential of these chatbots? Carolin Ischen studies how our perceptions of AI and our experience with chatbots influence our attitudes and behaviour.

Carolin Ischen is a PhD candidate in the Persuasive Communication group and the Digital Communication Methods Lab of the Communication Science department at the University of Amsterdam. She studies the persuasive potential of AI-driven media, specifically chatbots. “Chatbots reshape today’s media environment and move the communication science field from computer-mediated communication towards human-machine communication. I therefore study technologies as communicators, not only as tools for communication,” says Ischen.

In her research Ischen focuses on chatbots that recommend products or services, such as a recipe or an insurance policy, and on how our perceptions of and experience with these non-human assistants influence our behaviour. “People tend to respond to computers in ways comparable to how they respond to humans. I look at how people enjoy the interaction with the chatbot and how they judge the content and brand information that is offered to them,” Ischen adds.

Carolin Ischen

Enjoyment is the key mechanism

In a first experiment, Ischen and colleagues examined the effects of interacting with a stand-alone chatbot compared with a more traditional interactive website. Using tailor-made websites and virtual assistants, they randomly assigned participants from a representative sample of the Dutch population to interact either with the chatbot or with the interactive website to receive a health insurance recommendation. Afterwards, participants completed a survey assessing the experience: How human- or machine-like did they perceive the assistant to be? How much did they enjoy the interaction? Would they purchase the brand that was recommended?

Ischen and colleagues found that enjoyment is the key mechanism explaining the positive effect of chatbots over websites: interacting with a stand-alone chatbot resulted in a more enjoyable user experience, which subsequently translated into stronger persuasive outcomes. Contrary to expectations, however, perceived anthropomorphism (i.e., the attribution of human-like characteristics) did not seem particularly relevant in this comparison. “The mere presentation of a chatbot as the source of communication as done in this study was likely not sufficient to increase human-likeness.”

Chatbots and privacy concerns

Ischen also looks at privacy concerns relating to chatbot interactions. “While such concerns are widely studied in an online (website) context, research in the context of chatbot interaction is still lacking.” Ischen investigated the extent to which chatbots with human-like cues are attributed human-like characteristics, and how this affects privacy concerns, information disclosure and recommendation adherence. She finds that people have fewer privacy concerns and disclose more information when they perceive chatbots as more human-like, which in turn often leads them to follow the chatbot’s recommendation.

Ischen considers the study of chatbots in consumer interactions important for consumer awareness and empowerment. “As consumers we need to be aware that what is being communicated to us by chatbots is not neutral or bias-free. Are we as consumers informed enough about how this technology works and how to use it safely? What kind of regulations are needed for advertising practices driven by AI?”

Further reading

“I am here to assist you today”: The role of entity, interactivity and experiential perceptions in chatbot persuasion, Ischen, C., Araujo, T. B., van Noort, G., Voorveld, H. A. M., & Smit, E. G. (2020).
Privacy concerns in chatbot interactions, Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2020).




©2026.01 - Association for the Understanding of Artificial Intelligence