“I am here to assist you today” – how we respond to chatbots


05 July 2021




Photo by Volodymyr Hryshchenko on Unsplash

As online users we are increasingly exposed to chatbots, one form of AI-enabled media technology. Stand-alone chatbots are often used for product or service recommendations, for example when shopping online or making financial or health-related decisions. What is the persuasive potential of these chatbots? Carolin Ischen studies how our perceptions of AI and our experience with chatbots influence our attitudes and behaviour.

Carolin Ischen is a PhD candidate in the Persuasive Communication group and the Digital Communication Methods Lab of the Communication Science department at the University of Amsterdam. She studies the persuasive potential of AI-driven media, specifically chatbots. “Chatbots are reshaping today’s media environment and moving the communication science field from computer-mediated communication towards human-machine communication. I therefore study technologies as communicators, and not only as tools for communication,” says Ischen.

In her research Ischen focuses on chatbots that recommend products or services, such as a recipe or an insurance policy, and on how our perceptions of and experience with these non-human assistants influence our behaviour. “People tend to respond to computers in ways comparable to how they respond to humans. I look at how much people enjoy the interaction with the chatbot and how they judge the content and brand information that is offered to them,” Ischen adds.

Carolin Ischen

Enjoyment is the key mechanism

In a first experiment Ischen and colleagues examined the effects of interacting with a stand-alone chatbot compared with a more traditional interactive website. Using tailor-made websites and virtual assistants, they randomly assigned a representative sample of the Dutch population to interact either with the chatbot or with the interactive website to receive a health insurance recommendation. Afterwards, participants completed a survey assessing the experience: How human- or machine-like did they perceive the assistant to be? How much did they enjoy the interaction? Would they purchase the brand that was recommended?

Ischen and colleagues found that enjoyment is the key mechanism explaining the persuasive advantage of chatbots over websites: interacting with a stand-alone chatbot resulted in a more enjoyable user experience, which in turn translated into stronger persuasive outcomes. Contrary to expectations, however, perceived anthropomorphism (i.e., the attribution of human-like characteristics) did not seem to play a significant role in this comparison. “The mere presentation of a chatbot as the source of communication, as done in this study, was likely not sufficient to increase human-likeness.”

Chatbots and privacy concerns

Ischen also looks at privacy concerns relating to chatbot interactions. “While such concerns are widely studied in an online (website) context, research in the context of chatbot interaction is still lacking.” Ischen investigated the extent to which chatbots with human-like cues are attributed human-like characteristics, and how this affects privacy concerns, information disclosure and adherence to recommendations. She finds that people have fewer privacy concerns and disclose more information when they perceive a chatbot as more human-like, which in turn makes them more likely to follow its recommendation.

Ischen considers the study of chatbots in consumer interactions important for consumer awareness and empowerment. “As consumers we need to be aware that what chatbots communicate to us is not neutral or bias-free. Are we as consumers informed enough about how this technology works and how to use it safely? What kind of regulations are needed for AI-driven advertising practices?”

Further reading

“I am here to assist you today”: The role of entity, interactivity and experiential perceptions in chatbot persuasion, Ischen, C., Araujo, T. B., van Noort, G., Voorveld, H. A. M., & Smit, E. G. (2020).
Privacy concerns in chatbot interactions, Ischen, C., Araujo, T., Voorveld, H., van Noort, G., & Smit, E. (2020).



