
How AI-powered chatbots can make or break consumer trust

2025-04-05 08:50:55

by Jennifer Walter, University of Wisconsin-Milwaukee

Online shop. Credit: Unsplash/CC0 Public Domain

Chatbots—those little text bubbles that pop up in the corner of so many consumer sites—have long been a fixture in the digital world. Now, the growing popularity of generative AI programs has only supercharged their presence, and their abilities.

Conversations with ChatGPT and similar apps are getting more realistic by the day. Artificial intelligence-powered chatbots are now woven into many businesses' customer service, outreach and sales approaches.

But how is this widespread AI adoption affecting consumers? That's a question for Scott Schanke, an assistant professor in UWM's Lubar College of Business. His work, which is one of several AI-focused research projects at UWM, centers on the design of AI agents for public-facing business interactions, and how different interfaces can make or break consumer trust.

"AI agents (can) fill this sort of human-facing job role," Schanke said. "Maybe it's collecting information or facilitating a sale."

A lot goes into making sure consumers actually finish filling out a form or complete a purchase. Different traits can sway a person's interaction with a chatbot, and ultimately an organization's ability to gain their trust.

Exploring how chatbots shape consumer interaction will give businesses valuable insight into the best ways to deploy new AI technologies. This includes other formats as well, such as voice clones. Schanke's work will also help researchers pinpoint future uses for the technology—both constructive and nefarious.

"The whole idea here is that we need to try and be forward-looking," Schanke says. "This is sort of an inflection point that we're starting to see with a lot of these generative AI technologies, where … we don't really know what the potential downsides are."

Chatbots in context

For a 2021 study in the journal Information Systems Research, Schanke and colleagues explored how chatbot humanization impacted a customer's likelihood of accepting an offer. They partnered with a secondhand clothing retailer to automate its clothing buyback process.

Schanke designed a chatbot for the company with varying degrees of human-like qualities. Some versions told jokes, paused longer between replies or introduced themselves by name. Ultimately, anthropomorphism helped the bots secure more sales—but it came with a cost.

Consumers didn't tend to push back on offers that came from computer-like bots. "Meaning, if you seem more like a bot, I am more willing to take a lower offer because I'm not thinking about any sort of intent behind the offer," Schanke says. On the other hand, when bots seemed more human, customers focused more on negotiating to get the best price.

In other contexts, such as charitable giving, anthropomorphism also comes with drawbacks. For a report that is currently under review, Schanke partnered with a social justice organization in Minneapolis to deploy a chatbot that interacts with potential donors. Using AI-powered chatbots could help charitable organizations, which typically have fewer resources than corporations, Schanke said.

"A lot of these organizations… it's hard for them to stay afloat. And I think chatbots could be a way to help them automate certain processes," he said. But when chatbots appear too human, potential donors are less likely to open their wallets.

The big reason is that charities tend to operate in more emotional contexts. Asking for donations for flood victims or people facing food insecurity, for example, feels much more high-stakes than selling used clothes. "Having high degrees of anthropomorphism as well as high degrees of emotional appeals are counterproductive because it's already an emotional context and it's almost too abrasive to people," Schanke said.

Logical, bot-like approaches, on the other hand, resulted in more conversions in the outreach process. Ultimately, context matters when deploying chatbots in different settings, and it's important for organizations to know which traits will push or pull consumers away.

Scott Schanke, an assistant professor in UWM's Lubar College of Business, studies the design of artificial intelligence agents for public-facing business interactions, and how different interfaces can make or break consumer trust. Credit: UWM Photo/Elora Hennessey

Familiar voices elicit consumer trust

While not as common as chatbots in consumer-facing settings, voice clones are the next frontier in AI-driven interaction. These bots, also known as audio deepfakes, mimic the voices of real human beings. AI voice programs only need a few seconds of audio from a real person talking to generate a hyper-realistic clone that can say just about anything.

"Folks have been using these mostly for parody," Schanke says, pointing to the many examples of TikTok videos where a celebrity appears to sing a song or recite a speech that they never actually said. But organizations are interested in how this technology could enhance customer support and outreach, much like chatbots.

The question is how much consumers will trust it—and how easily voice cloning can manipulate perception. For a study published in Management Science, Schanke and colleagues invited participants to talk with AI voice clones over the phone. They found that bots seemed more trustworthy when they spoke in a clone of the participant's own voice. And even in scenarios where the researchers told participants that the "person" on the other end was not to be trusted, participants still believed what the bot told them when it used their own voice.

"Even in that situation, when we give them this information, they're more willing to trust that other party, even when they know that this person is not trustworthy," Schanke says.

Additionally, even in cases where the bots disclosed that they were, in fact, bots, participant trust remained high. Findings like these could help inform legislation to protect consumers against nefarious or misleading uses of AI.

Thinking five years ahead

Voice clones have already been used to carry out complex scams, create fake news reports and even rob a bank. Because of how easily they can generate a believable persona with just a few seconds of audio, the technology wields the power to manipulate unsuspecting people and supercharge malicious lies.

But it all depends on how voice clones are used. "My belief is that technology is neither good nor bad," Schanke said. There is potential for both positive and negative outcomes with generative AI tools—it just matters who's using them and for what purpose.

As a researcher, Schanke aims to explore the wide variety of possibilities that AI technologies present. Shedding light on how chatbots and voice clones can be used raises awareness among people who work with AI systems. It can also alert the public to potentially manipulative applications and help make the case for consumer legislation.

"I think it's really important for us as researchers to think five years ahead," Schanke said. "How could we potentially protect people, or at least drive transparency that this is a potential risk?"

More information: Enhancing Nonprofit Operations with AI Chatbots: The Role of Humanization and Emotion (report currently under review).

Scott Schanke et al, Digital Lyrebirds: Experimental Evidence That Voice-Based Deep Fakes Influence Trust, Management Science (2024). DOI: 10.1287/mnsc.2022.03316. On SSRN: papers.ssrn.com/sol3/papers.cf … ?abstract_id=4839606

Citation: How AI-powered chatbots can make or break consumer trust (2025, April 5) retrieved 11 April 2025 from https://phys.org/news/2025-04-ai-powered-chatbots-consumer.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
