Illuminating the Value — and Earning Users’ Trust — in AI

Trust-Enhancing Strategies for AI

Have the personalized recommendations on a streaming service ever let you down? A miss like that can leave a bad taste in your mouth and make you wonder whether the hype over automated technology is all it’s cracked up to be. But it’s also easy to forget that before advances like personalized streaming services, we spent what seemed like hours in movie store aisles…and still watched movies we didn’t enjoy.

While automated, personalized technologies, driven largely by AI algorithms, have delivered undeniable customer benefits across industries, a gulf understandably remains between the power of AI and consumers’ trust in it. AI’s formidable advances have surpassed many human forecasters’ expectations in prediction-heavy games like chess and Go. Yet while most people trust game-playing AI, they struggle to follow these “AI masters” beyond the board. The potential for large losses also keeps users from trusting algorithms: in the case of self-driving cars, high-profile news outlets report on fiery crashes, and people may fear autonomous vehicles without realizing that the cars humans drive themselves could have a significantly higher accident rate.

In short, for consumers to recognize the full value of AI, we need to address both algorithmic aversion and the fear of losses under uncertainty. To earn trust and develop lifelong, loyal relationships with their customers, the onus is on companies investing in AI to educate customers on the benefits of the services they offer, to be transparent about why customers are receiving the recommendations they’re served, and to give customers feedback and engagement mechanisms for conveying what’s working for them and what isn’t.

Recent research from Bar-Ilan University and Carnegie Mellon University points to three behavioral patterns that could help users get a better sense of AI’s value and benefits: transparency, reliability and immediacy. Transparency is necessary but not sufficient; immediacy and reliability are also key to mitigating aversion and addressing perceptions of loss and fairness. Reliability demonstrates AI’s overall benefits to users even though the systems are not perfect. Immediacy allows teams developing AI solutions to proactively protect users and meet their needs in the moment, leading to customized solutions that can help mitigate the effects of bias in the data or limitations in the algorithm.

Research-backed insights offer strategies for success

The research suggests to me three design steps for incorporating these principles into a path toward greater trust and human-centricity in AI.

Convey: Researchers found that transparency is critical to building customer trust in AI, but there is a balance to strike in how much information is useful. For example, releasing some model performance information improved trust, but too much information overwhelmed the user.

Developing tools for user transparency can mitigate users’ fear of the unknown and offer a sense of understanding and agency over the decisions they’re being served. For example, a streaming music service could make clear that it’s recommending a custom playlist because the user has listened heavily to a certain genre of artists over the past month.
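
To make this concrete, here is a minimal sketch in Python of how such an explanation could be generated. The function name, data shapes, and message wording are illustrative assumptions, not a description of any real service’s implementation:

```python
from collections import Counter

def explain_playlist(listening_history, playlist_genre):
    """Return a user-facing reason for recommending a playlist.

    listening_history: list of (artist, genre) plays from the past month.
    playlist_genre: the genre the recommended playlist is built around.
    """
    genre_counts = Counter(genre for _, genre in listening_history)
    total = sum(genre_counts.values())
    plays = genre_counts.get(playlist_genre, 0)
    if total == 0 or plays == 0:
        # No supporting history: fall back to a generic, honest message
        # rather than an unsupported claim.
        return "Recommended because it's popular with listeners like you."
    return (f"Recommended because {plays / total:.0%} of your listening "
            f"this month was {playlist_genre}.")

# A user who mostly listened to jazz last month:
history = [("Coltrane", "jazz"), ("Monk", "jazz"), ("Bowie", "rock")]
print(explain_playlist(history, "jazz"))
# -> Recommended because 67% of your listening this month was jazz.
```

The point is less the specific wording than the pattern: the explanation is derived directly from the same signal that drove the recommendation, so the user sees a true, verifiable reason rather than a vague reassurance.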

Customize: Proactivity and customization are examples of immediacy behaviors that can further increase trust in AI. The Bar-Ilan and Carnegie Mellon researchers compared two forms of robotic behavior to study users’ cognitive and emotional trust: a reactive robot that provided help only after the participant failed the task, and a proactive one that could anticipate participants’ needs and offer to help before the task was completed. Survey results showed that the proactive robot earned better ratings from participants for performance, involvement, and recognition, even controlling for each robot’s technical accuracy in performing the task.

What might this mean for companies offering automated products and services? Every user brings their own unique context, goals, and needs to their interaction with a company’s product or service. AI systems should be built with the infrastructure and adaptability to shift and respond based on a user’s real-time engagement and evolving needs. This enables customized experiences to remain dynamic, responsive, and adaptive to user intent — and it’s where consideration and co-creation come in.

Consider and Co-Create: Reliability ensures the accuracy and stability of an AI system, and can help lead to greater understanding and trust of that system. Low reliability is tied to decreased trust, so it’s important to ensure that the autonomous services created for customers work consistently and accurately. And offering personalized experiences based on a user’s interactions and preferences is table stakes for most companies leveraging AI today.

But how can companies begin to bring users into that process to fine-tune, refine, and co-create an experience that will be most meaningful to them? One simple example is a thumbs up or thumbs down feature on a mobile app — does the customer like this recommendation or not? That sentiment can then be used to retrain the algorithm and improve the user experience in future interactions, ideally providing transparency to the feedback loop that exists (“you are seeing this recommendation because you gave a thumbs up to similar recommendations in the past”).
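
As a rough illustration, the sketch below shows how those thumbs up/down signals might be captured, folded into labels for a future retraining pass, and echoed back to the user as an explanation. The class, its fields, and the aggregation logic are assumptions for illustration, not a description of any production system:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Hypothetical sketch of a thumbs up/down feedback loop: votes become
    labels for the next retraining pass and also power a transparent
    explanation back to the user."""
    # (user_id, item_id) -> list of +1 / -1 votes
    votes: dict = field(default_factory=dict)

    def record(self, user_id, item_id, thumbs_up):
        # Store each vote so repeated feedback on the same item accumulates.
        key = (user_id, item_id)
        self.votes.setdefault(key, []).append(1 if thumbs_up else -1)

    def training_labels(self):
        # Collapse raw votes into binary labels for retraining the recommender.
        return [(key, 1 if sum(v) > 0 else 0) for key, v in self.votes.items()]

    def explanation(self, user_id, similar_item_id):
        # Close the loop: tell users why they're seeing a recommendation.
        if sum(self.votes.get((user_id, similar_item_id), [])) > 0:
            return ("You are seeing this recommendation because you gave a "
                    "thumbs up to similar recommendations in the past.")
        return "Recommended based on your overall activity."

loop = FeedbackLoop()
loop.record("user1", "playlist42", thumbs_up=True)
print(loop.explanation("user1", "playlist42"))
```

One design choice worth noting: keeping the raw votes rather than only an aggregate preserves the option to weight recent feedback more heavily at retraining time, which keeps the experience responsive as a user’s tastes change.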

Collaboration is key to making AI successful

Ultimately, everyone involved in the creation of AI applications should play a part in bringing these trust-enhancing strategies to life. This includes the cross-functional group of data scientists, engineers, product managers, designers, business analysts, model risk experts, and others involved in the development and deployment of AI models.

As customer expectations evolve and technology becomes more advanced, a research-based, human-centered approach to developing AI systems and experiences can help companies envision how their customers will interact with them, and hopefully, how they will see value and meaning in the benefits they provide.

A special thanks to Mauricio Medeiros Junior, Principal Associate, Quantitative Analysis, and Shan Shi, Principal Associate, Data Science for their contributions to this article.


Dov Haselkorn, EVP, Chief Model Risk Officer

Dov Haselkorn holds dual roles as Chief Operational Risk Officer and Chief Model Risk Officer. He leads his team to manage operational and model risk, driving material impact for Capital One. He provides balanced and effective counsel on the level of risk to undertake and minimize, and he is sought out for risk insight and judgment. In just two years, Dov has revamped Model Policy to incorporate Machine Learning (ML)-specific risks, optimized Board and Committee reporting, and delivered outsized impact and strategic influence by building scalable tools used across the Enterprise. In addition, Dov’s focus on talent has resulted in high recruitment, retention, and engagement across both OMRM and the Data Science & Quantitative Analysis (DSQ) job family. He leverages direct and decisive communication that resonates with key stakeholders and his associates, and his leadership creates a strong sense of connection and culture of belonging with his teams.
