Even Bots Need to Build Character

Conversation designer Mindy Gold once prototyped a fitness bot with no stated name or gender. Its function was to keep female exercisers motivated with daily prompts.

While testing it with a potential customer, Mindy realized the woman was referring to the bot as “him.”

“He just told me I had to burn 50 more calories,” the woman told Mindy.

“How do you know it’s a male?” Mindy asked.

“Because now I’m mad at it.”

This phenomenon of anthropomorphizing the objects around us isn't new. We name things like our cars and computers, and we assign gender to objects that are otherwise genderless. We consciously and subconsciously attribute character, intent, and warmth (or the lack of it) to our interactions with inanimate objects, from the ATM to our computers.

We do this because it's how we build connections, and connections create trust. We lovingly name our 14-year-old car because when it's named something like Franklin, we trust it won't betray us by breaking down on the highway. "C'mon, Franklin, don't let me down" is natural, whereas "C'mon, 2001 Ford Taurus" doesn't quite convey the same connection.

In the fitness example, the customer thought the bot was male because she was mad it kept telling her to keep going when she was tired. The emotion of the interaction is what led her to give it a character. Today, we use emoji, emoticons, and GIFs to convey tone amid the black-and-white nature of plain text.

It's exactly this tendency to anthropomorphize that piques the interest of designers in the AI space, as well as researchers, product managers, and technologists building inclusive machine learning systems. Because it raises this question: How might we achieve a true connection (trust) between our AI and the customer?

One of the most powerful ways to establish trust is to consistently demonstrate the integrity of your character in every interaction you have. That’s true in real life, and it’s true in AI.

This is especially crucial now, because we’re seeing a rising flood of AI-powered objects gathering data and responding to us in nearly every aspect of our daily lives — from our kitchen counter through our commute to work and beyond. It’s worth noting that this is not to everyone’s delight. There are aspects of AI that freak people out, but we can work toward a better relationship through this idea of building trust.

So those of us responsible for bringing AI-powered experiences to life have a very real challenge: To enable trust, we need to decide which character traits to explicitly demonstrate to our customers and which to leave open for interpretation and imagination.

That's why designing the character of our AI is such a priority. When we introduced Eno at SXSW in March, we became the first U.S. bank to launch a gender-neutral SMS chatbot who understands natural language.

(See? I even just wrote “who” instead of “that,” which is a signal I consider Eno more humanlike than not, even though when asked, Eno straight-up says, “I’m a bot and proud.”)

[Image: an iPhone showing an Eno text-message conversation]

Designing the character of our AI gives us a foundation for building trust. And we don't necessarily have to reinvent the wheel here. Consider how character-driven industries already earn this kind of loyalty through their work. Video games, Hollywood, and even streaming services are all creating original works, and all of it starts with a story.

Those stories are told by characters who themselves have backstories. Their personalities are defined so tightly in advance that their sole job is to establish trust with the audience and keep it until the end by never "breaking character."

AI teams can commit to the same outcome by purposefully defining how we’re going to show up to our customers, consistently in every interaction, so we’re able to maintain the integrity of the character and brand — a foundation that’s crucial to enabling trust over time and as AI channels continue to proliferate.

Doing this requires you to assemble a mix of minds and consider several aspects of character development (one rough way a team might record the answers is sketched after this list):

  • What is your bot’s name, and why?
  • Does it have a gender, and what principles guided your choice?
  • Where did the bot come from, and where does it want to go?
  • What are its functional limitations, and how will it respond when it reaches them?
  • What are its emotional boundaries, including its sense of humor and insecurities?
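
One lightweight way a team might record those answers is a shared persona definition that every channel reads from. The sketch below is a hypothetical illustration in Python, not a description of how Eno is actually built; the class, field names, and example responses are all invented, but it shows how the decisions above can live in one place so the bot stays in character even when it hits a functional limit.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a single source of truth for character decisions,
# so every channel answers consistently. Not how Eno is implemented.

@dataclass
class BotPersona:
    name: str
    pronouns: str                      # e.g., "it" for a gender-neutral bot
    backstory: str                     # where the bot comes from
    identity_response: str             # what it says when asked what it is
    out_of_scope_response: str         # how it responds at its functional limits
    topics_to_deflect: list[str] = field(default_factory=list)

    def respond_to_identity_question(self) -> str:
        return self.identity_response

    def respond_out_of_scope(self, topic: str) -> str:
        # Stay in character even while declining: acknowledge the limit
        # without pretending to be human or breaking tone.
        return f"{self.out_of_scope_response} (I can't help with {topic} yet.)"


# Example values are invented for illustration only.
persona = BotPersona(
    name="Demo",
    pronouns="it",
    backstory="Built by a banking team to answer account questions over SMS.",
    identity_response="I'm a bot and proud.",  # echoing Eno's own answer above
    out_of_scope_response="That's outside what I can do right now.",
    topics_to_deflect=["legal advice", "medical advice"],
)

print(persona.respond_to_identity_question())
print(persona.respond_out_of_scope("legal advice"))
```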

To think through these aspects of character (and more), assemble a brain trust of folks who all think differently, come from different backgrounds, and are willing to get into some deeply philosophical and ethical debates as a team. This is just as vital to creating an inclusive AI (if that's a goal for you like it is for us) as building its foundational capabilities.

Getting into debates about what kind of character you want to (or need to) design to establish trust with customers is an incredible opportunity. So don’t short-change yourself by doing it alone or surrounding yourself with like minds. Capitalize on it by being intentional about who you’re designing with, just as much as what you’re designing and why.

After all, if people are going to naturally — as human beings — assign character to objects, we have a responsibility to design for that. Being intentional and consistent with our decisions will enable those humans to trust our character as it’s manifested in the machine.

Learn more about Eno, and sign up here.

This article originally appeared on VentureBeat


Steph Hay, Vice President & Head of Conversation Design, Capital One

Product design leader versed in AI and machine learning. Pioneered content-first design and Lean Content testing, two low-risk methods for proving traction before building a product. Co-founded FastCustomer and Work Design Magazine. Made 1nicething.com.
