
Do you need to humanise your bot?

Blog written by Sri, Anwesha & @Mahak

The era of "you ask and the bot responds" is upon us. Chatbots, or simply bots, are “computer programs designed to simulate conversation with human users, especially over the Internet” (“Chatbot”, 2022).

Over the years, these bots have proved instrumental in a variety of tasks: counselling, customer service, shopping, sharing information, delivering amusement, and now, with ChatGPT and the like, serving as professional aids to artists, engineers, researchers and more.

In fact, bots have become so embedded in our lives that one wonders whether we have fully registered this complementary dependency of humans on (human-like) bots. Pretend you are having a difficult day and you don't even have the energy to pick up the phone and dial someone. You simply say "call Mom" into your phone or smart speaker, and you are on your call! In such instances, voice-activated assistants minimise your effort, manage your time more efficiently and maximise ease of living. Now think of a simple scenario like ordering food online from a food delivery app:

When you order two sundaes and one is unavailable, the bot empathetically enquires, “What would you like to do with the unavailable item(s)?”, and sometimes even suggests useful options. How do you feel when you receive such personalised and timely enquiries or suggestions? When a bot values your time and asks if you are available to chat right now, does that put you a little more at ease and heighten your commitment to the conversation? Imagine receiving a "We hope you have a fantastic day ahead” from a bot after a short chat.



You would normally receive such empathetic and personalised hospitality gestures from human beings at a retail store, a hotel or on a flight. Humans show these socially cordial gestures in everyday interactions as well; it is in our nature to do so, for social relationships are how we belong. Nowadays, however, those human patterns of interaction are extended into bot programs too, and so convincingly that in some instances we don't even realise we are talking to a bot. These human traits that you may discover in a chatbot are what we refer to as the “humanisation of bots”.


By humanization of social robots, we mean the effort to make robots that more closely mimic human appearance and behaviour, including the display of humanlike cognitive and emotional states. (Hashim & Yussof, 2017)

Humanising bots helps enhance customer, user and service experiences. In some cases, it also helps in empathetically persuading customer or user decision-making.

Yes, humanising bots adds value to the service or the business. But do all businesses and services that use a chatbot need to humanise their bot(s)?

Let us find out. Imagine your interaction with a bank's bot. Say you are completing your KYC, or activating or deactivating your FASTag. You choose what you are there for, which triggers a series of queries where you make a choice or share an input, and the interaction flow ends with the completion of that transaction. At the back end, such interaction flows are often rule-based: "if this, then respond with that" rules. If you input anything in an incomprehensible format, the bot is typically programmed to respond with an error message. In contexts where clear-cut questions have clear-cut answers, such rule-based bots reduce human errors and improve the efficiency of the interaction.
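To make the "if this, then respond with that" idea concrete, here is a minimal sketch of such a rule-based flow (purely illustrative; the menu options, wording and function names are our own, not any bank's actual implementation):

```python
# A rule-based bot: every recognised input maps to one scripted response.
RULES = {
    "1": "You chose KYC update. Please upload a photo of your ID proof.",
    "2": "You chose FASTag activation. Please enter your vehicle number.",
    "3": "You chose FASTag deactivation. Please reply YES to confirm.",
}

FALLBACK = "Sorry, I didn't understand that. Please reply 1, 2 or 3."

def respond(user_input: str) -> str:
    """Return the scripted response, or an error message for unrecognised input."""
    return RULES.get(user_input.strip(), FALLBACK)
```

For example, `respond("2")` walks the user into the FASTag activation branch, while any unexpected input gets the error message: exactly the "clear-cut question, clear-cut answer" behaviour described above.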


But then, the goal of any bot programmed by humans is to reduce human errors across conversational contexts, simple to complex, and thereby improve the user or customer journey through a seamless experience. So, beyond rules at the back end, nowadays we also encounter AI-powered bots that can handle more complex interaction flows with minimal errors.



Recently, Insomanywords was engaged to consult on improving engagement with a chatbot. The client was using a WhatsApp-based chatbot to encourage civic participation among youth and ordinary citizens. Civic engagement and participation is a complex problem, but the client's team had done a tremendous job of simplifying that complexity by breaking the problem down into simple, actionable behaviours. Most user interactions with the chatbot therefore involved the user making a choice via tap buttons. Sometimes, based on their choices, users would be asked to complete a task outside the bot and feed the result back in: taking and sharing a picture, sharing a location, or reporting the number of potholes on their street.

To help you imagine how the conversation flows: a question presents choices, and you as the user select your preferred one. The bot then asks you to do something outside the bot and input the result, a picture or a location, say. You provide the required input, and the interaction ends with the bot giving feedback or instructions on next steps. Initially, this flow gave the impression of a rule-based bot, and it was indeed operated as one. However, three big factors differentiated the client's civic bot from a conventional rule-based bot:

  1. Besides the options predefined by the bot, it also had Q&A flows triggered by keyword inputs from the user, a show of basic intelligence built into the bot.

  2. The bot was not programmed to learn from an array of user inputs and improve its responses; that is, machine learning was not built into it. However, the team was able to trigger interaction flows, or provide feedback on a user's input (or the absence of one), based on some basic data analytics at the back end. These analytics-based interaction triggers were fired manually, but at the front end the user could not tell whether a personalised trigger was manual or automatic. So, from the user's point of view, these personalised triggers, however minimal, were another (if feeble) show of independent intelligence.

  3. Finally, using the bot, the client was trying to 'nudge' user decision-making and enable a civic participation habit. The bot was intended to play the role of a mentor, and sometimes a coach, in the user's civic journey. This means the bot needed to speak to the user's civic identity and ongoing actions, communicate in a timely fashion and, most interestingly, sometimes be genuinely conversational.
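The first differentiator, keyword-triggered Q&A layered on top of predefined options, can be sketched like this (a hypothetical illustration; the keywords and answers are ours, not the client's actual content):

```python
# Keyword-triggered Q&A on top of a button-driven flow: if the user's
# free-text message contains a known keyword, answer it directly;
# otherwise fall back to re-showing the predefined options.
FAQ = {
    "pothole": "To report a pothole, tap 'Report issue' and share a photo and your location.",
    "points": "You earn points for every completed civic task. Tap 'My score' to see them.",
}

MENU = "Please choose: 1) Report issue  2) My score  3) Help"

def handle_free_text(message: str) -> str:
    """Answer a known keyword query, or re-show the menu of predefined options."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return MENU  # fall back to the button-driven flow
```

Even this small lookup makes the bot feel less scripted: a user who types a free-form question about potholes gets a relevant answer instead of an error message.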

To put it more succinctly: at the back end, the civic chatbot was not completely rule-based, and at the front end it was more of a conversational bot than a business bot.


Bounded by the limitations of its back end, the bot was nonetheless programmed to accomplish the complex task of building awareness and persuading human decision-making. Imagine yourself responding to calls to action from a non-humanised, robotic instruction bubble, week after week. Do you see yourself responding more than a couple of times unless you are in dire need? So, without question, this civic bot had to be humanised to enable sustained action among its users.

But then, how can a human touch be programmed into a bounded bot like the civic chatbot?

Most chatbot humanisation efforts use a few levers to achieve the result:

  • emotional and empathetic language,

  • non-verbal behaviours (like the typing-dots animation),

  • making the bot reflect a personality through all its touch points, including a visual personality (through an avatar, logo identity, etc.),

  • grounding the bot's personality, identity and responses in moral and ethical values,

  • enabling it to have a few opinions, maybe, and so on.
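For a bounded bot, even the simplest of these levers can be bolted on at the response stage. A minimal sketch, assuming we only want a personalised greeting plus a simulated typing pause (the greeting templates, delay and function name are our own illustrative choices):

```python
import random
import time

GREETINGS = [
    "Hope you're having a great day, {name}!",
    "Good to see you again, {name}.",
]

def humanised_reply(name: str, body: str, typing_seconds: float = 1.5) -> str:
    """Prefix a personalised greeting and pause briefly before replying,
    standing in for the typing-dots animation a messaging UI would show."""
    time.sleep(typing_seconds)
    greeting = random.choice(GREETINGS).format(name=name)
    return f"{greeting}\n{body}"
```

The scripted response body stays exactly as the back-end team wrote it; the wrapper only adds the human-feeling frame around it.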

With the civic chatbot, however, the more sophisticated levers were not applicable. After all, the bot's responses are ultimately authored by humans in the conversation flow at the back end; the civic bot is not designed to learn and improve its language, emotion or empathy by itself. So we anchored on a simpler framework:



Visual & Identity cues included:

  • Recognisable, relatable and recallable display picture

  • Bot name and description labelled so that users can orient their relationship with the bot. Think of the name of any person you know: immediately, your brain associates the name with the relationship you share with them. A relationship with a bot is often nurtured over a long period of time; until then, what is the bot to you? Its name answers that question.

Conversational cues included:

  • Personalised greetings

  • Empathetic responses and warm sign-offs

Using these simple cues, and more, we humanised interactions through the civic bot and helped users nurture a relationship with it. The humanisation interventions greatly increased interaction rates and the bot's recall value.


Drawing from this experience, one wonders: can bots ever be non-humanised? Sure, there are instances where I would rather interact with a bot than a human, like when I am ordering food or simply want to complete a transaction seamlessly, without convoluted processes and errors. However, those little touches of bot identity, personalised greetings, empathetic responses and ciaos do elevate one's overall interaction experience!


It was hard, but we found our verdict: yes, rule-based business bots need not be embedded with humanising interventions, though a sprinkle of them, in the form of personalised greetings, a catchy identity and empathetic goodbyes, would add value. On the other hand, any bot on the conversational-intelligent spectrum requires a shower of humanising interventions to sustain user engagement and elevate the experience.

What would be your verdict? Do share in the comments.
