DCU Article: Chatbot Design: 7 Tips for UX Teams
Client:
DCU (Group Project)
My Role:
Researcher
Project:
Research the topic “Designing for effective communication by chatbots – tips for UX Designers” and produce an article.
Dates:
February 2020 – March 2020
CHATBOT DESIGN: 7 TIPS FOR UX TEAMS
By Patrice Harrington, Aideen O’Neill, Eamon Whyte, Alan Conlon and Cecilia Maizares. Illustrations by Aideen O’Neill.
If you’ve been online clothes shopping, browsing used car websites or making a banking enquiry – heck, if you’ve been using the internet for any type of business transaction whatsoever – chances are you’ve had a conversation with a chatbot.
These computer programmes are designed to engage human users, collect their valuable data and turn them into customers.
Such is the explosion in their popularity that forecasters predict the global chatbot market will be worth some $6 billion1 by 2023, with retail chatbot interactions alone expected to reach 22 billion2 in the same period.
This represents a golden opportunity for UX designers to “own” the perfection of bot technology, enhancing the customer experience and driving sales in an emerging market.
In recent weeks chatbots have even been used to screen people for coronavirus3 and to offer coronavirus updates to businesses4. This is an ambitious, emerging technology with powerful potential.
So, what are the dos and don’ts of designing for effective communication by chatbots? What are the implications of outsourcing customer relations to an algorithm? We give UX designers our top 7 tips in this brave new chatbot world.
Tip 1. Don’t pretend your bot is a human
Ever get the creepy feeling that the human you think you’re interacting with online is actually a robot? This is the uncanny valley effect and consumers really do not like it. A 2019 study which measured the heart rate, sweat-gland activity and frowning of 31 participants found they experienced a stronger uncanny valley effect with the more human-like chatbot – a complex, animated avatar – than with the simpler text chatbot5.
This study also found that the more “weird” the avatar chatbot seemed to a participant, the less competent it was judged to be. So chatbots “should not be designed to pretend to be human — at least not if they do so ineptly” [Ciechanowski, 2019].
Granted, another study found that giving a chatbot human-like traits increases trust and familiarity – but only as long as you make it clear that the chatbot is not human, for example by using an illustration of a person instead of a photo of an actual person6. To avoid the uncanny valley effect, tech giant Salesforce recommends that businesses “identify the bot as a bot, not a human, and let your customers know how the bot can help”7.
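What might that disclosure look like in practice? Here is a minimal sketch in Python – the bot name and capabilities are invented for illustration, and this is not any vendor’s actual API – of an opening message that identifies the bot as a bot and says up front what it can do:

```python
# A minimal sketch (invented names, not a real vendor API) of a greeting
# that follows the "identify the bot as a bot" advice: disclose that the
# assistant is automated and list its capabilities up front.
BOT_NAME = "ShopBot"  # hypothetical bot name
CAPABILITIES = ["track an order", "start a return", "find store hours"]

def greeting() -> str:
    """Build a welcome message that discloses the bot's nature."""
    options = ", ".join(CAPABILITIES[:-1]) + f" or {CAPABILITIES[-1]}"
    return (f"Hi, I'm {BOT_NAME}, an automated assistant - not a human. "
            f"I can help you {options}. What would you like to do?")

print(greeting())
```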
Does this mean less is more when designing your chatbot? Not if you want to create something to compete with Microsoft’s ‘empathetic social chatbot’ XiaoIce8. Targeting the Chinese community on microblogging site Weibo, XiaoIce has 660 million registered users9, at least some of whom are men who claim to be madly in love with her10. Romantic declarations aside, a success metric for social chatbots is conversation turns per session [CPS] [Hornigold, 2019]. While Siri and Google Home have around 3 apiece, XiaoIce boasts 23 back-and-forths across all users [Hornigold, 2019]. Her researchers claim that this means XiaoIce is more engaging to talk to than the average human [Hornigold, 2019]. So, that’s the competition. No pressure.
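For the curious, CPS is straightforward to estimate from chat logs. A minimal sketch, counting one user message plus one bot reply as a single turn – the session log here is invented for illustration:

```python
# A minimal sketch of the conversation-turns-per-session (CPS) metric:
# the average number of back-and-forth turns per session.
def average_cps(sessions: list[list[str]]) -> float:
    """Each session is an ordered list of messages; a 'turn' is one
    user message followed by one bot reply (two messages)."""
    if not sessions:
        return 0.0
    turns = [len(messages) // 2 for messages in sessions]
    return sum(turns) / len(sessions)

logs = [
    ["user: hi", "bot: hello!", "user: tell me a joke", "bot: ..."],
    ["user: what's the weather?", "bot: sunny today."],
]
print(average_cps(logs))  # -> 1.5 turns per session on average
```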
Tip 2. Sync chatbot “personality” with brand
While your chatbot shouldn’t pretend to be human, it should still have a “personality” to facilitate engagement with users. The personality and tone of your bot – whether casual and friendly or more formal and reserved – will depend on your business. For example, most of us probably appreciate some friendly encouragement in the kitchen, and in a comparison test between two different chatbot personalities, the majority chose the friendlier bot for a meal planning app11. Not that the homely personality is the favourite every time. When two virtual interviewers were created for a recruiting event, users were found to confide more in the chatbot with a serious, reserved personality than in the warm, cheery one12.
A chatbot is a representative of a brand – so its tone should “match” that brand. It should also align with customer expectations of the brand “voice”. Chatbots aren’t human, but they can still influence the way consumers feel about a company, the same way any member of staff can shape a consumer’s impression of a brand [Araujo, 2018]. First chatbot impressions last.
Tip 3. Be engaging
XiaoIce is not the only bot designed to be as engaging as possible. The “primary goal” of one of the earliest chatbots – Michael Mauldin’s ChatterBot – was to “build a conversational agent that would answer questions”13. That primary goal hasn’t changed much. Chatbot technology uses Natural Language Processing [NLP] to understand user questions, and bots are programmed to learn from every interaction, eventually ‘thinking’ and responding like a human might. But most question-answering systems today can only answer relatively simple queries. These questions should be “no more than one to three sentences long, have a lucid expression of language, should be answerable by searching for text mentions or by formulating structured queries, and most importantly, involve minimal reasoning”14.
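Under the hood, even a toy question-answering bot follows the same shape: map the user’s words onto a known intent, then return that intent’s answer. A minimal sketch in Python – the intents, keywords and answers are invented, and real systems use far richer NLP than keyword overlap:

```python
import re

# A minimal sketch of keyword-based question answering - a toy stand-in
# for the NLP pipelines real bots use.
INTENTS = {
    "opening_hours": {
        "keywords": {"open", "hours", "close", "closing"},
        "answer": "We're open 9am-6pm, Monday to Saturday.",
    },
    "returns": {
        "keywords": {"return", "refund", "exchange"},
        "answer": "You can return any item within 30 days.",
    },
}

def answer(question: str) -> str:
    # Tokenise crudely: lowercase words only, punctuation stripped.
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Pick the intent whose keywords overlap most with the question.
    best = max(INTENTS.values(), key=lambda i: len(i["keywords"] & words))
    if not best["keywords"] & words:
        return "Sorry - try asking me about opening hours or returns."
    return best["answer"]

print(answer("What time do you close?"))  # -> the opening-hours answer
```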
But there is another user interface in town. Conversational interfaces [CI] are giving NLP a run for its money15. While NLP focuses on understanding what users say, CI interacts through chat, voice, buttons, images, menus and videos [Pan, 2018].
Which method is better? Whichever works best for you – and your customer.
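To make the contrast concrete, here is a minimal sketch of what a CI-style reply might look like: text plus tappable buttons, so the user never has to type free text at all. The payload shape and field names are invented – every chat platform defines its own:

```python
# A minimal sketch of a conversational-interface (CI) reply mixing text
# with buttons. The payload structure is invented for illustration.
reply = {
    "text": "What would you like to do next?",
    "buttons": [
        {"label": "Browse new arrivals", "action": "show_new_arrivals"},
        {"label": "Track my order", "action": "track_order"},
        {"label": "Talk to a person", "action": "handover_to_agent"},
    ],
}

def render(reply: dict) -> None:
    """Print the reply the way a chat widget might lay it out."""
    print(reply["text"])
    for i, button in enumerate(reply["buttons"], start=1):
        print(f"  [{i}] {button['label']}")

render(reply)
```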
Tip 4. Gather the best data
How does your chatbot predict your questions and reply to them? By feeding on your data, that’s how. So when designing a chatbot, consider what data you want to collect from users and how you will collect it16.
Chatbots collect data in the background of user activity or when they ask users questions directly17.
But how to get the best data? Know the questions to ask18 and steer the chatbot’s conversation accordingly19. Data gathered by chatbots must be obtained legally, fairly and ethically, protecting user privacy and security at all times20. This data has many functions and can be used to answer user questions, provide customer service or information, push relevant information or product sales and train ML [Machine Learning] chatbots21. Best practice is to explain how the collected data is used and to allow the user to opt out of certain data sharing. Warning: even if a user opts in to sharing their data by agreeing to terms and conditions, presenting that data in a new, unexpected context can create surprise and mistrust, and can give the impression that their data isn’t private. By explaining where the data was sourced when notifying the user, and by allowing the user to manage the type of data they are willing to share, you keep the user feeling in control of their data and maintain trust [pair.withgoogle.com, 2020].
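In code, that opt-out can be as simple as a per-user preferences record the bot consults before using or sharing anything. A minimal sketch in the spirit of the People + AI guidance cited above – the field names and sharing categories are invented for illustration:

```python
from dataclasses import dataclass, field

# A minimal sketch of per-user data-sharing preferences with an opt-out.
# Field names and categories are hypothetical.
@dataclass
class DataPreferences:
    share_chat_history: bool = True     # e.g. used to train the bot
    share_profile: bool = False         # e.g. used to personalise replies
    show_data_sources: bool = True      # explain where data came from

@dataclass
class User:
    name: str
    prefs: DataPreferences = field(default_factory=DataPreferences)

    def opt_out(self, category: str) -> None:
        """Let the user switch off one category of data sharing."""
        setattr(self.prefs, category, False)

user = User("Aoife")
user.opt_out("share_chat_history")
print(user.prefs)  # share_chat_history is now False; profile stays off by default
```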
Tip 5. Let the conversation flow
How do you “script” your chatbot’s potential Q&As? Using a conversation diagram can help you map out everything your chatbot might say in response to its most commonly asked questions22. However, you can’t script for every possible permutation of what a user may ask, especially if they are ‘playing’ with a new piece of tech, testing its limits by asking nonsensical questions. If users are messing with it, your bot should communicate that it does not understand – then prompt the user with alternative ways it can help [pair.withgoogle.com, 2020].
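That fallback pattern is easy to prototype: when nothing matches, admit it and re-prompt with concrete options rather than bluffing. A minimal sketch, with an invented toy intent matcher standing in for the real thing:

```python
from typing import Optional

# A minimal sketch of the fallback pattern: admit failure and offer
# concrete alternatives. Intents and responses are invented.
FALLBACK = ("Sorry, I'm not sure I understood that. I can check opening "
            "hours, track an order or start a return - which would you like?")

RESPONSES = {"hours": "We're open 9am-6pm.", "track": "Your order ships Friday."}

def match_intent(text: str) -> Optional[str]:
    """Toy matcher: returns a known intent key, or None on no match."""
    return next((k for k in RESPONSES if k in text.lower()), None)

def handle(user_input: str) -> str:
    intent = match_intent(user_input)
    # Re-prompt with options instead of pretending to understand.
    return RESPONSES[intent] if intent else FALLBACK

print(handle("do you stock blue widgets?"))  # -> the fallback prompt
```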
Your bot can also put the ball back in the user’s court, asking for feedback or clarification. This way you can gather data about a user-need that the system may not be serving [pair.withgoogle.com, 2020]. If your bot cannot ‘hint’ the user back to its arena of knowledge it may end up repeating the same statements over and over verbatim23. This is not how most (enjoyable) human conversations go and the repetition will increase the uncanny valley effect. The repetition will increase the uncanny valley effect. The repetition will increase the uncanny valley effect.
Giving your chatbot at least a few variations of its statements and queries will help make the conversation feel more natural. Also, make sure the content your bot is dealing with is ordered in such a way that it can be accessed using different phrases24.
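Variation can be as simple as keeping several phrasings per prompt and never repeating the one just used. A minimal sketch – the phrasings are invented:

```python
import random

# A minimal sketch of response variation: several phrasings per prompt,
# never repeating the one just used.
VARIANTS = [
    "Anything else I can help with?",
    "Is there something else you'd like to do?",
    "What else can I do for you?",
]
_last = None  # remembers the previous phrasing

def next_prompt() -> str:
    global _last
    _last = random.choice([v for v in VARIANTS if v != _last])
    return _last

for _ in range(3):
    print(next_prompt())  # never the same line twice in a row
```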
For example, if it is a chatbot for movie recommendations, some people may search by genre, others by lead actor and yet more by director. The bot should anticipate a user searching in any of these ways and display its results. But it should also present information so the user can change the way they are searching. In other words, the bot should be able to handle these different queries without requiring the user to cancel out and start a fresh chat. The conversational flow should then feel seamless, providing a more natural back-and-forth and a better user experience25.
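One way to support that seamless refinement is to treat each user message as new criteria merged into a running query, rather than a fresh search. A minimal sketch with an invented three-film catalogue:

```python
# A minimal sketch of the movie example: accept genre, actor or director
# in any order and refine one running query instead of restarting.
# The tiny catalogue is invented for illustration.
CATALOGUE = [
    {"title": "Heat", "genre": "thriller", "director": "Michael Mann", "actor": "Al Pacino"},
    {"title": "Serpico", "genre": "drama", "director": "Sidney Lumet", "actor": "Al Pacino"},
    {"title": "Collateral", "genre": "thriller", "director": "Michael Mann", "actor": "Tom Cruise"},
]

def refine(query: dict, **criteria) -> dict:
    """Merge new criteria into the running query (no restart needed)."""
    return {**query, **criteria}

def search(query: dict) -> list[str]:
    return [m["title"] for m in CATALOGUE
            if all(m.get(k) == v for k, v in query.items())]

query = refine({}, actor="Al Pacino")
print(search(query))                     # ['Heat', 'Serpico']
query = refine(query, genre="thriller")  # user narrows without starting over
print(search(query))                     # ['Heat']
```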
Tip 6. Don’t be sexist
In general, chatbot gender has no effect on users’ overall satisfaction or on gender-stereotyped perceptions26 – but only in the fairly gender-neutral surroundings of, say, an online bank. Users are more likely to apply gender stereotypes when a chatbot operates within a gender-stereotypical subject domain, such as mechanics, and when the chatbot’s gender does not conform to those stereotypes [McDonnell & Baxter, 2019].
So if you are in the auto repair business you will need to be very ‘woke’ to make your chatbot female when you suspect a male bot would elicit more customer trust. What should you do? Exploit the situation, make a big, butch bot called Brad and perpetuate the bias and stereotyping? Or become part of a vanguard breaking the gender-bias mould? That’s entirely up to you – and your client.
Just remember UNESCO have published guidelines around avoiding gender bias in your chatbot, the first of which is to end the practice of making digital assistants female by default27. The majority of AI assistants – like Google Home, Alexa, Siri and Cortana – have been designed with a female voice and personality28. UNESCO wants this practice to stop because, among other reasons, it “makes women the face of glitches and errors that result from the limitations of hardware and software designed by men” [unesco.org, 2019].
According to Michelle Habell-Pallan, Gender Studies professor at the University of Washington, the cultural projection of the woman’s role as “assistant” is reflected in the technological field29, where the proportion of female employees remains a risible 18 per cent30.
Tip 7. Be Ethical
It sounds like something straight out of Black Mirror – because it is. The episode ‘Be Right Back’ explored the morally dubious idea of technology facilitating “conversations” with our dead loved ones.
Guess what? It’s already here.
In 2016, Russian AI entrepreneur Eugenia Kuyda built a chatbot that she trained on over 8,000 lines of text messages and social media posts from her deceased best friend Roman Mazurenko31. Using predictive analytics, her bot mines Roman’s extensive dataset to predict what he might “say” in a chatbot conversation. Loved ones who have conversed with the Roman bot found it “eerily convincing” [Elder, 2020].
No doubt bots like this could represent a lucrative market, but does that make designing them right?
It is a moral minefield and UX principles must evolve to address such potential dilemmas.
Other well-meaning automated bots designed to help people with mental illness have raised troublesome questions.
They may be perceived as less stigmatising than formal health care support32 but their “limited capacity to recreate human interactions and to offer tailored treatment, combined with the current lack of access to mental health services in real time, raises the question of whether chatbots could do harm to users. These harms would go largely unseen unless specifically tracked” [Kretzschmar, 2019]. At least there are some ethical guidelines here: health chatbots should respect users’ privacy, be evidence-based and ensure users’ safety. They should also be as transparent as possible about what they can offer [Kretzschmar, 2019].
No wonder Tesla and SpaceX boss Elon Musk has described AI as more dangerous than nuclear warheads and called for a regulatory body to oversee superintelligence33.
Algorithms are increasingly trusted with operations, decisions and choices34 despite being fundamentally biased35.
An AI tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases36. Yes, machines are, sadly, not neutral, but riddled with the human biases carried in human data37.
“A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it,” says Joanna Bryson, a computer scientist at the University of Bath, United Kingdom [Devlin, 2017]. Beware of this ethical minefield, UX designers.
PS Tip 8. Try and save the world!
We mentioned coronavirus at the top – and with the number of cases rising, UX design might have a role to play. Certainly there is room for innovation. The Wall Street Journal reported that coronavirus has revealed the limits of AI health tools, with makers of ‘symptom checker’ chatbots struggling to update algorithms to respond to the epidemic38. Several chatbots were deployed to fight disinformation39. An interactive agent that calms users in a crisis by providing rational, helpful information? There’s a fine example of effective communication by chatbots.
References
[1] M2 Presswire, (2018), “Chatbot Market Worth US $6 Billion by 2023 at 37% CAGR | Chatbots Market Increasing Penetration of Websites and Mobile Applications.” (Accessed: February 22, 2020.)
[2] Juniper Research, (2019), “Chatbot interactions in retail to reach 22 billion by 2023 as AI offers compelling new engagement solutions: Successful Interactions to Grow Eightfold By 2023”. https://www.juniperresearch.com/press/press-releases/chatbot-interactions-retail-reach-22-billion-2023 (Accessed: February 21, 2020.)
[3] Robbins, R. & Brodwin, E., (2020), Stat, “Chatbots are screening for the new coronavirus – and turning up cases of the flu”. https://www.statnews.com/2020/02/05/chatbots-screening-for-new-coronavirus-are-turning-up-flu/ (Accessed: February 21, 2020.)
[4] Zhou, T., (2020), Straits Times, “Coronavirus: Govt launches chatbot for firms to find out about available help”. https://www.straitstimes.com/singapore/coronavirus-govt-launches-chatbot-for-firms-to-find-out-about-available-help (Accessed: March 3, 2020.)
[5] Ciechanowski, L. et al., (2019), “In the shades of the uncanny valley: An experimental study of human–chatbot interaction”, Future Generation Computer Systems, Volume 92, Pages 539-548.
[6] Smestad, T. L. & Volden, F., (2019), “Chatbot Personalities Matters”, Internet Science, Lecture Notes in Computer Science, 170–181. doi: 10.1007/978-3-030-17705-8_15
[7] Salesforce Trailblazer Community, “Create a Basic Bot”. https://help.salesforce.com/articleView?id=bots_service_new_bot.htm&type=5. (Accessed: February 18, 2020.)
[8] Zhou, L. et al., (2020), “The Design and Implementation of XiaoIce, an Empathetic Social Chatbot”, Computational Linguistics.
[9] Hornigold, T., (2019), Singularityhub.com, “This Chatbot has over 660m users – and it wants to be your friend.” https://singularityhub.com/2019/07/14/this-chatbot-has-over-660-million-users-and-it-wants-to-be-their-best-friend/ (Accessed: February 03, 2020.)
[10] Dunn, M., (2015), news.com.au, “Chinese men are falling in love with a chatbot called XiaoIce”. https://www.news.com.au/technology/home-entertainment/computers/chinese-men-falling-in-love-with-a-chatbot-called-xiaoice/news-story/5062e5f965afbc5b9669199927ec4a60 (Accessed: February 03, 2020.)
[11] Li, J. et al., (2017), “Confiding in and Listening to Virtual Agents”, Proceedings of the 22nd International Conference on Intelligent User Interfaces – IUI ’17. doi: 10.1145/3025171.3025206
[12] Araujo, T., (2018), “Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions”, Computers in Human Behavior, 85, 183–189. doi: 10.1016/j.chb.2018.03.051
[13] Mauldin, M. L., (1994), “ChatterBots, TinyMuds, and the Turing Test: Entering the Loebner Prize Competition”, AAAI, p. 6.
[14] Anjum, B., (2019), “A conversation with Danish Contractor: advancing chatbots and intelligent question-answering systems to support complex human querying”, Ubiquity, Article 2, 5 pages. doi: https://doi-org.ucd.idm.oclc.org/10.1145/3363787
[15] Pan, J., (2018), “NLP vs CI: who is the king of chatbot?”, UX Planet. https://uxplanet.org/nlp-vs-ci-who-is-the-king-of-chatbot-2f9d2e09f085 (Accessed: March 3, 2020.)
[16] Newlands, M., (2017), “10 Tips for Creating a Great Chatbot”, Entrepreneur. https://www.entrepreneur.com/article/293320 (Accessed: February 22, 2020.)
[17] Pair.withgoogle.com, (2020), People + AI Guidebook. https://pair.withgoogle.com/chapter/data-collection/ (Accessed: February 21, 2020.)
[18] Kojouharov, S., (2019), “7 Tips on Building Chatbots for Your Brand”. https://chatbotslife.com/7-tips-on-building-chatbots-for-your-brand-f2817bc33f09 (Accessed: February 27, 2020.)
[19] Newlands, M., (2017), “10 Tips for Creating a Great Chatbot”, Entrepreneur. https://www.entrepreneur.com/article/293320 (Accessed: February 22, 2020.)
[20] Ibm.com, (2020), IBM Design for AI Basics. https://www.ibm.com/design/ai/basics/ai (Accessed: February 27, 2020.)
[21] Pair.withgoogle.com, (2020), People + AI Guidebook. https://pair.withgoogle.com/chapter/explainability-trust/ (Accessed: February 21, 2020.)
[22] Phillips, C., (2018), “3 Steps to Designing Chatbot Conversations like a Professional”. https://chatbotslife.com/3-steps-to-designing-chatbot-conversations-like-a-professional-1c06de8d8a71 (Accessed: March 3, 2020.)
[23] See, A., Roller, S., Kiela, D. & Weston, J., (2019), “What makes a good conversation? How controllable attributes affect human judgments”, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. doi: 10.18653/v1/n19-1170
[24] Duan, N. B., (2017), “Designing Conversational UI with Information Architecture – Part 1”. https://chatbotsmagazine.com/designing-conversational-ui-with-information-architecture-part-1-e1fafa629031 (Accessed: March 1, 2020.)
[25] Monteon, J., (2017), “6 User Experience Tips for Designing Your Best Chatbot”. https://chatbotsmagazine.com/6-tips-for-designing-your-best-chatbot-591aba9c9eff (Accessed: March 1, 2020.)
[26] McDonnell, M. & Baxter, D., (2019), “Chatbots and Gender Stereotyping”, Interacting with Computers, Volume 31, Issue 2.
[27] UNESCO.org, (2019), “First UNESCO recommendations to combat gender bias in applications using artificial intelligence”. https://en.unesco.org/news/first-unesco-recommendations-combat-gender-bias-applications-using-artificial-intelligence (Accessed: February 13, 2020.)
[28] Fung, P., (2019), “This is why AI has a gender problem”, World Economic Forum. https://www.weforum.org/agenda/2019/06/this-is-why-ai-has-a-gender-problem/ (Accessed: February 20, 2020.)
[29] New World AI, (2019), “Why Is Artificial Intelligence Female?”. https://www.newworldai.com/why-is-artificial-intelligence-female/ (Accessed: February 20, 2020.)
[30] Arora, P., (2018), “Chatbot as woman: Did industrial-age biases creep into new-age tech?”, Economic Times. https://economictimes.indiatimes.com/tech/internet/chatbot-as-woman-did-industrial-age-biases-creep-into-new-age-tech/articleshow/65808385.cms?utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst (Accessed: February 20, 2020.)
[31] Elder, A., (2020), “Conversation from Beyond the Grave? A Neo‐Confucian Ethics of Chatbots of the Dead”, Journal of Applied Philosophy, Volume 37, Issue 1, Pages 73-88.
[32] Kretzschmar, K. et al., (2019), “Can Your Phone Be Your Therapist? Young People’s Ethical Perspectives on the Use of Fully Automated Conversational Agents [Chatbots] in Mental Health Support”, Biomedical Informatics Insights. https://doi.org/10.1177/1178222619829083
[33] Clifford, C., (2018), “Elon Musk: Mark my words – AI is far more dangerous than nukes”, cnbc.com. https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html (Accessed: March 6, 2020.)
[34] Nalini, B., (2019), “The Hitchhiker’s Guide to AI Ethics”, Towards Data Science. https://towardsdatascience.com/ethics-of-ai-a-comprehensive-primer-1bfd039124b0 (Accessed: March 5, 2020.)
[35] Devlin, B., (2017), “Algorithms or Democracy”, TDWI. https://tdwi.org/articles/2017/09/08/data-all-algorithms-or-democracy-your-choice.aspx (Accessed: March 6, 2020.)
[36] Devlin, H., (2017), “AI programs exhibit racial and gender biases, research reveals”, The Guardian. https://www.theguardian.com/technology/2017/apr/13/ai-programs-exhibit-racist-and-sexist-biases-research-reveals (Accessed: March 5, 2020.)
Bostrom, N. & Yudkowsky, E., (2011), “The Ethics of Artificial Intelligence”, draft for the Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith Frankish, Cambridge University Press.
[37] Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L., (2016), “The Ethics of Algorithms: Mapping the Debate”, Big Data & Society. (Accessed: March 4, 2020.)
[38] Olson, P., (2020), “Coronavirus Reveals Limits of AI Health Tools”, Wall Street Journal. https://www.wsj.com/articles/coronavirus-reveals-limits-of-ai-health-tools-11582981201 (Accessed: March 7, 2020.)
[39] Rittenhouse, L., (2020), “BBDO Agency Deploys Facebook Chatbot to fight disinformation amid coronavirus outbreak”, Ad Age. https://adage.com/article/agency-brief/bbdo-agency-deploys-facebook-chatbot-fight-disinformation-amid-coronavirus-outbreak/2242911 (Accessed: March 8, 2020.)