Written by Tanya Newhouse
CEO at Clevertar
I’m often asked whether a Clevertar digital assistant (chatbot) has to be female.
It’s interesting to note that although we’ve created a range of characters of different genders and from diverse backgrounds for our clients to offer their customers, right now all of our ‘live’ Clevertars are indeed female.
Why might this be so?
The short answer is that popular digital assistants, such as Siri, Alexa, or the sat-nav voice, are all female by default, and that familiarity creates an affinity with female chatbots in general. Our clients know that their customers will feel comfortable with a female character because they’re simply already used to it.
But why are these digital assistants female – or even gendered at all?
Once again, there is a short answer: because the audible synthetic voice is female, the chatbot reads as female-gendered. However, we know that digital assistants are more than an audible voice alone. Chatbot responses to user interactions are infused with personality so that users can engage in conversation more naturally. This is especially noticeable in programmed responses and small talk. For example, an ‘advisor’ chatbot should also sound authoritative and trustworthy through the language it uses.
In this way, a chatbot’s communication style and attitude, together with its purpose and functionality, can also evoke gender.
However, this link has dismayed researchers who have explored the issue through the lens of gender equity. Given that digital assistants specialise in helpful though menial work, such as sending a text, deliberately evoking a female character seems to express unconscious bias. Last year, UNESCO co-wrote a report asking whether ‘Robots are sexist’, given their apparently subservient function and submissive tone. Just last month, Australian researchers Yolande Strengers and Jenny Kennedy wrote that “This digital feminised workforce provides continual reinforcement of an age-old assumption that a woman’s place is in the home, taking care of the occupants’ physical, mental, emotional, and health needs”.
My own dilemma therefore – as a feminist and participant in the chatbot industry – is how to reinforce gender diversity while making AI technology readily accessible to customers.
I’d like to start by challenging the assumption that users automatically link female chatbots with ‘assistance’ or ‘caring’ conversations. In fact, in our experience, users are open-minded about the gender of their chatbot assistant.
We have been able to see this openness first hand.
Clevertar has developed a number of digital health programs for our clients in areas such as heart failure, mental health, and pain management, in which a Clevertar chatbot becomes a personal coach and companion to the user – clearly taking on a ‘caring’ role. Importantly, we were able to offer customers a choice of character. On sign-up, users were randomly presented with one of six diverse characters, and although they could change at any time, most people simply kept their default. Since content was identical across all characters, this randomisation gave us the ideal opportunity to test engagement with non-typical digital assistants.
From these tests, I was delighted to find that the chatbot’s gender made no difference to the level of user engagement with the health program. It may have even enhanced it. One user reported back “I finished my coaching sessions and reviewed the material for some time after completion to keep me on track. This is going to sound crazy, but I miss Ethan – my coach – and his funny little mannerisms. When I’m practicing the skills I learned, I often recall his face and words.”
We concluded that there is far less risk in choosing a male (or racially diverse) digital assistant than first thought. Quality content, and the functionality that enables users to achieve their goals, matter far more for user satisfaction. Nor does tone of voice and language need to be reinforced by reference to gender: if, say, Luke were ‘authoritative and calming’ like Zoe, SA Health’s Covid-19 Assistant, he would be similarly acceptable. Users just don’t care that much about gender when they get the answer they’re looking for, although they do look for someone they like.
I’m therefore optimistic about an unbiased technology future. However, breaking new ground does need leadership, so I’d love to hear from you if diversity in the chatbot world is something you’d like to explore further.