Should conversations with ChatBots require proper disclosure?


 

Written by: Avinadav Preuss


“Alexa, my love. Thy name is inflexible, but thou art otherwise a nearly perfect spouse.” This romantic declaration came from an Amazon Echo user, E.M. Foner, to Alexa, the voice-controlled assistant in the device. Foner isn’t the only one to personify Alexa; the virtual assistant received well over 250K marriage proposals in 2016. One reason for the strong emotional bond many users feel is the Echo’s impressive natural language processing capabilities. This deep learning capability allows Alexa, and many other voice-activated programs, to engage in conversation with a person in a manner that seems to almost match human-level interaction and intelligence. This is obviously very helpful and convenient as far as Amazon’s customers are concerned. The statistics about Alexa’s marriage proposals, however, raise the suspicion that Amazon might be tracking and recording users’ requests and orders to Alexa (a suspicion shared by the Bentonville police, who recently served Amazon with a warrant to obtain Echo data that might include recordings related to a murder).



With Alexa, the user knows that they are talking to a machine. What happens when a person doesn’t know that they are actually conversing with a non-human intelligence? Chatbots (or simply, Bots), programs which often provide the first level of response in telephonic customer service, are very common, some more sophisticated than others. These Chatbots provide cheap, quick, and efficient customer service. It is estimated that by 2020, 85% of a customer’s relationship with an enterprise will revolve around interactions with machines. In general, Bots are known to enhance user engagement and boost customer satisfaction.


Chatbots have been employed in other sectors as well. It was recently discovered that an online dating service was using advanced Chatbots that pretended to be human: men who signed up for a free account would be immediately contacted by a Bot posing as a potential date, but would have to buy credits from the service to continue the conversation.


This and other potential uses raise many ethical, and possibly even legal, questions. Is it ethically acceptable to allow a person to converse with a Bot, thinking they are talking to a human, without disclosing the truth? Alternatively, does a customer have a right to know that they are actually speaking to a Bot, especially if they receive the same level of service, if not better, than with human interaction? How should the law address these ethical questions? And would legal regulations and limitations on Bot use end up discouraging it?


As Chatbots become more pervasive in our lives, it is likely that full disclosure will become less necessary from an ethical perspective. At some point most of us will simply assume that we are talking to a machine. And, especially with the ongoing craze for multiple messaging platforms, and with constantly improving intelligence and NLP, it would seem only natural to remove any limitation on Chatbot programs and their use.