At Google’s I/O conference on May 8th, chief executive officer Sundar Pichai demonstrated a new and sophisticated Google AI system. The new AI, called Google Duplex, can behave like a human, taking a user’s data and using it to arrange appointments. In the demo, the technology booked an appointment at a hairdresser’s and a table at a restaurant. Duplex did an excellent job of sounding like a person, pausing in the right places and holding back like a real human being. The realism of its occasional “mmhmm” was striking.
Sounding like a real person on the phone, Google’s AI shocked the crowd at I/O, as reasoning and thinking critically like a human has been a supreme challenge for AI over decades. Tellingly, the recipients of the calls didn’t suspect they were talking to an AI. It is a clever idea, but it raises social and ethical questions behind Google’s great achievement.
Credit: Business Standard & Google ©
There are concerns amongst the public about whether Google should be obliged to inform people when they are talking to an AI that can mimic a human being. We are all aware that voice-enabled assistants such as Siri and Alexa can perform tasks at the sound of our voice, but can we also trust this AI? The demo left a balance between the positive, that the AI was incredible, and the negative: is this technology good practice, and can we trust what we hear?
“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that,”
“We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months,” says Google in its blog post.
At some point users may be able to tell they are speaking to an AI; we have lived with bots for decades, some useful, others very destructive to our cyber-security. A lack of natural fillers such as “hmm” and “um”, or a failure to respond to something in an anticipated way, can be telltale signs that you are talking to a voice bot.
However, the demo showed a clever twist: Duplex can get around misunderstandings by repeating and rephrasing a question until the other person understands, which makes it feel like a genuinely intelligent system, although analysis suggests these are pre-programmed gambits. If it senses an unexpected pause, Duplex can ask, “Can you hear me?”.
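Such pre-programmed gambits can be pictured as a small fallback policy layered on top of the speech recogniser: when confidence in what was heard drops, or the line goes quiet, the system falls back to a scripted repair phrase. The sketch below is purely illustrative; all the names and thresholds are assumptions, not Google’s implementation.

```python
# Illustrative sketch of pre-programmed conversational gambits: fall back
# to a scripted repair phrase when recognition confidence is low or the
# caller goes silent. Hypothetical names and thresholds throughout.

LOW_CONFIDENCE = 0.5      # below this, assume we misheard the caller
MAX_SILENCE_SECS = 3.0    # an unexpected pause this long triggers a check-in

def rephrase(question):
    """Trivial stand-in for a library of hand-written rephrasings."""
    alternatives = {
        "What time works for you?": "When would you like to come in?",
    }
    return alternatives.get(question, question)

def choose_gambit(transcript, confidence, silence_secs, last_question):
    """Pick a scripted repair move, or None to continue normally."""
    if silence_secs > MAX_SILENCE_SECS:
        return "Can you hear me?"
    if confidence < LOW_CONFIDENCE:
        # Rephrase rather than repeat verbatim, as the demo showed.
        return "Sorry, let me rephrase: " + rephrase(last_question)
    return None
```

For example, a garbled reply (`confidence=0.2`) to “What time works for you?” would produce the rephrased follow-up, while a long silence produces “Can you hear me?”.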
Duplex is not yet complete; this was only an experiment. So far it has handled only the everyday tasks we can already do with Siri: booking an appointment, checking opening hours, and making a reservation at a restaurant. In case a call goes wrong, Google describes a “self-monitoring capability” in its blog post, whereby a person takes over when the system recognises that a task exceeds its capabilities. “In these cases, it signals to a human operator, who can complete the task,” says Google. While this provides a backup, a system that needs a person to intervene beyond the capabilities of Duplex can become fraught with issues, and like other new voice-enabled devices, it will be fallible if it misinterprets what a user says.
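The “self-monitoring” handoff Google describes can be imagined as a confidence check on each task: if the system judges a task beyond its capabilities, it signals a human operator instead of pressing on. A minimal sketch of that idea, with hypothetical names, callbacks, and threshold (this is an assumption about the architecture, not Google’s published code):

```python
# Hypothetical sketch of a self-monitoring handoff: the bot estimates how
# confident it is that it can complete the task, and signals a human
# operator when that estimate falls below a threshold.

HANDOFF_THRESHOLD = 0.7

def handle_task(task, estimate_confidence, bot_complete, alert_operator):
    """Run the task with the bot, or escalate it to a human operator."""
    confidence = estimate_confidence(task)
    if confidence < HANDOFF_THRESHOLD:
        alert_operator(task)  # "it signals to a human operator"
        return "handed_off"
    return bot_complete(task)

# Example wiring with stub callbacks: the bot is unsure, so it escalates.
result = handle_task(
    task="book a table for four",
    estimate_confidence=lambda t: 0.4,
    bot_complete=lambda t: "booked",
    alert_operator=lambda t: print("Operator needed for: " + t),
)
```

With a high-confidence estimate the same call would simply return the bot’s own result, so the human only sees the hard cases.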
For more reading, check out Google’s blog post.