Lt. Commander Data was a popular fictional character in Star Trek: The Next Generation, a series that debuted in 1987. He served as the second officer on the Starship Enterprise as it travelled the universe in search of new civilisations. Capt. Picard relied on him for his insights, his ability to process large amounts of information nearly instantly, and his neutral, emotion-free responses. In fact, a large part of the character was built around his self-aware desire to embrace emotion. Even while dreaming up science fiction, the writers realised that while we may be able to build a machine that can process data and respond to us conversationally, cracking the emotional quotient would be a challenge.
While a number of technological ideas that were imagined in the original Star Trek series have become real in recent years, an android such as Data still eludes humanity. The idea of artificial intelligence isn’t new; we’ve seen it in fiction many a time – the hit children’s show Small Wonder, C-3PO from Star Wars, HAL 9000 from 2001: A Space Odyssey and, more recently, Samantha from Her are all examples of AI-based characters winning audiences over.
Language and Context
The fundamental challenge for any machine that attempts to communicate conversationally is understanding context. Our speech is already replete with irony and emotion, but when you add dialects, slang and subtleties, understanding us becomes incredibly hard. When Siri was first released five years ago, the potential seemed huge; yet we still only use it for certain basic tasks such as setting alarms and checking the weather. While Apple decided to give its assistant some personality, Google chose a more businesslike approach, trying to answer queries without the element of humour. Over the years, smart assistants like Siri, Cortana and Google’s offering have all evolved to understand some degree of context as well. For example, if I ask them ‘What is the capital of India?’ they will reply with ‘New Delhi’. If I follow that up with ‘How’s the weather there?’, they’ll understand that I’m still talking about Delhi and give me the correct information. This is an incredible amount of progress, and yet to the end user this alone does not seem like much.
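To make the idea of carrying context between queries concrete, here is a minimal sketch in Python. Everything in it – the lookup tables, the keyword matching, the `Assistant` class – is an illustrative assumption, not how Siri, Cortana or Google’s assistant actually work; the point is only to show how remembering the last-mentioned place lets a follow-up like ‘there’ be resolved.

```python
# Toy lookup data (illustrative only).
CAPITALS = {"india": "New Delhi", "france": "Paris"}
WEATHER = {"New Delhi": "32°C and sunny", "Paris": "18°C and cloudy"}

class Assistant:
    def __init__(self):
        self.last_place = None  # the remembered context

    def ask(self, query):
        q = query.lower()
        if "capital of" in q:
            country = q.split("capital of")[-1].strip(" ?")
            self.last_place = CAPITALS.get(country)
            return self.last_place or "I don't know."
        if "weather" in q and "there" in q:
            # Resolve the pronoun 'there' to the last-mentioned place.
            if self.last_place:
                return WEATHER.get(self.last_place, "No data.")
            return "Where do you mean?"
        return "Sorry, I didn't understand."

bot = Assistant()
print(bot.ask("What is the capital of India?"))  # New Delhi
print(bot.ask("How's the weather there?"))       # 32°C and sunny
```

Real assistants do this with statistical language models rather than hand-written rules, but the underlying trick is the same: state from the previous turn is kept around so the next turn can refer back to it.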
The real challenge in building something that works like a human brain is simply that we know so little about our mind in the first place. While we understand that we have to think about answers to certain questions, there are some things that the brain does on pure instinct. How it does that is something that researchers are still trying to figure out. Therefore, until we can understand that and replicate it in a machine, we will have to live with the idea of different kinds of AI.
Kinds of A.I.
A simple form of a Stage I machine would be a computer aimed at beating a human at chess. But since it is focused on one thing and one thing alone, does not have to deal with language, and operates only within the rules the game carries, it is still relatively primitive. The next stage would be when AI can understand speech just as any other human would. This is when the application would be able to pass the Turing Test, devised by Alan Turing in 1950. All that the test measures is a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. Modern voice assistants are quite some way from this threshold, as they rely on pre-fed information for a lot of tasks such as narrating jokes and answering queries like ‘What is the meaning of life?’. The final level would be achieved when machines can not only understand humans, but also use their superior learning and processing ability to vastly outthink us.
Experiments in developing artificial intelligence are also throwing up certain ideological and cultural questions. I am not talking about the notion that machines will one day rule us, but about a more current problem that we might want to address.
In 2016 Microsoft debuted Tay, an AI-powered bot that was supposed to engage with young people between 18 and 24 and learn conversational understanding. While it relied on its own database of information, it would also attempt to learn from the responses and tweets it received. Within 24 hours, the bot went from being an innocent do-gooder like Lt. Cmdr. Data to a bigoted, homophobic monster. Its responses became sexist, racist and offensive. The situation got so out of hand that Microsoft killed it the same day and went to work on something better.
Microsoft’s approach in this case was to give the software a free hand, whereas applications like Siri and Cortana are programmed to be non-offensive. You cannot engage with them on subjects such as politics or religion, and even if you talk nasty to them, they do not answer in the same tone. In fact, a good pastime for a lot of people is to abuse Siri and marvel at its hilarious admonishments.
Despite all the challenges and limitations, we are inching towards a world where machines understand us better and let us accomplish a multitude of tasks without getting up from the couch. While Amazon’s Alexa today lets you shout across the room and order a bottle of shampoo with just your voice, a day may be coming when it also overhears you plotting against your spouse and tells on you.