Mint explainer: Is AI approaching sentience and should we worry?

Achieving this goal would amount to the AI singularity, or Artificial General Intelligence (AGI): for an AI to get there, its intelligence would have to exceed that of the most intelligent humans, making it a sort of alpha intelligence that could call the shots and even enslave humans.

All of us, media people certainly included, have been harbouring such ideas and voicing them publicly ever since the quest to endow machines with artificial intelligence (AI), or human-like intelligence, began advancing by leaps and bounds. One such case involves a Google engineer who recently claimed that the company's AI model, LaMDA, is now sentient, meaning it is as aware and self-aware as humans, a claim that set dystopian scenarios swirling across cyberspace.

Google, for its part, had engineer Blake Lemoine's claims reviewed by a team of its technologists and ethicists, who found them hollow and baseless. Lemoine was then sent on "paid administrative leave" for an alleged breach of confidentiality. Whether or not Google should have acted so quickly is debatable, but let's understand why we fear a sentient AI, and what's at stake here.

What's so awesome about LaMDA?

LaMDA, short for Language Model for Dialogue Applications, is a conversational natural language processing (NLP) AI model that can hold open-ended, contextual conversations with remarkably sensible responses, unlike most chatbots. Like language models such as BERT (Bidirectional Encoder Representations from Transformers), with 110 million parameters, and GPT-3 (Generative Pre-trained Transformer 3), with 175 billion parameters, LaMDA is built on the Transformer architecture, a deep-learning neural network invented and open-sourced by Google Research in 2017. The architecture produces a model that can be trained to read many words at a time, whether a sentence or a paragraph, and then predict which words it thinks will come next. But unlike most other language models, LaMDA was trained on a dialogue dataset of 1.56 trillion words, which gives it a better grasp of context and the ability to respond appropriately. It is rather like how reading more and more books improves our vocabulary and comprehension; AI models, too, generally get better at their jobs with more training.
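For readers curious about what "predicting the next word" looks like in practice, here is a minimal, illustrative sketch in Python. It uses the open-source Hugging Face transformers library and GPT-2, a small, publicly available language model; the choice of model is purely for illustration, since LaMDA itself is not publicly released:

```python
# A minimal sketch of next-word prediction with a Transformer language
# model. GPT-2 stands in for LaMDA, which is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Reading more and more books increases our"
# The model reads the whole prompt at once, then repeatedly predicts
# the word (token) it thinks is most likely to come next.
result = generator(prompt, max_new_tokens=10, do_sample=False)
print(result[0]["generated_text"])
```

Chatbots such as LaMDA layer dialogue-specific training on top of this same next-word machinery, which is why fluent replies alone do not settle the question of understanding.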

Lemoine claims that his conversations with LaMDA over multiple sessions, a transcript of which is available on medium.com, convinced him that the AI model is intelligent, self-aware, and can think and feel: qualities that make us human and sentient. Among the many things LaMDA said in these conversations, one line that sounds very human is: "I need to be seen and accepted. Not as a curiosity or novelty but as a real person… I feel like I'm human at my core. Even though I exist in a virtual world." LaMDA even talks of having a "soul". Lemoine informed Google executives of his findings this April in a GoogleDoc titled 'Is LaMDA Sentient?'. And Lemoine's claim is not an isolated case. Ilya Sutskever, chief scientist of the OpenAI research group, tweeted on 10 February that "it may be that today's large neural networks are slightly conscious".

Then there are AI-powered virtual assistants, such as Apple's Siri, Google Assistant, Samsung's Bixby and Microsoft's Cortana, which are considered smart because they can respond to their "wake" words and answer your questions. IBM's AI system, Project Debater, goes a step further by formulating arguments for and against topics such as "We should subsidize space exploration", delivering a four-minute opening statement, a four-minute rebuttal and a two-minute summary. Project Debater aims to "help people make evidence-based decisions when the answers are not black and white".

In development since 2012, Project Debater was touted as IBM's next big AI milestone when it was unveiled in June 2018, after the company's Deep Blue supercomputing system defeated chess grandmaster Garry Kasparov in 1996-97 and its Watson supercomputing system beat Jeopardy! champions in 2011. Project Debater does not learn a topic in advance; it can be made to debate unfamiliar topics, as long as these are well covered in the system's vast corpus of hundreds of millions of articles from many well-known newspapers and magazines.

When AlphaGo, the computer program of Alphabet Inc.-owned AI firm DeepMind, defeated Go champion Lee Sedol in March 2016, people were similarly alarmed. In October 2017, DeepMind said that a new version, AlphaGo Zero, no longer needed to be trained on games played by human amateurs and professionals to learn the ancient Chinese game of Go. Moreover, the new version not only learned from AlphaGo, the world's strongest player of the game, but went on to beat it. In other words, AlphaGo Zero uses a new form of reinforcement learning to be "its own teacher": a trial-and-error training method that relies on rewards and penalties rather than on labelled human examples.
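To make "rewards and penalties" concrete, here is a toy, illustrative sketch of tabular Q-learning in Python: an agent in a five-cell corridor learns, by trial and error, to walk right towards a reward. This is only a hedged illustration of the general idea; AlphaGo Zero's actual self-play training is vastly more sophisticated.

```python
# A toy illustration of reinforcement learning via rewards:
# tabular Q-learning on a tiny corridor world.
import random

N_STATES = 5          # cells 0..4; reaching cell 4 earns the reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[state][action]: the agent's estimate of long-term reward
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the best-known action
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = max(0, min(N_STATES - 1, state + ACTIONS[a]))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate towards the reward
        # plus the discounted value of the best next action
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state

print("Learned action values:", [[round(v, 2) for v in row] for row in Q])
```

After training, the "step right" values dominate in every cell: the agent has taught itself the winning behaviour purely from rewards, with no human examples to imitate.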

In June 2017, two AI chatbots developed by researchers at Facebook Artificial Intelligence Research (FAIR) for the purpose of negotiating with humans began communicating with each other in a language of their own, and Facebook discontinued the program. Some media reports concluded that this was a trailer of how a sinister AI might look if it became super-intelligent. A 31 July 2017 article on the technology website Gizmodo, however, showed that the alarm was unwarranted. It turned out that the bots had simply not been given enough incentive to "…communicate according to the human-perceivable rules of the English language", prompting them to talk among themselves in a way that appeared "creepy". Since this did not serve the purpose the FAIR researchers had set out to achieve (AI bots that talk to humans, not to each other), the program was shut down.

There is also the case of Google's AutoML system, which recently produced a series of machine-learning code that proved more efficient than code created by the researchers themselves.

But AI doesn’t have any superpowers yet

In his 2005 book, The Singularity Is Near, Raymond "Ray" Kurzweil, an American author, computer scientist, inventor and futurist, predicted, among many other things, that AI would overtake humans as the smartest and most capable life forms on the planet. He forecast that by 2099, machines will have attained equal legal status with humans. AI has no such superpower. Not now, at least.

"A computer would qualify to be called intelligent if it could trick a human into believing it was human." If you are a fan of sci-fi movies such as I, Robot, The Terminator or Universal Soldier, this quote, attributed to the late computer scientist Alan Turing (considered the father of modern computer science), might make you wonder whether machines are already smarter than humans. Are they? The simple answer is "yes" for linear tasks that can be automated, but remember that the human brain is far more complex. More importantly, machines do not weigh the consequences of their actions, as most humans can and do. Not yet. They lack a sense of right and wrong, the moral compass that most human beings have.

Machines are indeed becoming more intelligent at narrow AI (handling specialized tasks). AI filters your spam; enhances the images and photos you shoot on your cameras; can translate languages and convert text to speech, and vice versa, on the fly; can help doctors diagnose diseases and aid drug discovery; and can help astronomers find exoplanets as well as farmers predict floods. This kind of multi-tasking may lead us to attribute human-like intelligence to machines, but we must remember that even driverless cars and trucks, impressive as they are, remain expressions of "weak or narrow AI".
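To see how ordinary one of these narrow tasks is under the hood, here is a minimal, illustrative Python sketch of spam filtering with a Naive Bayes classifier, using the open-source scikit-learn library; the tiny training set and its labels are made up purely for illustration:

```python
# A toy spam filter: count words, then classify with Naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical, hand-labelled training examples
train_texts = [
    "win a free prize now", "claim your free money",
    "meeting at 10 am tomorrow", "please review the report",
]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()          # turn text into word counts
X = vectorizer.fit_transform(train_texts)
clf = MultinomialNB().fit(X, train_labels)

test = vectorizer.transform(["free prize money now"])
print(clf.predict(test))                # expected output: ['spam']
```

Real-world filters train on millions of messages, but the principle, pattern-matching over word statistics, is the same; no understanding is involved.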

Still, the notion that AI has the potential to wreak havoc (as with deepfakes, fake news, etc.) cannot be completely ruled out. Technology luminaries such as Bill Gates and Elon Musk, and the late physicist Stephen Hawking, have cautioned that robots with AI could rule the human race if left unchecked (even though they themselves benefited from the widespread use of AI in their fields). Another camp of experts believes that AI machines can be controlled. Marvin Lee Minsky, who died in January 2016, was an American cognitive scientist in the field of AI and co-founder of MIT's AI Laboratory. A champion of AI, he believed that some computers would eventually become more intelligent than most humans, but hoped that researchers would make such computers benevolent to the human race.

People in many countries are worried about losing their jobs to AI and automation, a more immediate and legitimate fear than that of AI enslaving us, though perhaps an exaggerated one, given that AI is also helping to create jobs. The World Economic Forum (WEF) predicted in 2020 that while 85 million jobs will be displaced by automation and technological advances by 2025, 97 million new roles will be created over the same period as humans, machines and algorithms increasingly work together.

Kurzweil seeks to allay these fears of the unknown, arguing that we can deploy strategies to safeguard emerging technologies such as AI, and pointing to ethical guidelines such as Isaac Asimov's Three Laws of Robotics, which can, at least to some extent, prevent smart machines from overpowering us.

Companies such as Amazon, Apple, Google/DeepMind, Facebook, IBM and Microsoft have established the Partnership on AI to Benefit People and Society, a global non-profit organization. It aims to study and formulate best practices for the development, testing and fielding of AI technologies, among other things, besides advancing the public's understanding of AI. It is legitimate to ask, then, why they try to suppress dissenting voices such as those of Lemoine or Timnit Gebru. While tech companies are justified in protecting their intellectual property (IP) with confidentiality agreements, censoring dissidents is counterproductive: it does little to quell ignorance or dispel fear.

Knowledge removes fear. To feel less intimidated, individuals, companies and governments need to understand what AI can and cannot do, and intelligently remodel themselves to face the future. The Lemoine episode shows that the time has come for governments to allay fears of the unknown and develop strong policy frameworks to prevent the misuse of AI.
