UK Prime Minister Rishi Sunak has an AI ‘warning’ for humanity

UK Prime Minister Rishi Sunak has warned of the risks of artificial intelligence (AI), saying the technology must be developed and deployed responsibly to avoid “losing control” of it.
In a speech at the Royal Society, the UK’s academy of sciences, on Thursday, Sunak said that AI has the potential to transform life for the better, but warned of risks such as its use for malicious purposes or the development of AI that is beyond human control.
“In the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as super intelligence,” Sunak said in his speech.
Sunak said that AI could make it easier than ever to develop weapons. He also warned that criminals could exploit AI for cyber-attacks, disinformation, fraud, or even child sexual abuse.
UK to spend billions on supercomputers, quantum computers
He also announced that the UK would set up the world’s first AI safety institute, which would be responsible for testing new types of AI for a range of risks. “It will advance the world’s knowledge of AI safety and it will carefully examine, evaluate, and test new types of AI, so that we understand what each new model is capable of; exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all,” he said. The UK PM added that the country would invest billions in supercomputers and quantum computers.
Sunak’s warning about the risks of AI comes at a time when the technology is developing rapidly. AI is already being used in a wide range of applications, from self-driving cars to facial recognition software. However, as AI becomes more powerful and sophisticated, so too do the potential risks. Google, Microsoft, and OpenAI have all warned about the misuse of AI and stressed the need to use it responsibly.