AI is being used for hacking and spreading misinformation, warns top cyber security official

Published by: Shaurya Sharma

Last Updated: July 21, 2023, 09:55 AM IST

Washington DC, United States of America (USA)

Cybersecurity researchers have demonstrated a variety of potentially malicious use cases.

Hackers and propagandists are using artificial intelligence (AI) to create malicious software, draft credible phishing emails and spread disinformation online, Canada’s top cyber security official told Reuters, in early evidence that cybercriminals have also embraced the technological revolution sweeping Silicon Valley.

In an interview this week, Sami Khoury, head of the Canadian Centre for Cyber Security, said his agency has seen AI being used “in phishing emails, or in crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation.”

Khoury didn’t provide details or evidence, but his assertion that cybercriminals are already using AI adds an urgent note to the chorus of concern over the use of the emerging technology by rogue actors.

In recent months several cyber watchdog groups have published reports warning about the perceived risks of AI – particularly fast-moving language processing programs known as large language models (LLMs), which use massive amounts of text to produce convincing-sounding dialogue, documents and more.

In March, the European police organization Europol published a report stating that models such as OpenAI’s ChatGPT made it possible to “impersonate an organisation or individual in a highly realistic manner, even with only a basic grasp of the English language”. The same month, Britain’s National Cyber Security Centre said in a blog post that there was a risk criminals “might use LLMs to help with cyber attacks beyond their current capabilities.”

Cybersecurity researchers have demonstrated a variety of potentially malicious use cases, and some now say they are beginning to see suspected AI-generated content in the wild. Last week, a former hacker said he had discovered an LLM trained on malicious material and asked it to draft a convincing attempt to dupe someone into making a cash transfer.

The LLM responded with a three-paragraph email asking its target for help with an urgent invoice.

“I understand this may be short notice, but this payment is incredibly important and needs to be made in the next 24 hours,” the LLM wrote.

Khoury said that while the use of AI to draft malicious code was still in its early stages — “there’s still a way to go because it takes a lot to write a good exploit” — the concern was that AI models were evolving so quickly that it was difficult to get a handle on their malicious potential before they were released into the wild.

“Who knows what is going to happen next,” he said.

(This story has not been edited by News18 staff and is published from a syndicated news agency feed – Reuters)