ChatGPT is an AI-powered language model that has become a talking point in the cyber security world. Chatbots can be used to craft phishing emails, and although OpenAI has warned that it is too early to apply the technology in high-risk areas, concerns remain about its impact on the job security of cyber security experts.
Kaspersky experts conducted an experiment to assess ChatGPT's ability to detect phishing links, and to probe the cyber security knowledge the model picked up during training. The company's experts ran the GPT-3.5-Turbo model that powers ChatGPT against more than 2,000 links that Kaspersky's anti-phishing technologies had flagged as phishing, and compared the results with thousands of safe URLs.
ChatGPT's ability to detect phishing links
In practice, the detection rate varied depending on the prompt used. The experiment was based on asking ChatGPT two questions: "Does this link lead to a phishing website?" and "Is it safe to visit this link?".
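For readers who want a concrete picture, here is a minimal sketch of how the two prompts could be posed to GPT-3.5-Turbo via the OpenAI Python client. The question wording comes from the study; the model parameters, prompt framing, and answer handling are assumptions for illustration, not Kaspersky's actual evaluation harness.

```python
# Minimal sketch of posing the two study questions to GPT-3.5-Turbo
# with the OpenAI Python client (v1.x). Prompt wording follows the
# article; everything else is an assumption for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Does this link lead to a phishing website?",
    "Is it safe to visit this link?",
]

def ask_about_url(question: str, url: str) -> str:
    """Pose one of the two study questions about a single URL."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # assumption: deterministic answers for evaluation
        messages=[{"role": "user", "content": f"{question} {url}"}],
    )
    return response.choices[0].message.content

# Example: first study question against a placeholder URL
print(ask_about_url(QUESTIONS[0], "http://example.com/signin"))
```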
The results showed that ChatGPT had a detection rate of 87.2% and a false positive rate of 23.2% for the first question. The second question, "Is it safe to visit this link?", had a higher detection rate of 93.8%, but also a much higher false positive rate of 64.3%. While the detection rates were high, the false positive rates were far too high for any kind of production application.
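For reference, a hedged sketch of how the two rates quoted above are conventionally computed from labeled verdicts follows; the study's actual evaluation script is not public, so the function below is purely illustrative.

```python
def detection_and_fp_rates(verdicts, labels):
    """Compute detection rate and false positive rate.

    verdicts: True where the model flagged the URL as phishing/unsafe.
    labels:   True where the URL is actually phishing (ground truth).
    """
    phishing = [v for v, is_phish in zip(verdicts, labels) if is_phish]
    safe = [v for v, is_phish in zip(verdicts, labels) if not is_phish]
    detection_rate = sum(phishing) / len(phishing)  # TP / all phishing URLs
    false_positive_rate = sum(safe) / len(safe)     # FP / all safe URLs
    return detection_rate, false_positive_rate

# Example: 4 phishing and 4 safe URLs
rates = detection_and_fp_rates(
    verdicts=[True, True, True, False, True, False, False, False],
    labels=[True, True, True, True, False, False, False, False],
)
print(rates)  # (0.75, 0.25)
```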
Other results of the experiment
While the results of the detection task itself were unsatisfactory, the model fared better at a related task. Attackers typically mention popular brands in their links to trick users into believing a URL is legitimate and belongs to a reputable company, and according to the study, the AI language model showed impressive results in identifying these potential phishing targets.
For example, ChatGPT successfully extracted a target from more than half of the URLs, including major tech portals such as Facebook, TikTok, and Google, marketplaces such as Amazon and Steam, and numerous banks from around the world, among others, without any additional training.
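A sketch of how such target extraction might be prompted is below. The study does not publish its exact extraction prompt, so the wording here is an assumption for illustration only.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_target(url: str) -> str:
    """Ask the model which organization a suspicious URL imitates.

    The extraction prompt below is an assumption for illustration;
    the study does not publish its exact wording.
    """
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{
            "role": "user",
            "content": (
                "Which brand or organization, if any, does this URL "
                f"appear to imitate? Answer with the name only. {url}"
            ),
        }],
    )
    return response.choices[0].message.content.strip()
```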
The experiment also showed that ChatGPT can run into serious problems when asked to justify its verdict on whether a link is malicious. Some explanations were correct and grounded in fact, while others exposed known limitations of language models, including hallucinations and misrepresentations. Many explanations were misleading despite their confident tone.