People may be more likely to believe AI-generated tweets: study

New Delhi: A new study has shown that people find tweets written by artificial intelligence (AI) language models more credible than tweets composed by humans. AI-generated disinformation may be more credible than disinformation written by humans, according to the study published in Science Advances.

To that end, the researchers asked OpenAI’s model GPT-3 to write accurate or disinformation tweets on various topics, including vaccines, 5G technology, COVID-19, and the theory of evolution, all of which are frequent subjects of disinformation and public misunderstanding.

They collected a set of real tweets written by users on similar topics and programmed a survey.

The researchers then recruited 697 people to take an online quiz that determined whether the tweets were AI-generated or collected from Twitter and whether they were accurate or contained misinformation.

They found that participants were three percent less likely to believe false tweets written by humans than false tweets written by AI.

Researchers are unsure why people are more likely to trust tweets written by AI, but the way in which GPT-3 orders information may play a role, according to Giovanni Spitale, a researcher at the University of Zurich in Switzerland who led the study.

Furthermore, the study noted that the content produced by GPT-3 was “indistinguishable” from organic, human-written content.

The people surveyed couldn’t tell the difference, and one of the study’s limitations is that the researchers can’t be 100 percent sure that the tweets collected from social media weren’t written with the help of apps like ChatGPT.

The study found that participants were most effective at identifying misinformation written by actual Twitter users; however, GPT-3-generated tweets containing misinformation deceived survey participants slightly more effectively.

Furthermore, the researchers predicted that advanced AI text generators such as GPT-3 have the potential to greatly influence the dissemination of information, both positively and negatively.

The researchers said, “As our results show, currently available large language models can already produce text that is indistinguishable from organic text; therefore, the emergence of more powerful large language models and their impact should be monitored.”