Threat of deepfake pornography looms amid rising competition in AI development

Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms, or help design advertising campaigns.

But experts fear the darker side of these easily accessible tools could worsen something that primarily harms women: non-consensual deepfake pornography.

Deepfakes are videos and images that are digitally created or altered by artificial intelligence or machine learning. Porn created using the technology began spreading on the Internet several years ago when a Reddit user shared clips in which the faces of female celebrities were superimposed on the shoulders of porn actors.

Since then, deepfake creators have circulated similar videos and images targeting online influencers, journalists and others with public profiles. Thousands of videos exist across many websites. And some sites offer users the opportunity to create their own images, essentially allowing anyone to turn whoever they wish into sexual fantasies without their consent, or use the technology to harm former partners.

Experts say the problem grew as it became easier to create sophisticated and visually appealing deepfakes. And they say it could get worse with the development of generative AI tools that are trained on billions of images from across the internet and use existing data to spit out novel content.

“The reality is that the technology will continue to proliferate, will continue to evolve and will continue to be as easy as pushing a button,” said Adam Dodge, founder of EndTab, a group that provides training on technology-enabled abuse. “And as long as that happens, people will undoubtedly … continue to abuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”

Noelle Martin of Perth, Australia, has experienced that reality. The 28-year-old discovered deepfake porn of herself 10 years ago when, out of curiosity one day, she used Google to search for an image of herself. To this day, Martin says she does not know who created the fake images, or the videos of her engaging in sexual intercourse, that she would later find. She suspects someone took a picture posted on her social media page or elsewhere and doctored it into porn.

Horrified, Martin contacted various websites over the years in an attempt to get the images removed. Some did not respond. Others took them down, but she soon found them again.

“You can’t win,” said Martin. “It’s something that’s always going to be there. It’s like it’s ruined you forever.”

The more she spoke out, she said, the more the problem grew. Some people even told her that the way she dressed and posted images on social media contributed to the harassment — essentially blaming her for the images rather than the creators.

Eventually, Martin turned her attention to legislation in Australia, advocating for a national law that would fine companies 555,000 Australian dollars ($370,706) if they do not comply with removal notices for such content issued by online safety regulators.

But governing the internet is nearly impossible when countries have their own laws for content that is sometimes made halfway around the world. Martin, currently a lawyer and legal researcher at the University of Western Australia, says she believes the problem has to be handled through some sort of global solution.

Meanwhile, the companies behind some AI models say they are already blocking access to explicit images.

OpenAI says it removed explicit content from the data used to train the image generating tool DALL-E, which limits users’ ability to create those types of images. The company also filters requests and says it prevents users from creating AI images of celebrities and prominent politicians. Midjourney, another model, blocks the use of certain keywords and encourages users to flag problematic images to moderators.

Separately, startup Stability AI released an update in November that removes the ability to create explicit images using its image generator, Stable Diffusion. The changes came after reports that some users were using the technology to create celebrity-inspired nude images.

Stability AI spokesman Motez Bishara said the filter uses a combination of keywords and other techniques, such as image recognition, to detect nudity and returns a blurred image. But because the company releases its code to the public, it is possible for users to manipulate the software and generate what they want. Bishara said that Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or unethical purposes.”
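As a rough illustration of the kind of two-stage gate Bishara describes, a minimal sketch might combine a keyword check on the prompt with an image-recognition pass on the output, blurring anything the detector flags. This is not Stability AI's actual implementation: the `BLOCKED_TERMS` list, the `nudity_score` classifier and the threshold are hypothetical placeholders, and only the Pillow blur call is a real library API.

```python
from PIL import Image, ImageFilter

# Hypothetical keyword blocklist; a production system would be far larger
# and would typically pair it with learned text classifiers.
BLOCKED_TERMS = {"nude", "nsfw", "explicit"}


def nudity_score(image: Image.Image) -> float:
    """Placeholder for an image-recognition model that returns the
    estimated probability the image contains nudity. Not a real API."""
    return 0.0  # stub: treats everything as safe until a real model is plugged in


def filter_request(prompt: str, image: Image.Image,
                   threshold: float = 0.5) -> Image.Image | None:
    # Stage 1: refuse prompts containing blocked keywords outright.
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return None

    # Stage 2: run the generated image through the nudity detector and,
    # as the article describes, return a blurred image if it trips.
    if nudity_score(image) >= threshold:
        return image.filter(ImageFilter.GaussianBlur(radius=25))
    return image
```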

Some social media companies are also tightening their rules to better protect their platforms from harmful content.

TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company prohibited sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.

Gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have opened a deepfake porn website on his browser during a livestream in late January. The site featured fake photos of fellow Twitch streamers.

The company wrote in a blog post that Twitch already prohibited explicit deepfakes, but that now showing glimpses of such content — even if it is intended to express displeasure — “will be removed and there will be an enforcement.” And knowingly promoting, creating or sharing such content is grounds for an immediate ban.

Other companies have also tried to ban deepfakes from their platforms, but keeping them away requires diligence.

Apple and Google recently said that they removed an app from their app stores that was running obscene deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found that it was weaponized almost exclusively against women, and that the most commonly targeted individuals were Western actresses, followed by South Korean K-pop singers.

The same app removed by Google and Apple had run ads on Meta’s platforms, which include Facebook, Instagram and Messenger. Meta spokeswoman Dani Lever said in a statement that the company’s policy prohibits both AI-generated and non-AI adult content, and that it has barred the app’s page from advertising on its platforms.

In February, Meta, as well as adult sites such as OnlyFans and Pornhub, began participating in an online tool called Take It Down, which allows teens to report explicit images and videos of themselves on the internet. The reporting site works for regular images as well as AI-generated content, which has become a growing concern for child protection groups.

“When people ask our senior leadership, what are the boulders coming down the hill that we’re concerned about? The first is end-to-end encryption and what that means for child safety. And then the second is AI and, specifically, deepfakes,” said Gavin Portnoy, a spokesman for the National Center for Missing and Exploited Children, which operates the Take It Down tool.

“We haven’t been able to give a straight answer to that yet,” Portnoy said.

