Google is exploring AI tools for journalists in conversations with publishers

These AI tools could assist journalists with headline options or different writing styles.

A Google spokeswoman said late on Wednesday that the company is exploring the use of artificial intelligence tools to write news articles and is in talks with news organizations about using the tools to assist journalists.

The spokeswoman did not name the publishers, but the New York Times reported that Google has held discussions with The Washington Post, Wall Street Journal owner News Corp, and the New York Times itself, among others.

For example, these AI tools could assist journalists with options for headlines or different writing styles, offering assistance that “enhances their work and productivity”, a Google spokesperson said, adding that the effort was at the “early stage of idea discovery”.

“Simply put, these tools are not intended to, and cannot, replace the essential role journalists play in reporting, producing and fact-checking their articles,” the spokesperson said.

However, some executives who saw Google’s pitch called it untenable, the NYT said, adding that the executives asked not to be identified. The tool is known internally at Google as Genesis, the NYT reported, citing people familiar with the matter.

A News Corp spokesperson declined to comment on the NYT report or the AI tool, but said, “We have an excellent relationship with Google, and we appreciate (Google CEO) Sundar Pichai’s long-standing commitment to journalism.”

The NYT and The Washington Post did not immediately respond to Reuters requests for comment outside regular working hours.

The news comes days after The Associated Press said it would partner with ChatGPT-owner OpenAI to explore the use of generative AI in news, a deal that could set a precedent for similar partnerships between the two industries.

Some outlets are already using generative AI for their content, but news publications have been slow to adopt the technology because of concerns about its tendency to generate factually incorrect information, as well as the difficulty of distinguishing content produced by humans from content produced by computer programs.
