February 17, 2025 - The New York Times has decided to bring AI into its product and editorial teams, and says its internal tools may eventually be used to write social media copy, SEO headlines, and some code.
According to a Semafor report published today, the company told staff in an email that it will offer AI training to the newsroom and debut a new internal tool called "Echo." The company also shared editorial guidelines for using AI and approved several AI products that employees can use to develop web products and editorial ideas.
"Generative AI helps journalistsDigging for the truthand help more peopleUnderstanding the world.. Machine learning has already helped us cover stories that would otherwise be difficult to cover, and generative AI promises to furtherEnhancing our journalistic skills," the New York Times editorial guide reads.
"Similarly, with the help ofDigital Voice Articles, Cross-Language Translation, and the Future of Unexplored Generative AI Applications, the New York Times will become even more accessible. We see this technology as a powerful tool that, like many other technological advances, will fuel our mission, not some magical solution."
The New York Times says it has approved a range of AI tools for use by its editorial and product teams, including GitHub Copilot as a programming assistant, Google's Vertex AI, NotebookLM, the Times' own ChatExplorer, some Amazon AI products, and OpenAI's non-ChatGPT API accessed through the Times' business account (which requires approval from the company's legal department). The company also announced "Echo," an internal beta tool that succinctly summarizes Times articles, briefings, and interactives.
The paper encourages editorial staff to use these AI tools to generate SEO headlines, summaries, and audience promotion copy; to suggest edits, interview questions, and ideas; to ask questions about reporters' own documents; to assist with research; and to analyze the Times' own documents and images.
In a series of training documents, the editorial guidelines list possible use cases for journalists, for example:
- "How many times does Al appear in these New York Times stories?"
- "Can you revise this text to make it more concise?"
- "Suppose you were to post this article on Facebook, how would you promote it?"
- "Write a short summary of this article in concise, conversational language appropriate for a newsletter."
- "Can you suggest five search-optimized titles for this article?"
- "Can you briefly summarize this play by Shakespeare?"
- "Can you summarize this government report in layman's terms?"
However, the company also sets limits on its use of AI, emphasizing the risks of copyright infringement and of exposing sources.
The company told editorial staff that AI should not be used to draft or significantly revise articles; that third-party copyrighted material, and especially confidential source information, should not be fed into AI tools; that AI must not be used to circumvent the paywall; and that machine-generated images or video may not be published unless they are used to demonstrate the technology and are clearly labeled. Unapproved AI tools, if used improperly, could cause The New York Times to lose its legal right to protect sources and notes, the company said.
As 1AI has previously reported, The New York Times remains locked in a legal battle with OpenAI, alleging that OpenAI's unauthorized use of Times content for model training constitutes serious copyright infringement. Microsoft, OpenAI's largest investor, has said the Times' move suppresses technological innovation.