Bloomberg Has a Rocky Start With A.I. Summaries

Bloomberg, the financial news powerhouse, has been experimenting with using artificial intelligence to help produce its journalism.

It hasn’t always gone smoothly.

The news outlet has had to correct at least three dozen A.I.-generated summaries of articles published this year. One such correction came on Wednesday, when Bloomberg broke news about President Trump’s auto tariffs.

The article correctly reported that Mr. Trump would announce the tariffs as soon as that day. But the bullet-point summary of the article written by A.I. inaccurately said when a broader tariff action would take place.

Bloomberg is not alone in trying A.I. — many news outlets are figuring out how best to embrace the new technology and use it in their reporting and editing. The newspaper chain Gannett uses similar A.I.-generated summaries on its articles, and The Washington Post has a tool called “Ask the Post” that generates answers to questions from published Post articles.

And problems have popped up elsewhere. Earlier this month, The Los Angeles Times removed its A.I. tool from an opinion article after the technology described the Ku Klux Klan as something other than a racist organization.

Bloomberg News said in a statement that it publishes thousands of articles each day, and “currently 99 percent of A.I. summaries meet our editorial standards.”

“We’re transparent when stories are updated or corrected, and when A.I. has been used,” a spokeswoman said. “Journalists have full control over whether a summary appears — both before and after publication — and can remove any that don’t meet our standards.”

The A.I. summaries are “meant to complement our journalism, not replace it,” the statement added.

Bloomberg announced on Jan. 15 that it would roll out the A.I.-generated summaries on top of news articles. The summaries consist of three bullet points condensing the main points of the article.

John Micklethwait, Bloomberg’s editor in chief, laid out the thinking about the A.I. summaries in a Jan. 10 essay, which was an excerpt from a lecture he had given at City St. George’s, University of London.

“Customers like it — they can quickly see what any story is about. Journalists are more suspicious,” he wrote. “Reporters worry that people will just read the summary rather than their story.”

But, he acknowledged, “an A.I. summary is only as good as the story it is based on. And getting the stories is where the humans still matter.”

One summary was removed from a March 6 article because it inaccurately stated that Mr. Trump had imposed tariffs on Canadian goods last year, instead of this year. Another, on a March 18 article about managers of sustainable funds, “failed to distinguish between actively and passively managed funds, providing incorrect figures as a result,” according to a correction.

Other errors have included incorrect figures, incorrect attribution and references to the wrong U.S. presidential election.

For Wednesday’s scoop on Mr. Trump’s tariff announcement, a correction was soon appended noting that the summary had been removed because it had “misstated when the broader tariff action was to take place.” An updated version of the correction attributed the error to a lack of “attribution on tariff timing.”

The Bloomberg spokeswoman said feedback on the summaries had been positive in general, “and we continue to refine the experience.”
