<img src="/Uploads/Blogs/E2/91/IB-FQV6DVRQQ_5268637d.jpg" alt="AI search engines caught in a scandal over false news sourcing"/>
<p>Eight search services based on generative artificial intelligence showed problems in handling news sources, according to a study by the Center for Digital Journalism published in the Columbia Journalism Review. In more than 60% of cases, the AI systems gave incorrect answers to queries about news sources. With more than 25% of Americans already using AI as an alternative to traditional search engines, this raises concerns about the reliability of such platforms.</p>
<p>During testing, the AI systems analyzed real news articles, identifying headlines, sources, publishers, publication dates, and URLs. All models frequently provided plausible but incorrect answers. Paid models such as Perplexity Pro and Grok 3 made errors more often than the free versions.</p>
<p><img src="/Uploads/Wysiwyg/%D0%90%D1%80%D1%82%D0%B5%D0%BC/07022025/Screenshot_30-3.jpg" alt="" width="552" height="314"/></p>
<p>The study also found that some AI platforms ignore crawler directives that prohibit access to certain websites. For example, Perplexity incorrectly identified National Geographic material even though it was blocked from crawling the site.</p>
<p>These problems put publishers in a difficult position: blocking crawlers does not fix the situation, while opening up their resources can lead to a loss of visitors. The chief operating officer of Time expressed hope that the quality of AI systems will improve in the future. OpenAI and Microsoft acknowledged that their systems can make such errors but promised to work on fixes.</p>