‘AI Fatigue’ Sets In, and Copyright Infringement Cases Roll On
Rapid growth of AI resulting in ‘constant barrage’ of information, pitches and promises, contributing to growing skepticism of AI among American workers
By John P. Desmond, Editor, AI in Business

For many people, AI fatigue has set in.
Even Google search, long a reliable constant, is changing. Ed tech company Chegg has filed a suit saying the ‘AI Overviews’ offered by Google have eroded its business, siphoning off traffic that previously went directly to the Chegg site.
“The fervor surrounding artificial intelligence (AI) is unprecedented. Many people are beginning to feel overloaded,” stated Neil Sahota, an IBM Master Inventor, an advisor to the United Nations on AI and author of the best-selling book “Own the AI Revolution,” in a recent post on his website. Many of his clients are in government.
He defines AI fatigue as “the mental and emotional exhaustion individuals and organizations experience due to the constant barrage of information, sales pitches, and lofty AI technology promises.” The fatigue is compounded by the rapid growth of AI, constant interaction with the technologies and increased pressure to do more with less in general.
The gap between the high expectations for AI and the actual outcomes he has seen working with some government clients can lead to doubt and burnout among government officials and employees.
“The need to solve more pressing, real-world problems with the current government IT systems and the desire to embrace the newest AI technology are also at odds,” he stated.
He sees the main causes of AI fatigue as:
Constant technological change;
Unrealistic expectations;
Lack of understanding.
In a domain such as healthcare, “where accuracy matters,” sometimes-unreliable results from an AI system must be validated. “Relying too heavily on AI can weaken critical thinking and decision-making abilities, causing individuals to feel as though they are just extensions of the technology they interact with,” Sahota stated.
American workers are skeptical of AI. A survey conducted in October by Pew Research Center of 5,300 Americans in full- or part-time jobs found that fewer than one-third were “excited” about the use of AI at work and only six percent believed that AI will lead to more opportunities for them at work.
Only 16 percent of these workers were using AI at least some of the time at work. And only a minority of those who use AI chatbots at work found them “extremely helpful in letting them work more quickly or at a higher quality,” according to an account in The Washington Post.
Trust Trade-off
Whether to trust the results of online search is increasingly coming into question as new AI methods are tried.
One observer sees that the introduction of AI into search has caused marketing professionals to “rethink the entire playbook” since ChatGPT was released to the market in November 2022.
Now, “Instead of entering keywords and clicking through multiple articles, you get neatly summarized information—immediate, easily accessible, and right at your fingertips,” stated Iva Vlasimsky, founder of Narativa Communications, based in Croatia, via a post on LinkedIn.
And now users see “AI Overviews” at the top of the Google search results page, providing a summary and a degree of abstraction from the source material. She cited research from Gartner predicting that the volume of “traditional” searches will drop by 25% by 2026, with much of the market shifting to AI chatbots and virtual assistants.
How accurate are the results of this different search animal? Vlasimsky reported on a month-long study in which BBC researchers gave ChatGPT, Google’s Gemini, Microsoft Copilot and Perplexity AI access to the BBC website. The chatbots were asked questions about the news, with instructions to use BBC News articles as sources.
The results were “alarming,” she reported. They showed 51% of the AI answers had ‘significant errors of some kind’ and 91% of responses contained ‘some real issues.’
“The most common problems were factual inaccuracies, false source citations, and a lack of context—everything a reliable research tool should provide,” Vlasimsky said.
She cited another study, by Dr. Michael Gerlich of SBS Swiss Business School in Zurich, showing that people who use AI tools are losing a degree of their capacity for critical thinking. The study surveyed over 600 participants of different ages and education backgrounds.
The researchers found that AI tools can enhance learning through personalized teaching and immediate feedback, but over-reliance on AI tools can lead to AI doing the thinking instead of the student working through the problem. “This is particularly problematic for developing critical thinking, which requires active engagement and analysis of information,” she stated.
So AI is turning us into dummies, great. Is it also ripping off copyright holders?
Whether large language models are illegally capturing copyrighted content continues to be a question in the courts.
A federal judge recently found that AI training on copyrighted content was an infringement, ruling that the fair use defense did not apply.
According to an account in PYMNTS, Judge Stephanos Bibas ruled that Ross Intelligence violated copyright law by using Thomson Reuters’ copyrighted headnotes to train its AI-powered legal research tool. The judge ruled the company’s use of the material did not qualify as “fair use” under copyright law.
The ruling is a departure from a 2023 decision by Judge Bibas that mostly denied Thomson Reuters’ motions; the case now continues to a trial stage.
A bulletin from the Taft law firm stated that the judge’s decision was a departure from previous copying cases against Google and Sega, and while Taft expects the defendant company to appeal, “This opinion is one of the first to evaluate the use of copyrighted content to train and build AI tools. Other courts are likely to look at its reasoning.”
‘Eroding the internet’
One company sees Google AI Overviews as “eroding the internet” and filed suit as a result. Chegg, Inc., which offers online education services to students, filed suit recently in Washington, DC, alleging that Google is co-opting publishers’ content to keep users on its own site, according to a recent account from Reuters.
The Santa Clara, California-based company stated that Google’s AI overviews have caused a drop in visitors and subscribers, so much so that the company is considering a sale or take-private transaction, Chegg CEO Nathan Schultz stated. Google spokesperson Jose Castaneda defended the company, stating, “With AI Overviews, people find Search more helpful and use it more, creating new opportunities for content to be discovered.”
CEO Schultz sees that Google is profiting off Chegg’s content without paying for it. “Our lawsuit is about more than Chegg,” he said. “It’s about the digital publishing industry, the future of internet search, and about students losing access to quality, step-by-step learning in favor of low-quality, unverified AI summaries.”
AI developers are also being challenged by the newspaper industry. The News/Media Alliance, a nonprofit representing more than 2,200 news, magazine and digital media organizations, filed a copyright and trademark infringement case against Cohere, Inc. in the Southern District of New York in February.
Danielle Coffey, president of the News/Media Alliance, stated in an opinion column published recently in the Fort Worth Star Telegram (via AOL), “We allege that Cohere Inc. is using AI to commit massive, systematic copyright and trademark infringement.”
The suit described Cohere’s copying practices as brazen. “There’s nothing subtle about Cohere’s many acts of what we believe to be theft. The suit details verbatim lifting of articles, even the posting of a story within an hour of publication,” Coffey stated.
She noted the industry has tried multiple ways to protect its digital assets. “Nonetheless, the overwhelming benefit has gone to AI companies, and we in the media business have been left with a fraction of the value that our own content has created,” she stated.
She compared the case to that of The New York Times against Microsoft and OpenAI, filed in December 2023 and now making its way through the courts, alleging copyright infringement in the training of their generative AI products. She also compared it to the suit filed by News Corp against Perplexity.ai in October 2024 for copyright infringement. That suit alleges Perplexity used copyrighted works from The Wall Street Journal to populate its retrieval-augmented generation database, in the process diverting revenue from the publisher’s website.
In its response, Perplexity stated, “We believe that tools like Perplexity provide a fundamentally transformative way for people to learn facts about the world,” and said it would defend its business model.
Coffey, in her opinion column, stated, “Artificial intelligence will help us better serve our customers, but only if it respects intellectual property. That’s the remedy we’re seeking in court.”
Read the source articles and information in a recent post on Neil Sahota’s website, in The Washington Post, on Iva Vlasimsky’s LinkedIn page, in PYMNTS, from Reuters and from the Fort Worth Star Telegram (via AOL).