
What Can You Do When AI Lies About You?
Invasion of the ‘Frankenpeople’ identity mashups is underway; guard against generative AI fictions with content moderation, prompt engineering, and using Bing to fact-check ChatGPT
By John P. Desmond, Editor, AI in Business

When AI lies about you, your options are limited.
It happened to Marietje Schaake, a Dutch politician who served for 10 years in the European Parliament and is now an international policy director at Stanford University's Cyber Policy Center. A colleague was using BlenderBot 3, a research project by Meta, when in response to a question, the agent characterized Schaake as a terrorist.
“I’ve never done anything remotely illegal, never used violence to advocate for any of my political ideas, never been in places where that’s happened,” stated Schaake in a recent account in The New York Times. “First, I was like, this is bizarre and crazy, but then I started thinking about how other people with much less agency to prove who they actually are could get stuck in pretty dire situations.”
Updates to BlenderBot seemed to fix the issue for Schaake, who did not consider suing Meta, partly because she disdains lawsuits and partly because she had no idea where to start. Meta closed the BlenderBot project in June; the company launched its LLaMA large language model in February. Meta responded to the Times, saying that its research model had combined two unrelated pieces of information into an incorrect sentence about Schaake.
Others who feel wronged are not so averse to taking legal action, although legal precedents are few to non-existent and what laws are in place are fairly new. The Times assembled a summary of recent cases: an aerospace engineer filed a defamation case against Microsoft this summer, saying the Bing chatbot combined his biography with that of a convicted terrorist with a similar name; Microsoft declined comment.
In June, a radio host sued OpenAI saying ChatGPT invented a lawsuit against him while he was an executive at a company where he never worked. OpenAI declined to comment on specific cases. The company did state in a court filing asking for the suit to be dismissed that “there is near universal consensus that responsible use of A.I. includes fact-checking prompted outputs before using or sharing them,” the Times reported.
When AI hallucinates fake biographical details and mashes up identities, some researchers refer to the creations as “Frankenpeople,” and surmise that such mashups can result when little information about a specific person is available online.
AI Incidents Up Since Release of ChatGPT
Since the release of ChatGPT in November, reported incidents involving AI have risen. The AI Incident Database has logged more than 550 entries this year, including deepfake videos and images, the Times reported. Scott Cambo, a data scientist who oversees the Incident Database, expects “a huge increase of cases” involving the mischaracterization of real people in the near future.
“Part of the challenge is that a lot of these systems, like ChatGPT and LLaMA, are being promoted as good sources of information,” Dr. Cambo stated. “But the underlying technology was not designed to be that.”
Nationally recognized legal scholar Jonathan Turley was maligned by ChatGPT this spring, when a fellow law professor asked ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list, according to an account in The Washington Post.
The ChatGPT output stated that an incident happened on a class trip to Alaska, and it cited a March 2018 article in The Washington Post as a source. But no such article existed, and there had never been a class trip to Alaska. Though experienced as a regular commentator in the media, Turley this time had no journalist or editor to call and no clear way to correct the record.
“It was quite chilling,” he stated to The Post. “An allegation of this kind is incredibly harmful.”
Confidently Making Up Facts
The confidence with which ChatGPT answers belies its tendency to make up facts.
“Because these systems respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods,” stated Kate Crawford, a professor at the University of Southern California’s Annenberg School, and a senior principal researcher at Microsoft Research.
OpenAI spokesperson Niko Felix said in a statement in response to a query from The Post, “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.” The Post article was published in April.
However, “It’s that very specific combination of facts and falsehoods that makes these systems, I think, quite perilous if you’re trying to use them as fact generators,” stated Professor Crawford.
As to who is responsible when an AI chatbot generates false information, the jury is out. “We just don’t know,” stated Jeff Kosseff, a professor at the Naval Academy and an expert on online speech, to The Post. “We’ve not had anything like this before.”
The Federal Trade Commission in mid-July opened an investigation into OpenAI’s ChatGPT and the potential risk it poses to consumers. (See AI Regulation Heats Up in the US and the EU, in AI in Business, July 21 issue.) The FTC is probing whether OpenAI “engaged in unfair and deceptive privacy or data security practices,” or engaged in practices “relating to risks of harm to consumers, including reputational harm.”
Examining Content Moderation, Fact-Checking Tools
Now, to play defense against ChatGPT, users need to know about content moderation. OpenAI released a Moderation API before the release of ChatGPT in November. Writing recently in Towards Data Science, data scientist Idil Ismiguzel stated, “Any system that leverages large language models and relies on user-generated or AI-generated content, should perform content moderation and automate the process of identifying and filtering out inappropriate or offensive content.”
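To make that concrete, here is a minimal sketch of a call to OpenAI’s Moderation endpoint, hitting the REST API directly with the requests library rather than any particular SDK version; the URL and response fields reflect OpenAI’s published documentation, but confirm against the current docs before building on this.

```python
# Minimal sketch: screen a piece of text with OpenAI's Moderation endpoint.
# Assumes an OPENAI_API_KEY environment variable; verify endpoint details
# against OpenAI's current documentation.
import os
import requests

def moderate(text: str) -> dict:
    """Send text to the moderation endpoint and return the first result."""
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["results"][0]

result = moderate("Some user- or AI-generated text to screen.")
print(result["flagged"])           # True if any category crossed its threshold
print(result["category_scores"])   # per-category scores (hate, harassment, ...)
```

Calling the REST endpoint directly keeps the sketch independent of SDK version changes.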
The tool returns scores indicating whether content is flagged for hate, harassment, self-harm, sexual content, or violence; it does not appear to verify facts. Another approach, GPTCheck, leans on search: highlight a sentence, and the tool queries Google with it and produces a similarity score, for what that is worth.
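To illustrate the general idea of such a similarity score (a hedged sketch, not GPTCheck’s actual implementation), the snippet below compares a generated sentence against a placeholder list of search-result snippets; in practice, the snippets would come from whatever search API you wire in.

```python
# Illustrative sketch only: score how closely a model-generated sentence
# matches text found on the web. The snippets list is a stand-in for real
# search results from a search API of your choosing.
from difflib import SequenceMatcher

def best_similarity(sentence: str, snippets: list[str]) -> float:
    """Return the highest character-level similarity between the sentence and any snippet."""
    return max(
        (SequenceMatcher(None, sentence.lower(), s.lower()).ratio() for s in snippets),
        default=0.0,
    )

claim = "The politician served ten years in the European Parliament."
snippets = [
    "She served for 10 years in the European Parliament before joining Stanford.",
    "An unrelated article about a different subject entirely.",
]
print(f"Best similarity: {best_similarity(claim, snippets):.2f}")
```

A high score only means similar wording exists somewhere on the web; it is not verification of the underlying claim, which is why the caveat applies.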
Another suggestion, from How-To Geek, is to check the ChatGPT output with Bing AI from Microsoft, ironically. Written by technology writer Sydney Butler, the account states, “ChatGPT will often make up facts.” And if you ask it to include citations, it can make those up too.
“Bing AI chat, on the other hand, is directly connected to the internet and can find any information on the web. That said, Bing AI has strict guardrails put in place that ChatGPT doesn’t have, and you are limited in the number of interactions you can have before wiping the slate clean and starting again.”
After running a check, Bing will return the entry with citations, real ones.
Expect an adjacent market of tools to emerge, aimed at establishing some degree of credibility for the output of generative AI.
Read the source articles and information in The New York Times, the AI Incident Database, The Washington Post, in Towards Data Science and from How-To Geek.
(Write to the editor here; tell him what you want to read about in AI in Business.)