Mistakes With Social Media Have Lessons for AI
Bruce Schneier and Nathan E. Sanders challenge stewards of AI to avoid bringing harm to society; they see the solution as hinging on setting limits
By John P. Desmond, Editor, AI in Business
Writing recently in MIT Technology Review, authors Bruce Schneier and Nathan E. Sanders warn that social media has fundamental attributes that have harmed society, and that “AI also has those attributes.”
Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University; Schneier is a security technologist and a lecturer at the Harvard Kennedy School.
Among the five attributes of social media the authors cite as harmful is “surveillance,” rooted in the personalization of advertising. “It’s hard to imagine how much spying is going on,” the authors stated. They cited a Consumer Reports analysis of Facebook that found each user’s web activity is tracked by more than 2,200 companies on Facebook’s behalf.
“The possibility of manipulation is only going to get greater as we rely on AI for personal services,” the authors stated, citing personal digital assistants as an example, since they require “more intimacy” than other digital tools such as email, phone and cloud storage systems. “To most effectively work on your behalf, it will need to know everything about you,” they added.
For those who choose not to use a digital assistant, AI tech wired into customer service systems will enable companies to “surreptitiously extract personal data by asking you mundane questions.” The authors conclude, “Exposure to this kind of inferential data harvesting may become unavoidable.”
The authors have suggestions for mitigating the risks of AI, starting with government regulation. “The biggest mistake we made with social media was leaving it as an unregulated space,” the authors stated.
Automakers Feeding Driving Data to Insurance Companies
One example of a new kind of surveillance involves auto manufacturers and auto insurance companies. A recent account in The New York Times described the experience of Kenn Dahl, who lives in the Seattle area and drives a Chevrolet Bolt. He was surprised in 2022 when his auto insurance went up by 21 percent; an agent told him his report from LexisNexis, a global data broker, was a factor.
So he asked for the report and was sent a 258-page disclosure document, which LexisNexis is required to provide under the Fair Credit Reporting Act. The report devoted 130 pages to detailing each time he or his wife had driven the Bolt in the past six months, listing the dates of 640 trips, their start and end times, the distance traveled and any instances of speeding, hard braking or sharp acceleration. It did not include where they drove the car.
The trip information was provided by General Motors, maker of the Bolt. LexisNexis analyzed the data to create a risk score “for insurers,” a LexisNexis spokesman said. Dahl stated, “It felt like a betrayal. They’re taking information that I didn’t realize was going to be shared and screwing with our insurance.”
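Neither GM nor LexisNexis explains how such telematics data is turned into a score, and the actual model is proprietary. Purely as a hypothetical illustration, the sketch below (in Python, with invented trip fields and weights) shows how per-trip records like those in Dahl’s report might be rolled up into a single driver risk score; it is not LexisNexis’s method.

```python
# Hypothetical illustration only: the Trip fields, weights and 0-100 scale are
# invented for this sketch and do not describe LexisNexis's proprietary scoring.
from dataclasses import dataclass

@dataclass
class Trip:
    miles: float            # distance traveled on the trip
    speeding_events: int    # count of speeding incidents
    hard_brakes: int        # count of hard-braking incidents
    sharp_accels: int       # count of sharp-acceleration incidents

def risk_score(trips: list[Trip]) -> float:
    """Return a 0-100 score where higher means riskier (illustrative weights)."""
    total_miles = sum(t.miles for t in trips) or 1.0
    # Events per 100 miles driven, weighted by an assumed severity.
    rate = lambda field: 100.0 * sum(getattr(t, field) for t in trips) / total_miles
    raw = 3.0 * rate("hard_brakes") + 2.0 * rate("speeding_events") + 1.5 * rate("sharp_accels")
    return min(100.0, raw)

if __name__ == "__main__":
    sample = [Trip(12.4, 1, 0, 2), Trip(30.1, 0, 2, 1), Trip(5.6, 0, 0, 0)]
    print(f"risk score: {risk_score(sample):.1f}")
```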
Automakers, data brokers and insurers will say they have permission from the driver to collect the information. “But the existence of these partnerships is nearly invisible to drivers, whose consent is obtained in fine print and murky privacy policies that few read,” the article’s author stated.
The collection of driving data from consumers’ cars is getting the attention of lawmakers. Senator Edward Markey of Massachusetts, for example, last month urged the Federal Trade Commission to investigate it.
“The ‘internet of things’ is really intruding into the lives of all Americans,” Senator Markey stated to the Times. “If there is now a collusion between automakers and insurance companies using data collected from an unknowing car owner that then raises their insurance rates, that’s, from my perspective, a potential per se violation of Section 5 of the Federal Trade Commission Act.”
Driver Has No Idea How He Signed Up for OnStar Smart Driver
A few days after the Times account was published, a driver in Florida filed suit against General Motors and LexisNexis Risk Solutions, accusing the companies of violating privacy and consumer protection laws, according to a follow-up account in The New York Times. His insurance rates had recently doubled; when he tried to find out why, he was sent a report detailing 258 trips he had taken in the previous six months.
In the complaint, driver Romeo Chicco stated that he called GM and LexisNexis multiple times to ask why his data had been collected without his consent. He was eventually told that his data had been shared via OnStar and that he had signed up for OnStar’s Smart Driver program; Chicco said he had never signed up for the program, though he had downloaded the MyCadillac app from General Motors.
“What no one can tell me is how I enrolled in it,” Chicco stated to the Times in an interview. “You can tell me how many times I hard-accelerated on Jan. 30 between 6 a.m. and 8 a.m., but you can’t tell me how I enrolled in this?”
The term “surveillance capitalism” was introduced into the lexicon by Shoshana Zuboff, who published “The Age of Surveillance Capitalism” in 2019. The book is an account of how Google, Facebook and other big tech companies essentially seized property in the form of data from individuals who were largely unaware they were granting permission. That data is the foundation of the advertising revenue on which these companies have built their empires.
An excerpt from the book: “Surveillance capitalism’s products and services are not the objects of a value exchange. They do not establish constructive producer-consumer reciprocities. Instead, they are the ‘hooks’ that lure users into their extractive operations in which our personal experiences are scraped and packaged as the means to others’ ends.”
One observer advises not to give your data to AI. Writing recently in Dark Reading, Kit Merker, CEO of Plainsight Technologies, stated, “Every AI company seems to have an insatiable appetite for the world's precious data. These companies are eager to train their proprietary AI models using any available images and videos, employing tactics that sometimes involve inconveniencing users, like CAPTCHAs making you identify traffic lights. Unfortunately, this clandestine approach has become the standard playbook for many AI providers, enticing customers to unwittingly surrender their data and intellectual contributions, only to be monetized by these companies.”
Giving up your data to AI systems comes with inherent risk, which needs to be addressed with greater transparency. “AI companies should be obligated to clearly outline how they intend to use your data and for what specific purposes,” stated Merker, whose company offers vision intelligence systems.
AI companies, he added, need to get permission to use business or personal data: “Permission should be nonnegotiable. AI companies must seek explicit consent from businesses before utilizing their data. This not only upholds ethical standards but also establishes a foundation of trust between companies and AI providers.”
That seems reasonable, if perhaps idealistic, in the age of surveillance capitalism.
Read the source articles and information in MIT Technology Review, in The New York Times and in Dark Reading.