Social Contracts Seen as Next Step in Responsible AI
Collaboration with stakeholders, not just the PR gloss of guidelines, is the aim of recent social contract efforts to ensure AI systems are safe and fair.
By John P. Desmond, Editor, AI in Business
Social contracts between those developing AI systems and their customers, investors, employees, and citizens are needed to move beyond the commitments expressed in Responsible AI guidelines.
“Companies have to earn a social license to use AI before they deploy the technology, and they have to ensure that they retain the social license over time,” stated François Candelon, lead author of a recent account in Fortune on the subject. Candelon is a senior partner at Boston Consulting Group, focused on the geopolitical and economic impact of the Digital Revolution.
The idea of a social contract for AI is aimed at fostering trust in AI, which is sorely needed. In the US, “People don’t trust companies and governments to collect, store, and analyze personal data, especially about their health and movements,” Candelon stated.
The idea of a social license originated fairly recently in the mining industry. In the book, The Social License: How to Keep Your Organization Legitimate, by Leeora Black and John Morrison, a social license is defined as “the negotiation of equitable impacts and benefits in relation to its stakeholders over the near and longer term. It can range from the informal, such as an implicit contract, to the formal, like a community benefit agreement.”
Companies earn a social license through consistent and trustworthy behavior. “Thus, a social license for AI will be a socially constructed perception that a company has secured the right to use the technology for specific purposes in the markets in which it operates,” Candelon stated.
The authors described three pillars of a company’s social license to use AI systems: fairness and transparency, being able to explain how the algorithms work and show that they are fair; benefit, demonstrating that the advantages of using AI outweigh the costs, such as security and privacy tradeoffs; and trust and accountability, for example in how data is collected and in the algorithm’s decisions.
Making the case, the authors state, “Companies are bound to need a legal license from government; an economic license from investors; and, above all, a social license from stakeholders if they wish to use AI at scale.”
Fujitsu Forms AI Ethics and Governance Office
Evidence of the growing seriousness with which companies pursue ethical AI is the recent announcement by Fujitsu that it has established an AI Ethics and Governance Office to oversee the “safe and secure deployment” of AI and machine learning technologies.
The office will be led by Juishi Arahori, former head of Fujitsu’s Digital Technology Promotion Legal Office, according to an account in ITPro.
Commenting on Fujitsu’s news, Michelle Seng Ah Lee, the AI Ethics Lead at Deloitte, stated that it was encouraging to see Fujitsu take steps to enhance the governance of AI systems “beyond agreement on the principles.”
“It represents an increasing consensus in industry that AI systems may pose new risks and ethical considerations to businesses and to our society, which require robust governance and monitoring to be in place,” she stated.
Peter van der Putten, an associate professor of AI at Leiden University, sees the news from Fujitsu as representing a shift in attitudes on ethical AI.
“It is very much in line with how 2022 will be the year we see ethical AI move beyond ‘fluffy policy’ and become embedded in tangible tools and actual law and regulations,” he stated to ITPro.
“It’s been a fashionable trope in recent years for both businesses and governments to talk about using AI in a way that is both ethical and responsible,” he stated. “Organizations have become well-versed and even better practiced at telling us how important this is, yet actual steps to ensure their impact have been much rarer.”
Pew Research Center Study Results Not So Optimistic About Ethical AI
Recent research from the Pew Research Center is not so optimistic about the future of ethical AI. Working with Elon University’s Imagining the Internet Center, the researchers asked over 600 technology innovators, developers, and business and policy leaders the following question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?
The answer from 68 percent of respondents: no, according to the Pew Research Center account.
Written elaborations of the responses fleshed out the reasoning into categories of “worries” and “hopes.” In the worries category, the main developers and deployers of AI are seen as primarily seeking profits, and no consensus exists on what ethical AI looks like.
In the hopes category, progress is being seen as AI spreads and shows its value.
Challenging issues include “striking a balance between screen time and real-world time;” and “parsing, analyzing and improving the use of tracking data to ensure individual liberty.”
“Our ethical book is half-written,” stated respondent Barry Chudakov, founder and principal of Sertain Research, a market research firm that employs emotion recognition technologies. “Many of our ethical frameworks have been built on dogmatic injunctions: Thou shalt and shalt not … big data effectively reimagines ethical discourse … for AI to be used in ethical ways, and to avoid questionable approaches, we must begin by reimagining ethics itself.”
(Email the Editor here.)