
Trust in AI Now Found to Be the CEO's Domain
Respondents to an IBM study of 1,200 executives say top business leaders are primarily responsible for AI ethics and for delivering trustworthy AI, which is seen as a differentiator.
By John P. Desmond, Editor, AI in Business

Where does trust live in the organization?
Who is responsible for ensuring AI systems fielded by a company are trustworthy?
New research suggests that responsibility can only live at the top, in the C-suite.
When asked which function is primarily responsible for AI ethics, 80 percent of respondents to a recent global study conducted by the IBM Institute for Business Value (IBV) pointed to a non-technical executive, such as the CEO.
In a similar survey conducted in 2015, only 15 percent of respondents pointed to top executives as being responsible for AI ethics, according to an account of the study from insideBigData.
IBM founded the IBV in 2002 to work with industry professionals, leading clients, academics and a range of consultants to perform data-driven research to gain insights into emerging trends. The IBV is part of IBM Consulting.
Entitled “AI ethics in action: An enterprise guide to progressing trustworthy AI,” the study was based on a survey of 1,200 executives in 22 countries across 22 industries. The study was conducted in cooperation with Oxford Economics in 2021.
In other highlights, the survey found:
CEOs were seen by 28 percent of respondents as being most accountable for AI ethics, scoring higher than any other management role;
Organizations are implementing AI ethics mechanisms because trustworthy AI is perceived as a strategic differentiator;
More than 75 percent of respondents agreed AI ethics was important, up from 50 percent in 2018;
More than 45 percent reported that their organizations have created AI-specific ethics mechanisms, including a project risk assessment framework and an auditing/review process;
Progress is slow; fewer than 25 percent of organizations have operationalized AI ethics, and fewer than 20 percent agreed that their organization's practices match their stated principles;
Diversity is elusive for AI teams; some 68 percent of respondents said that having a diverse workplace is important to mitigating bias in AI, yet AI teams remain substantially less diverse than their organizations overall, as measured by sex, race and gender identity.
Given that 90 percent of CEOs are male, it’s good they are tuning into AI ethics. However, achieving AI ethics is hard. Computer scientists have to think about bias statistically, and the statistical definitions can be at odds: two measures of fairness between groups may each seem reasonable yet be impossible to satisfy at the same time. Lots of tradeoffs are in play. Moreover, there is no official arbiter of these terms: “bias” has no single agreed-upon definition, and neither does “fairness.”
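To make that tension concrete, here is a minimal sketch with entirely invented numbers (not from the IBM study or Vox) showing how two common fairness measures, equal approval rates and equal true-positive rates, can pull against each other:

```python
# Toy data, invented for illustration: (score, actually_repays) per group.
groups = {
    "A": [(0.9, True), (0.8, True), (0.6, True), (0.5, False), (0.3, False)],
    "B": [(0.85, True), (0.55, False), (0.45, True), (0.35, False), (0.25, False)],
}

def metrics(rows, threshold):
    """Return (approval rate, true-positive rate) at a score cutoff."""
    approved = [r for r in rows if r[0] >= threshold]
    positives = [r for r in rows if r[1]]
    approval_rate = len(approved) / len(rows)
    tpr = sum(1 for r in approved if r[1]) / len(positives)
    return round(approval_rate, 2), round(tpr, 2)

# One shared cutoff: approval rates AND true-positive rates both diverge.
for g, rows in groups.items():
    print(g, "at shared cutoff 0.5:", metrics(rows, 0.5))

# Group-specific cutoffs equalize approval rates (0.4 each), yet the
# true-positive rates remain unequal: fixing one metric breaks another.
print("A at cutoff 0.7:", metrics(groups["A"], 0.7))
print("B at cutoff 0.5:", metrics(groups["B"], 0.5))
```

Neither choice of cutoffs here is “the fair one”; deciding which metric to equalize is exactly the value judgment at issue.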
“We’re currently in a crisis period, where we lack the ethical capacity to solve this problem,” stated John Basl, associate professor of philosophy at Northeastern University, who specializes in emerging technologies, in a recent account from Vox.
The conundrum shapes how companies handle issues around lending algorithms, facial recognition and other AI technologies. The US has varying policies, or in many cases no policies, around these issues. The pharmaceutical industry is regulated; Google and Facebook are not.
Acknowledging this, AI ethics researcher Timnit Gebru, who was let go from her ethics oversight role at Google and subsequently founded the Distributed AI Research Institute, stated that for the regulated industries, “Before you go to market, you have to prove to us that you don’t do X, Y, Z. There’s no such thing for these [tech] companies. So they can just put it out there.”
In the example of FICO scores used to assess creditworthiness, historically disadvantaged groups fare worse. If an AI algorithm were to adjust for that, it would assign different scores to different groups, as a form of reparation for past wrongs. That quickly leads to a political discussion, beyond the usual realm of technologists.
Julia Stoyanovich, director of the NYU Center for Responsible AI, agreed with the idea of having different FICO score cutoffs for different racial groups, because “the inequity leading up to the point of competition will drive [their] performance at the point of competition.” However, it quickly gets sticky: the approach requires collecting data on an applicant’s race, which is a legally protected characteristic.
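The sticking point is easy to see in code. In this hypothetical sketch (the cutoffs, group names and function are all invented for illustration), a group-aware policy literally cannot run unless the protected attribute is collected and passed in:

```python
from typing import Optional

# Hypothetical per-group FICO cutoffs; the numbers are invented.
CUTOFFS = {"group_x": 620, "group_y": 680}
DEFAULT_CUTOFF = 680  # single cutoff used when the group is unknown

def approve(fico_score: int, group: Optional[str]) -> bool:
    """Group-aware approval: applying different cutoffs requires the
    protected attribute; without it, only a single cutoff is possible."""
    if group is None:
        return fico_score >= DEFAULT_CUTOFF
    return fico_score >= CUTOFFS[group]

print(approve(650, "group_x"))  # True: clears the lower group_x cutoff
print(approve(650, None))       # False: attribute not collected, single cutoff
```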
Algorithms inevitably embed value judgments about what to prioritize. “It is inevitable that values are encoded into algorithms,” stated Arvind Narayanan, a computer scientist at Princeton, to Vox. “Right now, technologists and business leaders are making those decisions without much accountability.”
The path is likely leading to more regulation of AI development. “We need more regulation,” Stoyanovich stated. “Very little exists.”
US Congress to the rescue. Sen. Ron Wyden (D-OR) has cosponsored the Algorithmic Accountability Act of 2022, which, if passed, would require companies to conduct assessments for bias, though it would stop short of requiring companies to build in fairness in a specific way. Stoyanovich commented, “We also need much more specific pieces of regulation that tell us how to operationalize some of these guiding principles in very concrete, specific domains.”
She worked as a consultant to New York City on a law passed to regulate the use of automated hiring systems, which help evaluate applications and make recommendations. The law stipulates that employers can use such AI systems only after they have been audited for bias, and that job seekers are entitled to an explanation of the factors behind the AI’s recommendation.
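The article does not specify what such an audit must compute. One common disparate-impact check that bias audits of hiring tools typically draw on is the impact ratio: each group’s selection rate divided by the most-selected group’s rate, often flagged against the four-fifths rule of thumb. A minimal sketch on synthetic counts:

```python
# Hypothetical audit counts: (recommended by the tool, total applicants).
selections = {
    "group_a": (40, 100),
    "group_b": (25, 100),
    "group_c": (10, 50),
}

# Selection rate per group, then each rate relative to the highest one.
rates = {g: sel / total for g, (sel, total) in selections.items()}
best = max(rates.values())
impact_ratios = {g: r / best for g, r in rates.items()}

for g, ratio in impact_ratios.items():
    flag = "  <-- below 0.8 (four-fifths rule of thumb)" if ratio < 0.8 else ""
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {ratio:.2f}{flag}")
```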
As is so often the case in privacy and big tech regulation, the European Union is setting the pace: in March it finalized rules in its Digital Markets Act that would prevent, for example, a user’s personal data from being combined for ad targeting unless the user gives “explicit consent.” (See AI in Business, The EU’s Digital Markets Act Sets the Pace for Big Tech Regulation.)
That would prevent Google from combining an individual’s YouTube viewing, online searches, Maps travel history and Gmail conversations, all now used to build profiles that serve up personalized ads, unless the user agrees to each use.
Also, online services would have to ensure that users can opt out as easily as they sign up. “The gatekeepers will now have to take responsibility,” stated Margrethe Vestager, executive vice president at the European Commission.
Read the source articles and information from insideBigData, Vox and AI in Business.
(Write to the editor here.)