Trust in AI is Elusive as Deployment March Continues
Skepticism about AI is ensnaring companies offering AI products and services, putting brands at risk; the Data & Trust Alliance issues a governance roadmap
By John P. Desmond, Editor, AI in Business
Trust in AI is a function of how well the data feeding AI models can be trusted, and of how much customers trust the brands offering AI-based services.
One thing is clear: businesses are moving headlong into the implementation of AI systems, many feeling pressure to do so, despite any doubts or fears they have about the technology.
Responses to an annual survey by Monte Carlo, a company focused on the validity of data, show that nearly 100 percent of data teams are actively pursuing AI applications, while 68 percent are not completely confident in the data that powers the systems.
Conducted by Wakefield Research for Monte Carlo, the 2024 State of Reliable AI Survey polled 200 data leaders and professionals in the spring of this year. Among the findings:
100 percent of data professionals feel pressure from their leadership to implement a generative AI strategy and/or to build GenAI products;
91 percent of data leaders have built or are currently building a GenAI product;
82 percent rated the potential usefulness of GenAI at least 8 on a 1-10 scale, but 90 percent believe their executives do not have realistic expectations about the new tech, suggesting “a troubling disconnect” between data teams and business stakeholders, the report authors stated.
Organizations have been dealing with exponentially greater volumes of data than in decades past, even before the dawn of GenAI, the report noted.
“Data is the lifeblood of all AI – without secure, compliant, and reliable data, enterprise AI initiatives will fail before they get off the ground,” stated Lior Solomon, VP of Data for Drata. “Data quality is a critical but often overlooked component of ensuring ethical and accurate models.” Drata offers a compliance management platform.
The survey respondents were approaching AI projects with four evenly distributed data strategies: building their own large language model; using models-as-a-service products such as those from OpenAI or Anthropic; implementing a retrieval-augmented generation (RAG) architecture; or fine-tuning models-as-a-service offerings or their own LLMs.
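To make the RAG strategy concrete, here is a minimal sketch of the retrieval step: documents and the query are embedded, the closest documents are found, and they are prepended to the prompt sent to a model-as-a-service LLM. The bag-of-words "embedding" and the prompt format here are illustrative stand-ins, not any vendor's actual API.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Data provenance records where training data came from.",
    "Fine-tuning adapts a pretrained model to a narrow task.",
    "RAG grounds model answers in retrieved documents.",
]
print(build_prompt("what is data provenance", corpus))
```

In production the prompt would be passed to a hosted LLM; the point of the architecture is that answers are grounded in retrieved, traceable documents rather than only in the model's training data.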
Survey Finds Most Americans Don’t Trust AI
Some 80 percent of Americans do not trust businesses to use AI responsibly, according to a survey conducted last year by Bentley University with the polling company Gallup.
While businesses of all sizes are rushing to implement AI, consumers do not see enough guardrails in place for its responsible use, according to an account of the survey on tech.co. The majority, 79 percent, reported trusting businesses “not much” or “not at all” to adopt AI responsibly, the survey found. Only one in 10 Americans who responded believed that AI does more good than harm. The remaining 90 percent were split between AI doing an equal amount of harm and good, 50 percent, and more harm than good, 40 percent.
The Bentley-Gallup Business in Society Report was based on a web survey with 5,458 US adults carried out in May 2023, using the Gallup Panel. The respondents were asked about their feelings on AI, both in the workplace and as consumers.
Similar results were found by the 2024 Edelman Trust Barometer annual report, based on a survey of 32,000 respondents in 28 countries on trust amid rapid innovation. Some 30 percent of respondents embraced AI and 35 percent rejected it.
Most people, 76 percent, trust the tech sector, making it the most trusted business sector globally. However, “Tech is losing its trust lead,” going from most-trusted in 90 percent of markets surveyed in 2016, to about half the markets this year. “This is a precarious moment because people are more uncertain about the sector just as AI becomes ubiquitous,” stated the author of the report, Margot Edelman, in an account from the World Economic Forum. She is the US tech sector co-lead for Edelman, a public relations and communication services firm.
Over the past five years, trust in AI companies has declined from 62 percent to 54 percent of respondents. “AI has a more negative reputation than technology writ large,” the report stated. The top reasons cited for a decline in trust of AI are concerns about privacy, inadequate testing and evaluation before being released to the market, and the potential harm AI could cause to people and society. “People’s fears are more related to fundamental societal issues than personal concerns,” Edelman stated.
She had several suggestions for how AI companies can build trust in AI. “Earning broad trust in and acceptance of AI may seem like a long road ahead, but it is possible,” she stated.
Data & Trust Alliance Formed, Issues Provenance Standards
Beyond public relations, on the technology front, efforts are going into building trust in AI through data provenance: data management practices that help ensure data integrity and reliability. Toward that end, a Data Provenance Standards initiative was announced last November by the Data & Trust Alliance, a group of 19 tech organizations aiming to verify data trustworthiness.
The initiative came about a year after the release of ChatGPT by OpenAI, a release that, as it has turned out, put the ethical use of AI in the spotlight. The Alliance is a nonprofit, cross-industry consortium aiming to develop practices for the responsible use of data and AI.
“As businesses scale and accelerate the impact of AI with trusted data, it is necessary to ensure the technology is developed and deployed responsibly,” stated Rob Thomas, a senior VP with IBM who is chair of the initiative, in an account in CMSWire. “These practical data provenance standards, co-created by senior practitioners across industry, are designed to help ensure AI workflows are compliant with ever-changing government regulations and free of bias.”
The group identifies eight data provenance standards: lineage; source; legal rights; privacy and protection; generation date; data type; generation method; and intended use and restrictions.
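One way to picture how the eight standards above could attach to a dataset is as a metadata record checked before the data enters an AI pipeline. The field names and validation below are this article's own illustration, not the Alliance's official schema.

```python
# The eight provenance standards, as illustrative metadata keys.
REQUIRED_STANDARDS = {
    "lineage", "source", "legal_rights", "privacy_and_protection",
    "generation_date", "data_type", "generation_method",
    "intended_use_and_restrictions",
}

def validate_provenance(record: dict) -> list[str]:
    """Return the provenance standards missing from a dataset's metadata."""
    return sorted(REQUIRED_STANDARDS - record.keys())

# A hypothetical record a third-party data provider might supply.
record = {
    "lineage": "aggregated from upstream CRM exports",
    "source": "example-vendor-feed",  # hypothetical provider name
    "legal_rights": "licensed for internal model training",
    "privacy_and_protection": "PII removed prior to ingestion",
    "generation_date": "2024-03-01",
    "data_type": "tabular customer interactions",
    "generation_method": "system-of-record export",
    "intended_use_and_restrictions": "fine-tuning only; no resale",
}

missing = validate_provenance(record)
print("missing standards:", missing or "none")
```

A gate like this would let a data team reject third-party datasets that arrive without the metadata needed to trace their origins and permitted uses.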
The group’s mission encompasses marketing as well; it is not a purely technical effort. The group sees the low levels of trust in AI as a red flag, and it hopes the emphasis on data provenance will translate into increased customer trust in AI-driven products and services. “These standards will help ensure the data driving campaigns and content is accurate, reliable and legally sourced,” stated Kristina Podnar, senior policy director for the Data & Trust Alliance. She sees an emphasis on data provenance as leading to “greater trust with audiences.”
Third-party data providers need to produce metadata conforming to the suggested standards, both to provide a view into data provenance and to become aware themselves of the origins and implications of the data they use, Podnar recommends.
"Doing so will help safeguard the integrity of their campaigns, protect the brand, and address the growing consumer demand for transparency and ethical data practices," she stated.
The Data & Trust Alliance issued a policy roadmap for AI governance on June 13, urging AI developers, those who deploy AI, governments and civil society to play active roles in ensuring trustworthiness of AI systems.
“Everyone has a role to ensure AI is used responsibly—companies, governments and developers,” stated Joann Stonier, a Mastercard fellow and cochair of the alliance’s policy committee. “At the heart of these recommendations is the commitment to manage risks—not restrict use—and spark continued investment and innovation.”
Read the source articles and information in the 2024 State of Reliable AI Survey from Monte Carlo, an account of a survey on AI trustworthiness published by tech.co, from the World Economic Forum, in CMSWire and a policy roadmap from the Data & Trust Alliance.