AI Warning Labels, Bias Bounties Aim at Identifying and Managing Unfair AI
Efforts to examine the impact of potentially discriminatory AI outcomes are moving to a more socio-technical context, bringing in a wider range of stakeholders
By John P. Desmond, Editor, AI in Business

Cautionary tales of AI use have emerged on two fronts recently: whether AI systems should come with warning labels, like cigarettes, and whether a bounty could be offered for detection of bias in AI algorithms – a bias bounty in effect.
Both are testaments to the power of AI to change things, for better or for worse.
The question of warning labels arises from the extensive use of machine learning and deep learning techniques, which are fundamentally based on pattern matching. If the humans whose decisions found their way into the training data were exhibiting bias, chances are good that the data will reflect it.
“Furthermore, the AI developers might not realize what is going on,” stated Lance Eliot, AI expert and prolific author, writing recently in Forbes. “The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases,” he stated. It could be a case of the age-old computer geek adage “garbage in, garbage out,” meaning flawed input yields flawed output.
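To make the “garbage in, garbage out” point concrete, consider a minimal, hypothetical sketch in Python. The hiring scenario, the variable names, and the use of scikit-learn are assumptions for illustration only; all data is synthetic. The point is that a model trained on historically biased labels reproduces roughly the same disparity in its own predictions, even when it never sees the protected attribute directly.

# A minimal, hypothetical sketch of "garbage in, garbage out": a model trained
# on historically biased hiring labels reproduces the bias in its predictions,
# even though the protected attribute is never given to it directly.
# All data is synthetic and the scenario is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)                      # applicant skill score
group = rng.integers(0, 2, size=n)              # protected attribute (0 or 1)
proxy = group + rng.normal(scale=0.3, size=n)   # correlated feature, e.g. neighborhood

# Historical labels: past decision-makers favored group 0, so equally
# skilled members of group 1 were hired less often.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0.0

# Train only on skill and the proxy; the protected attribute is excluded.
features = np.column_stack([skill, proxy])
model = LogisticRegression().fit(features, hired)
pred = model.predict(features)

for g in (0, 1):
    print(f"predicted hire rate, group {g}: {pred[group == g].mean():.2f}")
# The gap between the two rates mirrors the bias baked into the labels,
# which is exactly the kind of hidden pattern Eliot describes.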
He suggested a helpful hierarchy to describe ranges of severity:
AI Danger sign: AI of the highest concern and indicative of an immediate hazard with likely severe consequences
AI Warning sign: AI indicative of a hazard that is less than “danger” but still quite serious
AI Cautionary sign: AI indicative of a potential hazard that is less than a “warning” but still indeed earnest
AI Notification sign: AI indicative of an informational heads-up that is less than “caution” but still entails worrisome aspects
He also suggested that we would need warning signs for product-specific AI and for service-oriented AI. At the same time, he acknowledged it is unlikely these warning signs would come about from altruistic motives.
“The chances are that until we have a somewhat widespread realization of AI doing bad deeds, there will not be much of a clamor for AI-related warning messages,” he stated.
AI Audits on the Upswing
Spurring the idea that AI could come with warning signs is the increased use of AI audits. It was a 2018 audit of a commercial facial recognition system by AI researchers Joy Buolamwini and Timnit Gebru that found bias in the system's ability to recognize dark-skinned people compared to white people, as a recent account in MIT Technology Review recalls. The error rate was as high as 34 percent.
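As an illustration of what such an audit measures, here is a minimal, hypothetical sketch; the function name, group labels, and toy records are assumptions, not the methodology of the actual study. It computes a system's error rate separately for each demographic group, the kind of disaggregated evaluation that surfaces disparities a single aggregate accuracy figure would hide.

# Minimal sketch of the disaggregated evaluation an AI audit performs:
# compute a model's error rate separately for each demographic group rather
# than reporting one overall number. The data and labels are hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy evaluation set: overall accuracy looks decent, but nearly all of the
# errors fall on one group.
records = [
    ("lighter-skinned", 1, 1), ("lighter-skinned", 0, 0),
    ("lighter-skinned", 1, 1), ("lighter-skinned", 0, 0),
    ("darker-skinned", 1, 0), ("darker-skinned", 0, 0),
    ("darker-skinned", 1, 0), ("darker-skinned", 1, 1),
]
print(error_rates_by_group(records))
# {'lighter-skinned': 0.0, 'darker-skinned': 0.5}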
This led to a body of critical work and actions that are having an impact today. In New York City, a new law taking effect in January 2024 will require all AI-powered hiring tools to be audited for bias. And the European Union in 2024 will require big tech companies to conduct annual audits of their AI systems; its upcoming AI Act will require audits of “high-risk” AI systems.
No common understanding exists of what an AI audit should contain; the few audits that happen today are ad hoc and vary in quality, according to Alex Engler, who studies AI governance for Brookings. Furthermore, few people are qualified to conduct an AI audit. To address this, the AI community has begun to put bias bounty competitions in place, similar to cybersecurity bug bounties, in which developers are invited to create tools to identify and address algorithmic biases in AI models.
Bias Bounty Competitions Help to Train AI Auditors
One such competition was recently launched by Rumman Chowdhury, former ethical AI lead at Twitter, who was among those let go by the company in recent weeks.
One idea of the bounty competitions is to help develop a new group of experts who are skilled in auditing AI, since much of the limited scrutiny so far comes from academics or tech companies themselves.
“We are trying to create a third space for people who are interested in this kind of work, who want to get started or who are experts who don’t work at tech companies,” stated Rumman Chowdhury, who was then director of Twitter’s team on ethics, transparency, and accountability in machine learning, and a co-founder of the “Bias Buccaneers,” a group offering bias bounties. The newly developing segment could include hackers and data scientists who want to learn a new skill, she stated.
The effort is “fantastic and absolutely much needed,” stated Abhishek Gupta, the founder of the Montreal AI Ethics Institute, who was a judge in Stanford’s AI audit challenge. “The more eyes that you have on a system, the more likely it is that we find places where there are flaws,” Gupta stated.
Meanwhile, Buolamwini and other AI ethicists from the Algorithmic Justice League recently released a paper entitled, “Who Audits the Auditors,” outlining recommendations for better defining what goes into an AI systems audit. The authors describe the paper as providing “the first AI audit field scan of the AI audit ecosystem.”
Tracking when AI does harm is another approach. The AI Vulnerability Database and the AI Incidents Database have been built by volunteer AI researchers and entrepreneurs. Tracking failures can help developers gain a better understanding of why unintentional failure happens in AI models, stated Subho Majumdar, formerly of Splunk, now a machine learning scientist at Twitch. He is the founder of the AI Vulnerability Database and one of the organizers of the bias bounty competition, according to another account in MIT Technology Review.
Recently, the competition announced a winner. Bogdan Kulynych, a graduate student at École Polytechnique Fédérale de Lausanne, a public research university in Lausanne, Switzerland, was named a winner of Twitter's algorithmic bias bounty competition at the DEF CON security conference. He received $3,500 for his work examining the social network's image-cropping algorithm.
NIST Paper Meant to Spur AI Bias Management Efforts
The National Institute of Standards and Technology (NIST) recently published a paper (Special Publication 1270) entitled “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” coauthored by Reva Schwartz, research scientist with NIST's Information Technology Laboratory.
In its conclusion, the paper stated, “It is clear that developing detailed technical guidance to address this challenging area will take time and input from diverse stakeholders, within and beyond those groups who design, develop, and deploy AI applications, and including members of communities that may be impacted by the deployment of AI systems.”
And, “Since AI is neither built nor deployed in a vacuum, we approach AI as a socio-technical system, acknowledging that AI systems and associated bias extend beyond the computational level. Bias can be introduced purposefully or inadvertently, or it can emerge as the AI system is used, impacting society at large through perpetuating and amplifying biased and discriminatory outcomes.”
Read the source articles and information in Forbes, MIT Technology Review, a paper from the Algorithmic Justice League, another account in MIT Technology Review, and the NIST paper “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”
(Write to the editor here.)