AI Bill of Rights Announced; AI Regulation Taking Shape
The White House announces an AI Bill of Rights as AI regulation takes shape on the horizon and the federal government increases AI R&D contract activity
By John P. Desmond, Editor, AI in Business

The AI Bill of Rights announced by the White House on October 4 is setting up markers for how Washington is likely to regulate AI, if and when such federal regulation comes about, which seems increasingly likely.
The AI Bill of Rights is voluntary and carries no penalties for non-compliance. It outlines five principles, according to an account in The Washington Post:
Users should be “protected from unsafe or ineffective” automated systems, and tools should be expressly “designed to proactively protect you from harms;”
Discriminatory uses of algorithms and other AI should be prohibited, and tools should be developed with an emphasis on equity;
Companies should build privacy protections into products to prevent “abusive data practices” and users should have “agency” over how their data is used;
Systems should be transparent so that users “know that an automated system is being used” and understand how it’s affecting them;
Users should be able to “opt out of automated systems in favor of a human alternative, where appropriate.”
“This document is laying down a marker for the protections that everyone in America should be entitled to,” stated Alondra Nelson, deputy director for science and society of the White House’s Office of Science and Technology Policy (OSTP), to the Post.

Various federal agencies will take steps to put the principles into practice. Among the efforts: the Department of Education plans to issue recommendations on AI use in teaching and learning by early 2023; the Department of Health and Human Services will release “a vision for advancing Health Equity by Design” by the end of 2022; and the Department of Housing and Urban Development plans to issue guidance on how tenant screening algorithms can violate federal housing laws, according to the White House.
In addition, the Department of Labor is considering requiring employers to report on how they are using workplace surveillance systems that incorporate AI, Nelson stated.
To make systems equitable, the White House is encouraging companies to use “representative” datasets so that the AI system is fair across users, and to conduct “ongoing disparity testing and mitigation” before and after tools are rolled out.
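The White House document does not prescribe how that disparity testing should be carried out. A minimal sketch of one common approach, comparing an automated system’s selection rates across demographic groups and flagging any group that falls well below the highest-rate group under the widely used four-fifths-rule heuristic, might look like the following; the field names, sample data and 0.8 threshold are illustrative assumptions, not part of the AI Bill of Rights.

```python
# Hedged sketch of disparity testing: compare positive-decision rates
# across demographic groups and flag groups falling well below the
# highest-rate group. Field names ("group", "selected") and the 0.8
# threshold (the common "four-fifths rule") are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of positive outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        if d["selected"]:
            positives[d["group"]] += 1
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparities(decisions, threshold=0.8):
    """Return groups whose rate is below `threshold` times the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

if __name__ == "__main__":
    log = [
        {"group": "A", "selected": True},
        {"group": "A", "selected": True},
        {"group": "A", "selected": False},
        {"group": "B", "selected": True},
        {"group": "B", "selected": False},
        {"group": "B", "selected": False},
    ]
    print(selection_rates(log))   # {'A': 0.67, 'B': 0.33}
    print(flag_disparities(log))  # {'B': 0.33} -> flagged for review
```

In practice, such checks would run on real decision logs before a tool is rolled out and on an ongoing basis afterward, with mitigation steps whenever a disparity is flagged.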
“This effort is really the White House saying that civil rights need to be at the center of how we make and use and govern technologies,” stated Nelson.
Many States Are Considering or Have Passed AI Regulation Bills
The US Congress and state legislators have been exploring ways to regulate AI, out of concern about potential misuse or unintended consequences.
In 2022, general AI bills or resolutions were introduced in 17 states and enacted in four states, with three states creating task forces to study AI, according to the National Conference of State Legislatures.
It’s only a matter of time before one or more of these measures gains traction. California, for example, has proposed legislation to make it a crime for a social media company to knowingly cause a child user to become addicted to the platform (CA A.B. 2408). Illinois has enacted the AI Video Interview Act (IL H.B. 53), which provides that employers using AI to determine whether applicants qualify must report certain demographic information to the state to guard against racial bias.
Massachusetts has a bill pending (MA H.B. 4152) that would require data controllers to disclose automated decision-making systems, including profiling, and to provide meaningful information about the logic involved and the predicted consequences.
A number of proposals have failed to progress. In Maryland, an effort to establish the Technology and Science Advisory Committee (MD H.B. 1359) to study algorithmic decision system policies and practices was voted down. The bill would have established work groups to study and make recommendations on responsible AI, and it would have defined a code of ethics for AI use.
In New York, legislation (NY S.B. 6701) is pending to require that New York consumers be provided clear notice of how their data is being used, processed and shared. Consumers would also be entitled to obtain a copy of their data in a commonly used electronic format, to delete their data, and to challenge certain automated decisions.
AI Is Spreading in the Federal Government
Meanwhile, the federal government is expanding its own use of AI systems. The White House’s AI Bill of Rights may be aimed as much at the government’s internal use of AI as at the private sector.
A recent analysis of federal contracts for AI systems from Brookings researchers shows high activity. The researchers compiled a database of all federal contracts issued in the past five years with the term AI in the contract description. The dataset included 663 contracts, of which 652 identified a funding agency, and 556 identified an NAICS (North American Industry Classification System) code. The contract values ranged up to $192 million.
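The roll-up the Brookings researchers describe amounts to a keyword filter over contract descriptions followed by totals per funding agency and per NAICS code. A rough sketch of that kind of aggregation, using hypothetical record fields and sample values rather than the actual Brookings dataset, might look like this:

```python
# Illustrative sketch of the kind of aggregation the Brookings analysis
# describes: keep contract records whose description mentions AI as a
# standalone term, then total counts and dollar values by funding agency
# and by NAICS code. Record layout and sample data are hypothetical.
import re
from collections import defaultdict

AI_TERM = re.compile(r"\b(AI|artificial intelligence)\b", re.IGNORECASE)

def summarize_ai_contracts(contracts):
    by_agency = defaultdict(lambda: {"count": 0, "value": 0.0})
    by_naics = defaultdict(lambda: {"count": 0, "value": 0.0})
    for c in contracts:
        if not AI_TERM.search(c.get("description", "")):
            continue  # skip contracts that never mention AI
        for field, bucket in (("agency", by_agency), ("naics", by_naics)):
            label = c.get(field) or "unknown"  # group missing fields together
            bucket[label]["count"] += 1
            bucket[label]["value"] += c.get("value", 0.0)
    return by_agency, by_naics

if __name__ == "__main__":
    sample = [
        {"description": "AI-enabled logistics analysis", "agency": "DoD",
         "naics": "541715", "value": 2_500_000.0},
        {"description": "Routine facility maintenance", "agency": "DoD",
         "naics": "561210", "value": 400_000.0},
        {"description": "Artificial intelligence for health records",
         "agency": "HHS", "naics": "541715", "value": 1_200_000.0},
    ]
    by_agency, by_naics = summarize_ai_contracts(sample)
    print(dict(by_agency))  # AI-tagged counts and totals per agency
    print(dict(by_naics))   # spend concentrated by NAICS code
```

A full analysis would draw the records from a federal procurement data source and account for contracts with missing agency or NAICS fields, which the Brookings dataset also contained.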
The findings showed that most of the contracts were for professional, scientific and technical services, with over $1 billion spent across 474 awards over the last five years. “This is not surprising,” stated the report authors, led by Gregory Dawson, an associate professor of business at Arizona State University.
The second most frequently used code was for manufacturing, with 16 unique awards valued under $7 million. Overall spending for AI R&D has been growing at a double-digit rate for several years. “This finding suggests an immature market which is focused on the development of research rather than a more mature market which would be focused more on hardware and software devices,” the authors stated.
More than 50 percent of the federal contracts and 87 percent of the contract value came from the Department of Defense. Health and Human Services and NASA are a distant second, with 17 percent each. Commerce and the Veterans Administration are the next largest by contract spend, at some four percent each.
“Taken together, we see a highly-fragmented market which is dominated by smaller vendors who generally have a single contract for AI-related services,” the authors stated. Many of the suppliers were located in close proximity to the agencies they served, suggesting prior relationships.
Larger firms including RAND, Northrop Grumman, Accenture, IBM, Booz Allen Hamilton and Raytheon were absent from these contracts, but the analysts expect that to change. “The larger firms are still establishing beachheads where AI could possibly fit,” they stated.
The opportunity for smaller firms is big right now. “The federal government is still very much in an experimental phase of purchasing AI and is likely looking for specific use cases where AI is appropriate,” the authors stated.
Read the source articles and information in The Washington Post, from the National Conference of State Legislatures and from Brookings.