The World’s First Ethical AI Company Has Arrived
Karya pays its workers in India a $5/hour minimum, well beyond the prevailing wage for similar work in developing countries; content moderators in Africa vote to unionize
By John P. Desmond, Editor, AI in Business

Claiming the title of the “world’s first ethical AI company” takes some brass, and in the case of Karya, of Kalbadevi, Mumbai, India, the claim might be difficult to dispute.
Founder Manu Chopra is testing a new business model for the company he founded as a nonprofit in 2021. He is winning contracts from big tech companies to train large language models in Kannada, a language spoken by some 60 million people in central and southern India. It is better for large language models to be versed in local languages, and Kannada is not well represented on the internet. It is one of India’s 22 official languages; some 780 more indigenous languages are also spoken across the nation of 1.4 billion people.
According to a recent account in Time magazine, large “arbitrage” companies pay wages close to the legal minimum and then sell the data to foreign clients at a hefty markup. Little of the billions of dollars projected to be spent in the global AI data sector is flowing back to workers in poor countries.
Chopra’s idea was to sell data to big tech companies and other clients at the market rate, cover his costs, then distribute the rest to the rural poor in India. Karya partners with local nongovernmental organizations to identify individual workers from historically marginalized communities. The workers are paid a $5 hourly minimum, a game-changer for them.
“The wages that exist right now are a failure of the market,” stated Chopra, the 27-year-old CEO of Karya. “We decided to be a nonprofit because fundamentally, you can’t solve a market failure in the market.”
The work is supplementary for the workers; once they earn the equivalent of $1,500, the average annual income in India, they move aside for someone else. The company reports it has paid nearly $800,000 to 30,000 rural Indians, and by 2030, Chopra hopes to reach 100 million people.
“I genuinely feel this is the quickest way to move millions of people out of poverty if done right,” stated Chopra. He was born into poverty and won a scholarship to Stanford University that set his direction. “This is absolutely a social project. Wealth is power. And we want to redistribute wealth to the communities who have been left behind,” he stated.
While other companies are positioning themselves to help the poorest people in the world, results vary. (Read In the Trenches of Reinforcement Learning with Human Feedback, August 18, 2023, AI in Business)
Karya’s clients include Microsoft, MIT and Stanford, and the firm began a new project in February for the Bill and Melinda Gates Foundation to build voice datasets in five languages spoken by some one billion Indians. The goal is to build a chatbot that can answer the questions of rural Indians in their native language.
Research by Chopra and his CTO, Vivek Seshadri, confirmed that the language dataset work could be done from a smartphone with high accuracy and no training, and without workers needing to speak English. With a grant from Microsoft Research, the two joined with a third cofounder, Safiya Husain, to start the company.
The founders bring something to the table that might be unobtainable to the founders of big tech companies: an approach described by a Hindi word derived from Indian classical music, “thairaav,” translated as a mix of “pause” and “thoughtful impact.” It allows Chopra to pause and think about big questions: “Am I doing the right thing? Is this right for the community I am trying to serve?”
This approach “is just missing from a lot of entrepreneurial ‘move fast and break things’ behavior,” he stated to Time. He will not accept contracts for content moderation, for example, that would require workers to view traumatizing material.
He is looking for more work. “What we need is large-scale awareness that most data companies are unethical,” Chopra stated, “and that there is an ethical way.” He hopes to win business from more tech companies, governments and academic institutions.
Content Moderators in Africa Vote to Unionize
On Labor Day this year, about 150 content moderators for Facebook, TikTok and ChatGPT sat in a hotel in Nairobi and voted to unionize. “We want to be united because if we’re united, we become strong,” stated Mophat Okinyi, in a recent account in The Christian Science Monitor. He worked to screen content for pedophilia and incest for ChatGPT for $1.50 an hour, work he said left him traumatized.
The content moderators are hoping to force tech corporations to provide adequate mental health care and fair pay. “We’re trying to make this job safe for those who will do it in the future and those who are doing it right now,” Okinyi stated.
Facebook parent company Meta has been facing lawsuits brought by content moderators in Kenya, accusing the company of union busting, wrongful termination, and insufficient mental health support. Meta in one case contended it was not the employer of the moderators; the court ruled Meta was the “true employer,” a decision Meta is appealing, the Monitor account reported.
But the decision attracted some attention. “That’s the most significant labor rights decision about content moderation I have seen from any court anywhere,” stated Cori Crider, co-founder and director of Foxglove, a London-based nonprofit offering legal advice to the moderators. “If Facebook is held the true employer of these workers, then the days of hiding behind outsourcing to avoid responsibility for your critical safety workers are over,” Crider stated.
Odanga Madung, a senior researcher at the Mozilla Foundation in Nairobi, has a similar opinion. He is focused on the impact of tech platforms in Africa. “Irresponsibility has always been good business in the capitalist contexts,” he stated. “Taking care of people is expensive, more so if you’re exposing them to graphic content on behalf of your users.”
Africa is not the only location for content moderator organizing efforts. In Germany, Berlin-based trade union Verdi has been helping content moderators for TikTok and Facebook to unionize.
Okinyi joined Sama in Nairobi in 2019, his first job after graduating from university. He had done data labeling, product classification and other tasks for foreign tech companies before the ChatGPT work started. He would read some 700 texts per day about child sexual abuse and flag them according to their severity. The work enables ChatGPT to better filter out harmful requests. Sama provided counselors, but the workers had no time to see them.
Sama stopped doing work for ChatGPT in March 2022, and the company stopped doing similar work for Facebook in March 2023.
Mercy Mutemi, a lawyer at Nzili and Sumbi Advocates, the law firm suing Meta in several cases in Africa, called the Labor Day vote of the content moderators to unionize a turning point. “Moderators have faced unbelievable intimidation in trying to exercise their basic right to associate,” she stated in a separate account in Time. “Today they have made a powerful statement: their work is to be celebrated. They will live in fear no longer. Moderators are proud of their work, and we stand ready to offer the necessary support as they register the trade union and bargain for fair conditions.”
Read the source articles and information in Time magazine, in The Christian Science Monitor and again in Time magazine.
(Write to the editor here; tell him what you want to read about in AI in Business.)