Moving Away From “Don’t Be Evil”
Big tech is being drawn into the use of AI in warfare, especially in recent conflicts in the Middle East, with Microsoft and OpenAI assisting the Israeli military
By John P. Desmond, Editor, AI in Business

Moving steadily away from the “don’t be evil” slogan embraced by founders Sergey Brin and Larry Page, Google, now Alphabet, recently modified its guidelines on how it will use AI, dropping a previous section that ruled out applications “likely to cause harm.”
Groups including Human Rights Watch criticized the decision. “For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever,” stated Anna Bacciarelli, senior AI researcher at the organization, in a recent account from the BBC.
The episode is an example of “why voluntary principles are not an adequate substitute for regulation and binding law,” she stated.
In a blog post responding to the pushback, Google stated that businesses and democratic governments needed to work together on AI that “supports national security.”
The post, written by senior vice president James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, said the company’s original AI principles, published in 2018, needed to be updated as the technology had evolved.
When Google restructured under the name Alphabet in 2015, the parent company switched the motto to “Do the right thing.”
Pivot Away From Defense Work After Project Maven Protests
Supporting national security opens the door to defense contracts, territory Google has entered before with Project Maven, a Department of Defense project launched in 2017. At that time, Google staffers pushed back against the company’s acceptance of military defense contracts. In 2018, following resignations and a petition signed by thousands of employees, Google declined to renew its contract for AI work with the Pentagon. Project Maven was a pilot project incorporating AI and vision systems for targeting. Since Google walked away, firms contributing to the project have included Palantir Technologies, AWS, Microsoft and others.
More recently, conflicts in the Middle East and Ukraine have demonstrated an increasing role for AI in warfare. After a deadly surprise attack by Hamas militants on Oct. 7, 2023, Israel’s use of Microsoft and OpenAI technology “skyrocketed,” according to a recent report from AP based on internal documents, data and exclusive interviews with current and former Israeli officials and company employees.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” stated Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
In March 2024, the Israeli military’s use of Microsoft and OpenAI services spiked to nearly 200 times the level of the week leading up to the Oct. 7 attack, the AP reported. Between then and July 2024, the amount of related data stored on Microsoft servers doubled to more than 13.6 petabytes. In the first two months of the war, the military’s use of Microsoft servers rose by nearly two-thirds.
Israeli military sources have called AI a “game changer” for its ability to define targets more quickly. The sources said several layers of human review are in the loop in an attempt to minimize civilian casualties, though the system is not foolproof.
Neither the Israeli military nor Microsoft responded to AP’s request for more detailed information about their use of commercial AI products from American tech companies in warfare.
The advanced AI models are provided by OpenAI through Microsoft’s Azure cloud platform, where the Israeli military purchases them, AP reported. OpenAI states that it does not have a partnership with Israel’s military, and its usage policies say customers should not use its products to develop weapons, destroy property or harm people. In early 2024, however, OpenAI changed its terms of use, which had barred military use, to allow “national security use cases that align with our mission.”
Many US Tech Companies Engaged With Military
Many US-based tech companies now have military contracts. Google and Amazon provide cloud computing and AI services to the Israeli military under “Project Nimbus,” a $1.2 billion contract signed in 2021, AP reported. The military has used Cisco and Dell server farms or data centers. Palantir Technologies, a Microsoft partner in US defense contracts, has a “strategic partnership” to provide AI systems to help Israel’s war efforts, AP reported.
Palantir, Cisco, Oracle and Amazon declined to comment to AP for the story.
An Israeli intelligence officer described how the military is using Azure to quickly search for terms and patterns in massive troves of text, which are cross-referenced with Israel’s own AI systems to help pinpoint target locations. OpenAI’s transcription and translation model Whisper, although not specifically identified as being used in this conflict, can transcribe speech in many languages, including Arabic, and translate it. But it can also make up text that no one said, the AP reported.
“Should we be basing these decisions on things that the model could be making up?” stated Joshua Kroll, an assistant professor of computer science at the Naval Postgraduate School in Monterey, California. He spoke to the AP in his personal capacity, not as a spokesman for the US government.
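For context on what such a model exposes, below is a minimal sketch using the public API of the open-source whisper Python package; the audio file name is hypothetical, and nothing in the AP reporting describes how, or whether, this interface figures in any deployed military system.

```python
# Minimal sketch of the open-source "whisper" package's public API.
# The audio file name is hypothetical; this illustrates the model's
# transcribe/translate interface only, not any deployed system.
import whisper

# Load a published checkpoint (tiny, base, small, medium, large).
model = whisper.load_model("small")

# task="transcribe" keeps the spoken language (e.g. Arabic);
# task="translate" produces an English translation instead.
result = model.transcribe("sample_audio.wav", task="translate")

print(result["language"])  # detected source language
print(result["text"])      # best-guess text; may include made-up words

# Segment-level output carries timestamps and a confidence signal
# (avg_logprob) that a human reviewer could use to flag weak spans.
for seg in result["segments"]:
    print(f'{seg["start"]:7.1f}s  {seg["avg_logprob"]:.2f}  {seg["text"]}')
```

The per-segment confidence check speaks to Kroll’s concern: the model returns fluent text whether or not it is sure, so any human review has to look past the transcript itself.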

Tal Mimran served 10 years as a reserve legal officer for the Israeli military and has served on three NATO working groups examining the use of new technologies, including AI, in warfare. Previously, he said, it took a team of up to 20 people a day or more to review and approve a single airstrike. Now, with AI systems, the military is approving hundreds of strikes a week.
He sees risk in an over-reliance on AI for targeting. “Confirmation bias can prevent people from investigating on their own,” stated Mimran. “Some people might be lazy, but others might be afraid to go against the machine and be wrong and make a mistake.”
This week, five Microsoft employees disrupted a company meeting chaired by CEO Satya Nadella, according to an account in Data Centre Dynamics. Standing together on a balcony overlooking Nadella’s podium, they revealed T-shirts reading “Does our code kill kids?”, a reference to the AP’s report of an Israeli airstrike that struck a family vehicle, killing three children and their grandmother.
Presidents Biden and Xi Had Agreed on AI Limits
Diplomatic discussions on the use of AI in warfare may be unlikely given the new administration in Washington, but in his final meeting with President Xi of China, former President Joe Biden discussed the topic, according to a report from Politico.
The two leaders agreed to avoid giving artificial intelligence control of nuclear weapons systems, the report stated.
Biden was looking for ways to “emergency-proof” the US-China relationship before Trump took over the White House. The agreement on AI was described as a “surprise,” since US efforts over the past four years on nuclear safety and nonproliferation had been rebuffed by China, which canceled a working group meeting on nuclear arms control in July to protest a US weapons sale to Taiwan.
The agreement on AI committed both countries to ensure “there should be human control over the decision to use nuclear weapons,” stated national security adviser Jake Sullivan to reporters in a post-meeting press briefing.
The Chinese delegation emphasized their desire for “more dialogue and cooperation” with the US and the need to avoid a “new Cold War,” according to the report.
Read the source articles and information from the BBC, the blog post from Google, the AP, Data Centre Dynamics and Politico.