Kissinger Warns of Danger of Using AI in Warfare
Efforts to clean up space junk are increasing the risk of confrontation; military AI startups are hot; and at a recent Washington, DC forum, Henry Kissinger said the superpowers need to talk
By John P. Desmond, Editor, AI in Business

On the occasion of his 99th birthday, diplomat Henry Kissinger, who was at the table during the negotiation of international nuclear weapons treaties still in place today, is concerned about the destructive potential of AI in warfare.
Kissinger believes the US and China should be holding high-level talks about limits on AI in warfare, according to a recent column in The Washington Post.
The war in Ukraine has increased the urgency for many nations to push more AI onto the battlefield.
Kissinger spoke by video connection during a recent forum on arms control at the Washington National Cathedral. Without ways to limit AI in warfare, “it is simply a mad race for some catastrophe,” he is quoted as saying. The session, entitled “Man, Machine and God,” was part of the annual Nancy and Paul Ignatius Program, named in honor of the parents of David Ignatius, the author of the Washington Post column.
Elaborating on his concern, Kissinger stated, “We are surrounded by many machines whose real thinking we may not know … How do you build restraints into machines? Even today, we have fighter planes that can fight … air battles without any human intervention. But these are just the beginnings of this process. It is the elaboration 50 years down the road that will be mind-boggling.”
Kissinger called on the leaders of the United States and China to begin a dialogue about how to apply ethical limits and standards to AI, and he emphasized the urgency of doing so.
Kissinger suggested that President Joe Biden state to Chinese President Xi Jinping that “you and I uniquely in history can destroy the world by our decisions on this [AI-driven warfare], and it is impossible to achieve a unilateral advantage in this. So, we therefore should start with principle number one that we will not fight a high-tech war against each other.”
China has resisted participating in nuclear arms control talks similar to those Kissinger conducted with the Soviet Union when he was national security adviser and Secretary of State. Asked if he was optimistic that the superpowers would be able to limit the destructive potential of AI in warfare, Kissinger stated: “I retain my optimism in the sense that if we don’t solve it, it’ll literally destroy us. … We have no choice.”
Efforts to Clean Space Junk Seen Increasing Risk of Confrontation
The concern about the use of AI in warfare extends to the use of automation to service the growing number of satellites in orbit and to help clean up space junk.
The number of working satellites in Earth orbit has doubled to some 6,000 in the last two years, and those share space with 26,000 pieces of space junk, including 3,000 dead satellites, according to a recent account in Bloomberg. Collisions have already happened, and the potential for more is high.
“Satellite repair programs have accelerated as the dangers and opportunities mount,” stated Adam Minter, author of the Bloomberg account and of the book “Secondhand: Travels in the New Global Garage Sale.” For example, Northrop Grumman Corp. in 2020 sent its small MEV-1 spacecraft, carrying 15 years of fuel, to dock with an older communications satellite and provide the auxiliary power needed to guide the aging unit into an orbit where it can do no harm. As of last spring, some 30 companies were engaged in satellite servicing and life-extension efforts.
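For a sense of scale, the disposal maneuver at the end of such a mission is modest. Below is a back-of-the-envelope sketch, not drawn from any of the programs cited here, estimating the delta-v needed to raise a dead geostationary satellite into a “graveyard” orbit roughly 300 km higher, the ballpark of the standard international disposal guideline. The orbit radii and transfer formula are textbook values; the figure of 300 km is an assumed round number.

```python
import math

MU = 398600.4418  # Earth's gravitational parameter (km^3/s^2)

def hohmann_delta_v(r1_km: float, r2_km: float) -> float:
    """Total delta-v (km/s) for a two-burn Hohmann transfer
    between coplanar circular orbits of radius r1 and r2."""
    a = (r1_km + r2_km) / 2.0                        # transfer-ellipse semi-major axis
    v1 = math.sqrt(MU / r1_km)                       # circular speed at departure
    v2 = math.sqrt(MU / r2_km)                       # circular speed at arrival
    v_dep = math.sqrt(MU * (2.0 / r1_km - 1.0 / a))  # transfer speed at first burn
    v_arr = math.sqrt(MU * (2.0 / r2_km - 1.0 / a))  # transfer speed at second burn
    return abs(v_dep - v1) + abs(v2 - v_arr)

R_GEO = 42_164.0             # geostationary orbit radius (km)
R_GRAVEYARD = R_GEO + 300.0  # assumed disposal orbit ~300 km above GEO

dv = hohmann_delta_v(R_GEO, R_GRAVEYARD)
print(f"Re-orbit delta-v: {dv * 1000:.1f} m/s")  # roughly 11 m/s
```

A burn on the order of 11 m/s is tiny next to the roughly 50 m/s per year a GEO satellite spends on stationkeeping, which is why a servicing craft carrying 15 years of fuel can shepherd a client into a disposal orbit with little trouble.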
DARPA, the Defense Department’s research agency, has a longstanding research program to produce a robotic repair arm for use in space. Sometime after 2025, NASA plans to launch a repair robot that can cut into a satellite and insert a refueling line. The European Space Agency, Russia, and the Chinese space program are each working on their own technologies.
“As important and exciting as these advances are, they’re also potentially dangerous,” Minter stated. “Technologies designed to extend the life of a satellite can also be used to destroy one; if the intent behind a mission isn’t clearly spelled out, suspicions will rise.”
That is why talks about limits on the use of AI in warfare would be a good idea.
Satellites operated by SpaceX, Elon Musk’s company, have played an important role in providing communications infrastructure to the Ukraine military in its war with Russia. A Russian official recently warned the UN General Assembly that this “quasi-civilian infrastructure” could become a target for retaliation. Minter advocated for “the slow and patient development of global norms for operating satellite-servicing missions.”
AI for the Military is a Hot Market
Meanwhile, in the commercial sector, two weeks after Russia launched its invasion of Ukraine in February, Palantir CEO Alexander Karp pitched European leaders in an open letter on using his company’s powerful data analysis software to support their militaries, according to an account in MIT Technology Review. Palantir’s predictive modeling software can be used, for example, to forecast required equipment maintenance and assess force readiness.
Karp stated that European countries needed to “remain strong enough to defeat the threat of foreign occupation,” such as by using his software.
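To make the maintenance-forecasting idea concrete, here is a minimal, self-contained sketch of the general technique: a Weibull wear-out model that ranks a fleet by predicted failure risk. Every name, parameter, and number below (the vehicle IDs, the shape and scale values, the risk threshold) is invented for illustration; none of it reflects Palantir’s actual software or methods.

```python
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    vehicle_id: str
    hours_since_overhaul: float  # operating hours since last maintenance

# Hypothetical Weibull wear-out parameters for one equipment class:
# shape k > 1 means failure risk rises as hours accumulate.
WEIBULL_SHAPE = 2.0    # k
WEIBULL_SCALE = 800.0  # lambda, in operating hours
RISK_THRESHOLD = 0.10  # flag units above 10% mission failure risk

def mission_failure_risk(v: Vehicle, mission_hours: float) -> float:
    """P(failure during the next mission), conditioned on the unit
    having survived `hours_since_overhaul` hours so far."""
    t, m = v.hours_since_overhaul, mission_hours
    # Conditional Weibull survival: S(t+m)/S(t) = exp((t/lam)^k - ((t+m)/lam)^k)
    survive = math.exp((t / WEIBULL_SCALE) ** WEIBULL_SHAPE
                       - ((t + m) / WEIBULL_SCALE) ** WEIBULL_SHAPE)
    return 1.0 - survive

fleet = [Vehicle("APC-01", 120.0), Vehicle("APC-02", 640.0),
         Vehicle("APC-03", 410.0)]

# Rank the fleet by predicted risk over a 72-hour mission and flag
# the units that should be serviced first -- a crude readiness report.
for v in sorted(fleet, key=lambda x: -mission_failure_risk(x, 72.0)):
    risk = mission_failure_risk(v, 72.0)
    status = "SERVICE FIRST" if risk > RISK_THRESHOLD else "ready"
    print(f"{v.vehicle_id}: {risk:.1%} failure risk -> {status}")
```

Real force-readiness systems layer richer data (sensor telemetry, parts supply, crew availability) onto this kind of survival model, but the core output is the same: a ranked list of what is likely to break before it does.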
NATO is encouraging its members to invest in early-stage startups developing AI, big-data processing, and other automation technology, with help from a $1 billion innovation fund. Since the war in Ukraine started, the UK has launched an effort to pursue AI specifically for defense, and Germany has earmarked about half a billion dollars for AI research as part of a $100 billion injection of cash into its military, according to the MIT Tech Review account.
“War is a catalyst for change,” stated Kenneth Payne, a professor of strategy focusing on defense studies at King’s College London and author of the book “I, Warbot: The Dawn of Artificially Intelligent Conflict.”
Ethical concerns about the use of AI in warfare are long-standing. Google in 2018, following employee protests, declined to renew its work on the Pentagon’s Project Maven, which was focused on image recognition to improve drone strikes. Some AI scientists have pledged not to work on lethal AI, including Yoshua Bengio, a Canadian computer scientist known for his work on neural networks and deep learning. He is a winner of the Turing Award, seen by some as the “Nobel Prize of Computing.”
Nevertheless, the Pentagon is looking to AI startups to help its cause, as evidenced by the hiring in April of Craig Martell as its first Chief Digital and AI Officer, a new post charged with coordinating the use of AI across the US military. Martell is the former head of machine learning at Lyft, the ride-hailing company.
It’s working for some companies. Anduril, a five-year-old startup, in February won a $1 billion defense contract to help develop autonomous defense systems, such as sophisticated underwater drones. In January, Scale AI, a startup providing data-labeling services for AI, won a $250 million contract with the US Department of Defense, according to Tech Review.
Read the source articles and information in The Washington Post, in Bloomberg and in MIT Technology Review.