Doomsday: AI at War with Humanity
The natural language processing (NLP) that underpins ChatGPT was a breakthrough in AI, allowing commands to be given to a machine in ordinary language rather than code. Computer vision has also made significant progress in recent years, allowing AI to detect and identify objects much as humans do. With such assistance, cybersecurity, combat simulation, threat monitoring, and war in general take on an entirely different character, promising super-efficient machines of mass destruction.
AI on guard duty
Artificial intelligence is applicable across military affairs: from planning operations to transporting troops, from training personnel to providing them with medical care. Under these conditions, weapons, sensors, navigation, aviation support, and surveillance become far more effective with minimal human involvement in data processing, reducing the likelihood of the so-called human factor and the errors it produces.
One of the most promising military developments using AI is swarm intelligence for drones. When a UAV acquires important information, it transmits it to the other drones in the swarm, which makes a potential attack far more effective than a single sortie. The swarm stays focused on a common goal, while the drone that receives the information can act autonomously and, most importantly, creatively, making decisions based on the target's position. Like bees, AI copters can communicate all the data the swarm needs: the distance, direction, and height of the target, as well as any potential threat. This technology crosses a critical threshold in military affairs, because the target is engaged by a collective intelligence, significantly reducing the chances of a counterattack.
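As an illustration of the communication pattern described above, here is a minimal sketch of one swarm member broadcasting a target report to its peers. The TargetReport fields and Drone class are assumptions made for the example, not a description of any fielded system.

```python
# Illustrative sketch only: a shared observation propagated through a swarm.
# Field names and classes are assumptions, not any real UAV protocol.
from dataclasses import dataclass
from typing import List

@dataclass
class TargetReport:
    distance_m: float    # distance to the target
    bearing_deg: float   # direction to the target
    altitude_m: float    # height of the target
    threat_level: int    # 0 = none; higher = more dangerous

class Drone:
    def __init__(self, drone_id: str):
        self.drone_id = drone_id
        self.peers: List["Drone"] = []
        self.known_reports: List[TargetReport] = []

    def observe(self, report: TargetReport) -> None:
        """On acquiring new information, record it and broadcast to all peers."""
        self.known_reports.append(report)
        for peer in self.peers:
            peer.receive(report)

    def receive(self, report: TargetReport) -> None:
        # Each drone keeps its own copy of the shared picture and can then
        # decide its action independently, based on the target's position.
        self.known_reports.append(report)

# Wire up a three-drone swarm and share one observation.
swarm = [Drone(f"uav-{i}") for i in range(3)]
for d in swarm:
    d.peers = [p for p in swarm if p is not d]

swarm[0].observe(TargetReport(distance_m=850.0, bearing_deg=42.0,
                              altitude_m=120.0, threat_level=2))
assert all(len(d.known_reports) == 1 for d in swarm)
```

Once every member holds the same report, each can plan its approach independently, which is what makes a coordinated counterattack against the swarm so difficult.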
Processing large amounts of data takes time, but AI can quickly filter it and select the most valuable items. This helps military personnel identify patterns more effectively, draw more accurate conclusions, and build action plans based on the full picture. Finding valuable signals in masses of information that might escape human attention is precisely what generative AI is designed for. With their natural language processing capabilities, such models can convey information conversationally and engage in dialogue with people to explain it.
AI can also be used to filter large volumes of content from news feeds and social media to surface genuinely new information. This saves analysts time: the system can filter out repetitive and inaccurate items, which streamlines the research process and makes analysts' conclusions more accurate.
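As a sketch of the deduplication step, the example below drops near-duplicate items from a feed using TF-IDF cosine similarity, a simple stand-in for the embedding models a production system would more likely use. The filter_repetitive helper and the 0.8 threshold are assumptions made for the example.

```python
# Minimal near-duplicate filter for a news feed, using TF-IDF similarity
# as a stand-in for a real embedding model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_repetitive(items, threshold=0.8):
    """Keep only items that are not near-duplicates of an already-kept item."""
    vectors = TfidfVectorizer().fit_transform(items)
    kept = []
    for i in range(len(items)):
        # Compare the candidate only against items already kept.
        if all(cosine_similarity(vectors[i], vectors[j])[0, 0] < threshold
               for j in kept):
            kept.append(i)
    return [items[i] for i in kept]

feed = [
    "Drone swarm tested over the Pacific",
    "Drone swarm tested over the Pacific Ocean",   # near-duplicate, dropped
    "New export controls on AI chips announced",
]
print(filter_repetitive(feed))
# ['Drone swarm tested over the Pacific', 'New export controls on AI chips announced']
```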
AI algorithms can collect and process data from many different sources to support decision-making in stressful situations. However, there is an ongoing debate about whether artificial intelligence is suited to complex ethical dilemmas. Humans rely not only on dry facts, but also on their own experience, intuition, and even prejudices. There is a danger that a machine trained to improve itself on its own data will absorb false and destructive ideas, as happened with Microsoft's Tay chatbot, which began insulting people on Twitter less than a day after its launch in 2016. Alongside all the benefits, the threat of a system "going crazy" remains acute. In February 2024, during testing of the new version of GPT-4 Turbo, the chatbot began producing strings of incoherent words. The creators of language models admit that their creations are a black box: possessing unimaginable potential, they carry unexpected destructive power. AIs analyze data, identify patterns, and then make predictions about how to respond to requests. Schizophasia, hallucinations, stroke: all the ways we would explain a person's loss of speech remain unexplored and incomprehensible when applied to AI. The only explanation we can offer is that the system somehow falls out of context and loses its effectiveness.
The Retooling Race
In 2023, the Pentagon's policy office released updated guidelines for the development and operation of autonomous weapons. The Autonomy in Weapon Systems directive (DoD Directive 3000.09) was originally issued in 2012 to set out guidelines and responsibilities for developing, testing, and using such systems. But a decade later, as AI exploded, the military reconsidered. The new policy does not prohibit particular autonomous capabilities; instead, it mandates that new systems undergo a comprehensive testing and review process. The update also calls for the creation of a weapons systems working group. Led by the undersecretary of defense for policy, the group will advise senior Defense Department leaders as they consider approving new systems.
According to the Brookings Institution, U.S. government spending on artificial intelligence has risen sharply since the policy was adopted, driven by increased military investment. The potential value of federal AI contracts has grown by nearly 1,200%, to $4.6 billion, amid intense technological competition between the United States and China. Both superpowers are building out industrial capacity to produce the semiconductor chips without which powerful AI models cannot be developed, and the United States has imposed export restrictions to keep advanced chips out of China's reach.
Rather than defining specific AI-based warfighting capabilities, U.S. strategy aims to strengthen organizational environments in which military personnel can continuously use data analytics to gain a sustainable decision-making advantage.
While the US military has significant experience with AI, most of it is confined to training simulations. Intelligent systems prepare soldiers to operate the combat systems used during operations, creating a "war game" environment for training. The simulation gives soldiers realistic missions and tasks so they can practice before applying their skills in real situations.
AI-powered language models can read source material and use it to create training materials, notes, and tests. AI can also assess students' current abilities and tailor training to their specific needs, explaining the material like a human teacher. By analyzing large amounts of intelligence data and records of previous combat operations, AI can build more comprehensive training programs, including detailed military simulations. However, the military understands that excluding human instructors from soldier training would be a critical mistake: the AI's accuracy, its "understanding" of requests, and the adequacy of the information it outputs must be constantly monitored.
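As an illustration of the kind of tooling this involves, here is a minimal sketch of generating a short quiz from source material with a general-purpose LLM API (OpenAI's chat completions endpoint). The model name, prompt wording, and make_quiz helper are assumptions made for the example, not any military training system.

```python
# Minimal sketch: turning a source document into training questions with an LLM.
# The prompt and model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def make_quiz(source_text: str, n_questions: int = 5) -> str:
    """Ask the model to produce short-answer questions with an answer key."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are an instructor writing training materials."},
            {"role": "user",
             "content": (f"Read the following material and write {n_questions} "
                         f"short-answer questions with an answer key:\n\n"
                         f"{source_text}")},
        ],
    )
    return response.choices[0].message.content

print(make_quiz("Field manual excerpt on radio discipline..."))
```

In practice this is exactly the step a human instructor would review, checking that the generated questions actually match the source and that the model has not "understood" the request incorrectly.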
In contrast to the US, which has only recently announced its intention to equip its armed forces with the latest intelligent technology, China has for several years been running a national campaign to channel resources into AI development in both the public and private sectors. The People's Liberation Army (PLA) is using AI to develop unmanned intelligent combat systems, improve battlefield situational awareness, conduct multi-domain operations, and advance training programs. As of 2018, there were 4,040 AI companies in China.
China's largest internet giants, Baidu, Alibaba, and Tencent, have collectively invested $12.8 billion in the AI industry, surpassing the combined total of the top four US firms.
Thanks to the national civil-military fusion strategy, the PLA has been able to quickly exploit the latest advances in civilian technology. In October 2019, the WZ-8, a high-speed, long-range reconnaissance drone, and the GJ-11 Sharp Sword, a large stealth strike drone, were unveiled during the National Day military parade, alongside the HSU-001 autonomous unmanned underwater vehicle. The PLA is also actively exploring AI's potential to identify weaknesses in enemy defenses and to improve day-to-day planning through enhanced situational awareness.
Russia's experience with intelligent systems dates back to the 1970s, when the Soviet Union began developing the P-700 Granit missile, which can determine the composition and size of a group target, identify the type of naval formation, and single out the main objects within it. In 2019, the AI-controlled Uran-9 robotic combat complex entered service. It moves autonomously along a given route, conducts surveillance, and searches for and engages targets, but the decision to open fire remains with the operator. Another promising development is the S-70 Okhotnik UAV, capable of performing missions both independently and under the control of a manned fighter. A private company is currently testing software that can turn any drone into a "smart swarm." In February, President Vladimir Putin announced serial production of promising models and the introduction of artificial intelligence technologies in the military sphere.
On the brink of chaos
Since building fully intelligent military systems ultimately depends on semiconductor manufacturing capacity, no country in the world can yet claim to have fielded high-end AI in real combat conditions. However, the first thing researchers are focusing on is artificial intelligence's ability to improve the efficiency of research, development, and production. Its connection with chemistry and biochemistry is particularly alarming. The ease and speed with which new pathways to existing toxic compounds can be identified worries the Organisation for the Prohibition of Chemical Weapons. The same applies to biotechnology: instead of conducting lengthy and expensive laboratory experiments, AI can "predict" the biological effects of known and even unknown agents. In 2022, Filippa Lentzos of the Department of War Studies at King's College London described an experiment in which artificial intelligence "generated forty thousand molecules" in less than six hours, each predicted to be more toxic than commonly known chemical warfare agents.
Nuclear weapons development could also become easier, since AI could potentially provide detailed instructions. The only thing separating us from that prospect is an instruction from AI developers that the models must not answer such requests. Seen this way, the real problem is no longer so much the development of weapons of mass destruction with the help of AI as the creation of AI that is not bound by ethical standards.
The data on which AI operates can also be turned to military ends: unreliable information or manipulated facts create an additional threat. Moreover, a military AI will be optimized for military goals rather than for avoiding direct conflict, which calls into question the objectivity of the decisions the machine makes. The only way out currently seen in scientific circles is to limit the introduction of AI into weapons-of-mass-destruction systems. Yet the questions remain open: can such a decision be enforced, how can uncontrolled development of AI be prevented, and how blurred will the boundaries of human ethics become in a war of machines?