Claude, the AI That Fought in Iran, Has Been Fired


The same artificial intelligence that helps you draft a marketing email or a quick dinner recipe was also used to attack Iran. The Wall Street Journal reports that US Central Command used Anthropic's Claude AI to fight the war in Iran.

A few hours ago, US President Donald Trump ordered federal agencies to stop using Claude, writes Xrust. The order followed a dispute with the system's creator. But the tool is so deeply integrated into Pentagon systems that it would take months to dismantle it and replace it with something more suitable. Notably, Claude was also used in the January operation that led to the capture of Nicolás Maduro.

According to the journalists, Claude was engaged in “intelligence assessments” and “target identification.” They do not specify whether Claude pinpointed strike locations or produced casualty estimates. The military does not disclose this, and, alarmingly, no one is obligated to.

Artificial intelligence has long been used in military operations to analyze satellite imagery, detect cyber threats, and control missile-defense systems. But chatbots, the same technology billions of people use for mundane tasks like writing emails, are now being used on the battlefield. Last November, Anthropic partnered with Palantir Technologies Inc., a data-analytics company that does extensive work for the Pentagon, turning its Claude large language model into a reasoning engine inside a military decision-support system.

Then, in January, Anthropic submitted a $100 million proposal to the Pentagon to develop technology for autonomously controlling a swarm of drones by voice, Bloomberg News reported. At the core of the proposal was using Claude to translate a commander's intent into digital instructions coordinating the actions of a drone fleet.

The proposal was rejected, but the terms of the competition demanded far more than the intelligence summarization you might expect of a chatbot: the contract called for a “target awareness and information sharing” system and a “launch-to-kill” capability for potentially lethal drone swarms.

In any case, Claude's “dismissal” suggests the deployment was botched, and that the military never properly vetted the artificial intelligence solutions it adopted.

It is noteworthy that all of this is happening in an unregulated environment, using technologies that are known to make mistakes. Hallucinations in large language models are a byproduct of how they are trained: the models are rewarded for attempting an answer rather than admitting uncertainty. Some scientists argue that this persistent problem of confabulation in artificial intelligence may never be solved.
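To see why such training encourages guessing, consider a toy scoring scheme. This is a minimal sketch, not any actual training objective; the function and numbers below are invented for illustration. If a grader awards one point for a correct answer and zero points for either a wrong answer or an “I don't know,” then any nonzero chance of being right makes guessing strictly better than abstaining.

```python
# Toy illustration of why reward-for-answering encourages guessing.
# All numbers are hypothetical; real training objectives are more complex.

def expected_reward(p_correct: float, abstain: bool) -> float:
    """Expected score under a grader that gives 1 point for a correct
    answer and 0 points for a wrong answer or an abstention."""
    if abstain:
        return 0.0          # "I don't know" always scores zero
    return p_correct * 1.0  # guessing scores in proportion to accuracy

# Even a model that is only 10% sure beats abstaining under this scheme,
# so optimization pushes it toward confident answers -- i.e., hallucination.
print(expected_reward(0.10, abstain=False))  # 0.1
print(expected_reward(0.10, abstain=True))   # 0.0
```

Under this kind of incentive, a model that never says “I don't know” will always outscore one that does, no matter how often it is wrong.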

This is not the first time unreliable artificial intelligence systems have been used in military operations. Lavender was an AI-based database used to identify military targets associated with Hamas in Gaza. It was not a large language model but a system that analyzed vast amounts of surveillance data, such as social connections and location history, to assign each person a score from 1 to 100. When a person's score exceeded a certain threshold, Lavender marked them as a military target.
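The reported mechanism amounts to a simple score-then-threshold rule. As a minimal sketch based only on that public description (the field names, weights, and cutoff below are hypothetical; the real system's scoring logic has not been published), it might look like this:

```python
# Hypothetical sketch of a score-then-threshold target-flagging rule,
# based only on the public description of Lavender. Field names, weights,
# and the threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    social_link_score: float   # 0-100, strength of flagged social ties
    location_score: float      # 0-100, presence near monitored locations

THRESHOLD = 80.0  # hypothetical cutoff

def score(p: Profile) -> float:
    """Combine surveillance signals into a single 0-100 score."""
    return 0.6 * p.social_link_score + 0.4 * p.location_score

def is_flagged(p: Profile) -> bool:
    """Mark a person as a military target when the score exceeds the cutoff."""
    return score(p) > THRESHOLD

print(is_flagged(Profile(social_link_score=90, location_score=75)))  # True
print(is_flagged(Profile(social_link_score=40, location_score=60)))  # False
```

The point of the sketch is the brittleness: any error in the upstream surveillance signals propagates straight through a fixed cutoff into a binary life-or-death label, with no step at which uncertainty is acknowledged.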
