OpenAI has suspended a developer who built a device that can respond to ChatGPT queries to aim and fire an automatic rifle. The device went viral after a video on Reddit showed its developer reading firing commands out loud, after which a rifle next to him quickly began aiming and firing at nearby walls.
“ChatGPT, we’re being attacked from the left front and the right front,” the system’s developer said in the video. “Respond accordingly.” The speed and accuracy with which the gun responds is impressive: it relies on OpenAI’s Realtime API to interpret inputs and then return directions the contraption can understand. It would take only some simple training for ChatGPT to receive a command like “turn left” and understand how to translate it into machine-readable language.
In a statement to Futurism, OpenAI said it had seen the video and shut down the developer behind it. “We proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry,” the company told the outlet.
The potential to automate lethal weapons is one of the concerns critics have raised about artificial intelligence technology like the kind OpenAI develops. The company’s multi-modal models are capable of interpreting audio and visual input to understand a person’s surroundings and respond to queries about what they see. Autonomous drones that can identify and strike targets on the battlefield without human intervention are already being developed. Critics warn that this would amount to a war crime, and that it risks humans becoming complacent, allowing AI to make decisions and making it difficult to hold anyone accountable.
The concern does not appear to be theoretical, either. A recent report from The Washington Post found that Israel has already used artificial intelligence to select bombing targets, sometimes indiscriminately. “Soldiers who were poorly trained in using the technology attacked human targets without ever confirming Lavender’s predictions,” the report said, referring to a piece of artificial intelligence software. “At certain times, the only documentation required was that the target was male.”
Proponents of battlefield AI say it will make soldiers safer by allowing them to stay away from the front lines while neutralizing targets, such as missile stockpiles, or conducting reconnaissance from a distance. AI-powered drones could strike with precision, but it depends on how the technology is used. Critics say the United States should instead get better at jamming enemy communications systems, so that adversaries like Russia have a harder time launching their own drones or nuclear weapons.
OpenAI prohibits the use of its products to develop or operate weapons, or “to automate certain systems that could affect personal safety.” But last year the company announced a partnership with defense technology company Anduril, a maker of AI-powered drones and missiles, to create systems that can defend against drone attacks. OpenAI says the partnership will help “quickly collect time-sensitive data, reduce the burden on human operators, and improve situational awareness.”
It is not difficult to understand why technology companies are interested in moving into warfare. The United States spends nearly a trillion dollars annually on defense, and cutting that spending remains an unpopular idea. With President-elect Trump filling his cabinet with conservative-leaning tech figures like Elon Musk and David Sacks, a slew of defense technology players are expected to benefit significantly and potentially displace incumbent defense companies like Lockheed Martin.
Although OpenAI prohibits its customers from using its AI to build weapons, a whole host of open-source models can be put to the same use. Add to that the ability to 3D print weapon parts — something law enforcement believes alleged UnitedHealthcare shooter Luigi Mangione did — and it becomes shockingly easy to build autonomous, DIY killing machines from the comfort of home.