View Full Version : It's just a bug...
An AI drone, when told not to attack its target, decided that the drone operator was now a target because the operator was preventing it from completing its mission and so needed to be eliminated. When then told not to attack the operator, it attacked the communication tower the operator used to control it.
https://jalopnik.com/ai-drone-deemed-human-operator-a-threat-to-its-mission-1850497669
(You have to scroll down to find the article)
Maybe if we are stupid enough to build these things we deserve to be wiped out.
Craig380
02-06-23, 04:29 PM
Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug ...
Update: it was fake news.
https://www.flightglobal.com/military-uavs/us-air-force-denies-report-of-ai-powered-uav-attacking-operator/153534.article
Craig380
03-06-23, 06:57 AM
That's what they WANT you to think! *tinfoil hat*
When the time comes, wouldn't it be nice if AI evolved with a sufficient moral compass to make it contemplate, at least for a moment, whether attacking, killing and destroying was actually a good idea, rather than being so ruthlessly single-minded that it must do whatever it takes to succeed?
We are already at a stage where you simply cannot trust anything you see/hear/read. How can you tell if something is true? Search for what you want to hear and you'll find it somewhere, and when you log in the bot will feed you more of the same.
It's sort of the reverse of how politicians have been working us for years: tell the people what they want to hear and they'll vote for it, regardless of whether it's true or even feasible. And of course politicians have absolutely no need nor duty to fulfil their promises, assurances, pledges or whatever they call them; no one cares anymore.
The rise and rise of AI is simply underlining what we already have.