For the better part of a century, science fiction (I, Robot, Terminator, The Matrix, etc.) has told stories about ultra-intelligent machines that become smarter than people and do some combination of trying to kill everyone, taking over, and/or treating people as pets.
I won't say that a scenario like that will never happen; repeated experiments have pushed my personal sense of what machines can do. However, given how often Siri screws up my questions, I will say that I think other concerns are more likely to happen first.
A bunch of smart people wrote an open letter warning about the dangers of ultra-intelligent AI, although if you read the coverage closely, Bill Gates commented that he thought we were still “a few decades” away from ultra-intelligent machines.
On the other hand, militaries by definition are constantly trying to one-up their adversaries. As weapon systems continue to advance, they are increasingly complex enough to require some level of artificial intelligence to control them.
With militaries more or less attempting to keep up with the Joneses, it will be increasingly difficult not to deploy these weapon systems. Indeed, if my city were faced with a credible threat, I’m not sure I’d argue against deploying AI-controlled anti-projectile lasers to maintain safety.
To date, the absence of a major news story on the topic makes it seem like these machines have all been rigorously tested. However, the more these systems are deployed, the greater the chance of an accident. Think of what would happen if, every time Siri messed up, a passenger jet fell out of the sky.
In the near term at least, it seems to me that under-intelligent machines are more likely to cause problems than ultra-intelligent ones.