The world's most famous physicist is warning about the risks posed by machine superintelligence.
Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.
So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right?
Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.
We are facing potentially the best or worst thing ever to happen to humanity. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.