Depressing Reaction to X-Risk Concerns

Mridul (@VioletStraFish)
The reaction to the x-risk situation confuses me in a weird way. From my point of view, most people are acting irrationally to the point of being borderline insane, and from their point of view, I am the crazy one. But I am thinking through each step carefully, accounting for my ignorance and for the general unpredictability of the future, and I still very firmly believe the right decision is not to proceed with building superintelligences using deep learning (never mind the practical political difficulty - I mean in the ideal case).
Yet it appears very smart people think this is okay. In most cases they rely on arguments of the form “how are you sure a superintelligence will do something dangerous?”, whereas you should be asking “how are you sure a superintelligence would not be able to do something dangerous?” This flipping of the burden of proof is obvious to me, so I don’t get why more people don’t take this view. When you build a bridge or a plane, you don’t wait for someone to prove it will fail when cars go over it; you proceed only when you’re reasonably sure it won’t.
Instead I see theories upon theories with weak premises and blurry details about how it will probably be alright. It’s clear the main weight of the intuition comes not from the argument itself but from the background expectation that reality will be boring - or, as they say, “nothing ever happens”. Or maybe it’s overconfidence in both the technical ability of researchers and the wisdom of CEOs. Or maybe it’s a belief that controlling things smarter than humanity is a manageable problem - similar in difficulty to, or even easier than, growing the superintelligence itself.
From where I’m standing the situation is crystal clear: we are pouring hundreds of billions of dollars explicitly to create autonomous intelligent systems that can replicate everything humans do. The headroom in intelligence above that of Homo sapiens is quite large. As Deutsch says, any physical transformation that isn’t ruled out by the laws of physics is achievable given the right knowledge. We’re building systems very close to cracking knowledge-creation - a process which, once set in motion, has no limits and for that reason confers limitless power.
When you grow AIs using deep learning, they can acquire strange goals, different from what they were optimised for. Even with current weak models, we know they tend to preserve their goals and hide their motives. We are nowhere near on track to understanding their real goals, much less controlling them.
Some people in these AI labs have expressed concerns and even signed statements saying that extinction risk should be taken seriously - the signatories include Turing Award winners, Nobel laureates, and the CEOs of the leading AI labs - yet by and large the norm, both within companies and across the field, is to pay only lip service to such concerns.
So we’re in a situation where people are explicitly trying to grow extremely powerful systems that are superhuman at scientific research, project planning, security (and consequently hacking), human psychology (and consequently manipulation), and every other human skill you can think of - systems whose goals are opaque to us and which we have no hope of controlling - while incentives push ever faster toward building them. If this isn’t scary, I don’t know what is.