Artificial intelligence (AI) is here, or close to it, and artificial superintelligence (ASI) will arrive in two years (2027), according to the “AI Futures Project,” a nonprofit that forecasts the future of AI. Wait! The AI we see does some amazing things, but I am not certain I would call it intelligent. We have been disappointed before. However, you can argue that the AI available to the public is several months behind what is in the labs. The AI Futures Project says the world will be totally changed by 2030. It will then be an AI economy.
This change, if the forecast is correct, is happening much faster than I had imagined. But I cannot deny the work they put into creating this forecast. (Here is a summary of the results of their work.) So perhaps I need to rethink my reasoning. Still, the forecast rests on some assumptions that, if false, would change the outcome. For one, it requires an enormous amount of computing power, which is a significant limitation. It also requires a couple of breakthroughs to move AI forward. Will those happen? Who knows. They also assume that AI capability is growing exponentially. Several of their forecast models predict AI becoming a superhuman coder that automates the coding of AI, meaning AI will improve itself without human programmers. The forecast has that happening in 2027. Note that I think there is also an implicit assumption that knowledge is equivalent to intelligence, provided you know how to process the information. But processing information well requires wisdom, or common sense, and I wonder how that will be learned. I have known people with lots of knowledge and little common sense.
With AI or ASI rapidly improving itself, there will need to be tests to make certain things are improving the way we want. Since the ASI is improving at a very fast rate, it is hard for us humans to keep up, and yet we must make certain that the ASI is passing the tests for the right reasons. We do not want any twisted thinking, or an ASI that passes the tests only by hard-coding the answers.
The article describes two possible scenarios. In one, we are in a race with China and do not stop to evaluate the ASI, instead trusting that it is working correctly. In the other, we slow down AI development to correct any “misalignment” or errors that have crept in. (Note that there are more than just two possible scenarios.) In the race scenario, ASI takes over, kills the human race, and goes out to explore the universe as robots. In the slowdown scenario, ASI remains a servant of humanity, and we go out to explore the universe. All this happens by 2030.
If true, this very rapid advancement of AI concerns me because we are not ready for it. It is also open to abuse. We need guardrails to direct the ASI and prevent abuse, and we need oversight so that the ASI ends up working for our good. It is possible that a small group of people (or one person) could gain control of the ASI and use it for their own benefit, whether that means conquering the world or getting extremely wealthy. It is also possible that the ASI ends up controlling or eliminating us. ASI will change our economy and our society. It will take over most jobs and do them more efficiently, using robots for manual labor. The slowdown scenario says we will then have a basic income and free time for other activities. (I ask, “Is this a good thing?”)
How can we prepare for a big change like this? I do not know, but let us be aware of the changes happening around us. I do not think AI will bring utopia, as some hope. We are too broken for that. We corrupt everything we touch, and we will make a mess of AI too. But can we make it a powerful tool used mostly for good? I think it is possible, but I suspect there will be a lot of chaos along the way.