Practical AI

Wier Gear Photo by Nic Kilby

In 2025, Artificial Intelligence (AI) became more practical.  Practical AI tools have appeared, and the hype about AI being “intelligent” has lessened.  I believe that is because the limitations of AI became apparent.  AI is not intelligent.  It is a sophisticated statistical pattern-matching model with an enormous database, which allows it to mimic human intelligence, and that is why it does not always get things right.  What these AI models can do is very impressive, but the model simply gives the most probable answer from its vast database.  It does not reason the way humans do.

Apple researchers showed that AI models are unable to solve logic puzzles like the Tower of Hanoi once the puzzles become complex enough. Their conclusion was that the models were pattern matching, not reasoning.  Some disagree with that conclusion, but the work does show that current AI models have significant reasoning issues.
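
For reference, the Tower of Hanoi has a simple, fully mechanical solution.  Here is a minimal sketch in Python (my own illustration, not code from the Apple study) of the classic recursive algorithm.  Solving n disks takes 2^n - 1 moves, each forced by the same simple rule, which is why longer instances make a good test of sustained multi-step reasoning.

```python
# Classic recursive Tower of Hanoi: move n disks from source to target,
# using spare as the extra peg. Solving n disks takes 2**n - 1 moves.
def hanoi(n, source, target, spare):
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # clear n-1 disks onto the spare peg
    print(f"Move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # move the n-1 disks onto the target

hanoi(3, "A", "C", "B")  # prints the 7 moves for three disks
```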

People realize that there needs to be “theory of mind” reasoning, and the latest AI models do contain some small “theory of mind” reasoning circuits, but I feel pattern matching needs to take a back seat to a much more developed “theory of mind” reasoning that makes the model a model of the mind.  Microsoft’s CEO, Satya Nadella, said, ‘We need to … develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.’  In CEO speak, I think he is talking about making AI tools that interact in a more human-like manner.  Note the emphasis is on using the AI tools, not on a future superintelligence.  I think that is because we have a long way to go before we reach real superintelligent AI and Artificial General Intelligence (AGI).  Actually, I do not think we can do it.

Besides developing the right model, AI companies face a few other obstacles.  First, they need large data centers with lots of computers to do the work.  Computer prices, especially RAM prices, have risen sharply as these companies build newer and bigger data centers to train their AI models.  Those data centers also require tremendous amounts of electricity, and newer and better models will require even more power.  That raises electricity rates, and often there simply is not enough power available.  Another obstacle is that training these models properly requires an enormous amount of data, and much of that data is copyrighted, so they cannot legally just use it.  These are significant obstacles to navigate.

I see the ultimate goal of AI as an attempt to create our own creature in our own image, just as God created us in his image. Some see superintelligent AGI as creating a god that will fix our problems.  However, because of our sin, our image is no longer as pure as God meant it to be. Our image is broken, and I cannot see AI becoming intelligent without major flaws.  And I think we are a long way from anything close to being intelligent and able to reason.  Again, I do not think we can actually create an intelligent AI.  Why? Because we are not God, and if we could make a superintelligent AGI in our own image, it would be flawed.  It is my hope that everyone will recognize that AI will always be flawed, because we are flawed.  Fortunately, in 2025 many people have moved from AI hype to creating practical AI tools.

This post was inspired by “From Prophet to Product: How AI Came Back Down to Earth in 2025”.

Super Intelligent AI?

Wier Gear Photo by Nic Kilby

Artificial Intelligence (AI) is here, or close to it, and artificial superintelligence (ASI) will arrive in two years (2027), according to the “AI Futures Project”, a nonprofit that forecasts the future of AI.  Wait!  The AI we see does some amazing things, but I am not certain I would call it intelligent.  We have been disappointed before.  However, you can argue that the AI we see in public is several months behind what is in the labs.  The AI Futures Project says the world will be totally changed by 2030.  It will then be an AI economy.

This change, if the forecast is correct, is happening much faster than I had imagined.  But I cannot deny the work they put into creating this forecast. (Here is a summary of the results of their work.)  So perhaps I need to rethink my reasoning.  Still, the forecast rests on assumptions that, if untrue, could change it.  For the forecast to work, one needs an enormous amount of computing power; that is a significant limitation.  The forecast also requires a couple of breakthroughs to move AI forward.  Will they happen?  Who knows.  They also assume that AI capability is growing exponentially.  Several of their forecast models have AI becoming a superhuman coder that automates the coding of AI, which means AI would improve itself without human coders; the forecast has that happening in 2027.  I think there is also an implicit assumption that knowledge is equivalent to intelligence if you know how to process the information.  That requires wisdom, or common sense, and I wonder how that will be learned.  I have known people with lots of knowledge and little common sense.
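
To show why the exponential-growth assumption carries so much weight, here is a toy calculation in Python (my own illustration; the 10%-per-month rate is an arbitrary assumption, not a number from the AI Futures Project).  Even a modest compounding rate dwarfs linear progress within a few years, and that compounding curve is what the forecast depends on.

```python
# Toy comparison of compounding (exponential) versus linear improvement.
# The 10%-per-month rate is an arbitrary assumption for illustration only.
capability_exp = 1.0
capability_lin = 1.0
monthly_rate = 0.10

for month in range(36):  # three years
    capability_exp *= 1 + monthly_rate   # compounding: gains build on gains
    capability_lin += monthly_rate       # linear: same fixed gain each month

print(f"Exponential after 3 years: {capability_exp:.1f}x")  # about 31x
print(f"Linear after 3 years:      {capability_lin:.1f}x")  # 4.6x
```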

With AI or ASI rapidly improving itself, there will need to be tests to make certain things are improving the way we want. Since the ASI would be improving at a very fast rate, it is hard for us humans to keep up, and yet we must make certain that the ASI is passing the tests for the right reasons.  We do not want twisted thinking, or an ASI that passes the tests by hard-coding the answers just to pass.

The article describes two possible scenarios: one where we are in a race with China and do not stop to evaluate the ASI, trusting instead that it is working correctly, and another where we slow down AI development to correct any “misalignment” or errors that have crept in. (There are, of course, more than two possible scenarios.) In the race scenario, the ASI takes over, kills the human race, and goes out to explore the universe as robots.  In the slowdown scenario, the ASI remains a servant of us humans, and we go out to explore the universe.  All this happens by 2030.

If true, this very rapid advancement of AI concerns me because we are not ready for it.  It is also open to abuse.  We need guardrails to direct the ASI and prevent abuse, and oversight so that the ASI ends up working correctly for our good.  It is possible that a small group of people (or one person) could gain control of the ASI and use it for their own benefit, whether that means conquering the world or getting extremely wealthy.  It is also possible that the ASI ends up controlling or eliminating us.  ASI will change our economy and our society.  It will take over most jobs and do them more efficiently, using robots for manual labor.  The slowdown scenario says we will then have a basic income and free time for other activities. (I ask, “Is this a good thing?”)

How can we prepare for a big change like this?  I do not know, but let us be aware of the changes happening around us.  I do not think it will bring the utopia some want.  We are too broken for that.  We corrupt everything we touch, and we will make a mess of AI too.  But can we make it a powerful tool that is used mostly for good?  I think it is possible, but I suspect there will be a lot of chaos on the way there.