Despite the monumental leaps in artificial intelligence (AI) we’ve witnessed in recent years, the prospect of Artificial General Intelligence (AGI)—machines possessing the ability to understand, learn, and perform any intellectual task just as a human can—remains a far-off goal. Yes, advancements have been made with tools like GPT-4, AlphaGo, and Gato, contributing to the foundations of AGI. However, these are at best early steps toward AGI, not fully formed AGI systems.
The journey towards AGI is riddled with significant challenges and roadblocks. Developing AGI has been compared to the mammoth task of understanding and replicating the complexity of the human brain, an effort that remains far from complete despite decades of research in neuroscience, psychology, and cognitive science.
And yet, the leap between GPT-3 and GPT-4 feels significant, and it happened in a very short timeframe. We’ve also seen other models achieve equally impressive results, including some open-source AI efforts. Still, as exciting as this pace is, AGI is a different beast, and one we are nowhere near solving.
One of the fundamental challenges we face is defining the scope of AGI. It’s crucial to establish the limits of what AGI machines can and cannot do to prevent negative consequences. Alongside this, we grapple with ensuring that these machines adhere to human ethics and morality, a nuanced and complex field in and of itself. Without this, AGI machines could pose real risks to society if they make decisions that do not align with human values, morals, or interests.
The issues extend beyond the technical and into the regulatory. Developing frameworks to govern the use of AGI, including data security standards, ethical behaviour guidelines, and regulations for developing AGI machines, is a challenge yet to be fully met. As AI systems become more intelligent and potentially more autonomous, the question of responsibility arises, particularly if an AGI system causes harm.
But the challenges aren’t just philosophical; they’re also computational. We need significant breakthroughs in machine learning, natural language processing, and computer vision to achieve AGI. We need new algorithms, techniques, and architectures that enable AGI to learn, reason, and adapt in a manner similar to human intelligence. Furthermore, AGI requires access to greater computing resources than are currently available: more processing power, more data storage and management, optimized computing architectures, and increased energy efficiency. Developing such resources is itself resource-intensive and could have significant environmental impacts.
One current roadblock AI faces is cost. OpenAI reportedly pays upwards of $700,000 USD daily to run ChatGPT. Can you imagine an even more powerful AI, requiring more processing power and resources? At current resource prices, you could theoretically be looking at tens of millions of dollars a day to run an AGI system.
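To make that back-of-envelope claim concrete, here is a minimal sketch of the arithmetic. The $700,000/day figure is the one reported for ChatGPT; the compute multiplier is a pure assumption for illustration, as is the simplifying premise that cost scales linearly with compute.

```python
# Back-of-envelope estimate of AGI running costs, scaled from the
# reported ChatGPT figure. The multiplier is a hypothetical assumption.

CHATGPT_DAILY_COST_USD = 700_000  # reported daily cost to run ChatGPT

def agi_daily_cost(compute_multiplier: float) -> float:
    """Estimated daily cost if an AGI needed `compute_multiplier` times
    ChatGPT's compute, assuming cost scales linearly with compute."""
    return CHATGPT_DAILY_COST_USD * compute_multiplier

# A system needing 30x ChatGPT's compute lands in the tens of millions:
print(f"${agi_daily_cost(30):,.0f} per day")  # $21,000,000 per day
```

Even a modest multiplier puts the daily bill in the tens of millions, which is why cost, not just capability, is a genuine roadblock.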
Despite these hurdles, AGI holds immense potential and could surpass human intelligence and capabilities. It could lead to advancements in machine learning, neural networks, AI, natural language processing, and more. But we need to tread carefully, ensuring responsible development to benefit society and advance human progress.
We will eventually see some form of AGI, and as good as GPT-4 and other models are at making you believe it’s just around the corner, other problems must be solved before we get to that point. Some of these problems might be solved with the aid of AI itself.
In the meantime, enjoy the other possibilities we’ll witness due to AI, particularly scientific and medical breakthroughs. We are already seeing new drug combinations and potential cancer treatments being discovered thanks to AI. Another exciting area is new antibiotics research, where AI can once again reduce the time and failure rate of such research.
One of the problems is calling the current systems AI at all; they are certainly not artificial intelligence in the true meaning of the words. Rather, they are very powerful databases with an associative, pattern-matching ability so complex and dynamic that it gives the appearance of intelligence. But they still suffer from the classic garbage-in, garbage-out syndrome of databases and computer systems.
Intelligence in the animal kingdom is now being considered as something much more than just the brain; it involves the other biological systems that make up the whole creature, some operating at the conscious level and some at the unconscious level. For example, when you stub your toe, it’s not just a nervous system response (mainly responsible for the cursing part!)—a whole complex of systems, including chemical ones, kicks into action to produce the full response.
The other problem the current raft of AI systems suffers from is a lack of deduction; a nice article in Scientific American, ‘You Can Probably Beat ChatGPT at These Math Brainteasers. Here’s Why’ (2025/05/25), explores this.
So yes, we still have some way to go before a true AGI, but it’s still likely to happen. Whether it would be considered sentient is a different matter, especially if it lacks those other biological factors that make for life (as we know it!).