Despite the monumental leaps in artificial intelligence (AI) we’ve witnessed in recent years, the prospect of Artificial General Intelligence (AGI)—machines possessing the ability to understand, learn, and perform any intellectual task just as a human can—remains a far-off goal. Yes, tools like GPT-4, AlphaGo, and Gato represent real advances that contribute to the foundations of AGI. However, these are early steps toward AGI, not fully formed AGI systems.
The journey towards AGI is riddled with significant challenges and roadblocks. Developing AGI has been compared to the mammoth task of understanding and replicating the complexity of the human brain, an effort that remains far from complete despite decades of research in neuroscience, psychology, and cognitive science.
And yet, the leap from GPT-3 to GPT-4 feels significant, and it happened in a very short timeframe. We’ve also seen other models achieve equally impressive results, including some open-source AI efforts. Still, as exciting as this pace is, AGI is a different beast, and we are nowhere near solving it.
One of the fundamental challenges we face is defining the scope of AGI. It’s crucial to establish the limits of what AGI machines can and cannot do to prevent negative consequences. Alongside this, we grapple with ensuring that these machines adhere to human ethics and morality, a nuanced and complex field in and of itself. Without this, AGI machines could pose real risks to society if they make decisions that do not align with human values, morals, or interests.
The issues extend beyond the technical and into the regulatory. Developing frameworks to govern the use of AGI, including data security standards, ethical behavior guidelines, and regulations for developing AGI machines, is a challenge yet to be fully met. As AI systems become more intelligent and potentially more autonomous, the question of responsibility arises, particularly if an AGI system causes harm.
But the challenges aren’t just philosophical; they’re also computational. We need significant breakthroughs in machine learning, natural language processing, and computer vision to achieve AGI. We need new algorithms, techniques, and architectures that enable AGI to learn, reason, and adapt similarly to human intelligence. Furthermore, AGI requires computing resources beyond what is currently available: more processing power, greater data storage and management, optimized computing architectures, and improved energy efficiency. Developing such resources is itself resource-intensive and could have significant environmental impacts.
Another immediate roadblock AI faces is cost. OpenAI reportedly spends upwards of $700,000 USD daily to run ChatGPT. Now imagine an AI even more powerful, requiring still more processing power and resources: at current resource prices, you could theoretically be looking at tens of millions of dollars a day to run an AGI system.
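As a rough illustration, the "tens of millions a day" figure follows from simple scaling of the reported ChatGPT cost. The compute multiplier below is purely a hypothetical assumption for illustration, not a measured figure:

```python
# Back-of-envelope estimate: scale the reported daily cost of running
# ChatGPT by a hypothetical compute multiplier for an AGI workload.
# The 30x multiplier is an illustrative guess, not a real measurement.

CHATGPT_DAILY_COST_USD = 700_000  # reported daily cost to run ChatGPT


def agi_daily_cost(compute_multiplier: float,
                   base_cost: float = CHATGPT_DAILY_COST_USD) -> float:
    """Estimated daily cost if an AGI needed `compute_multiplier` times
    the compute of today's ChatGPT, at current resource prices."""
    return base_cost * compute_multiplier


# If an AGI needed ~30x ChatGPT's compute:
print(f"${agi_daily_cost(30):,.0f} per day")  # → $21,000,000 per day
```

Even a modest multiplier lands squarely in the tens-of-millions range, which is why cost is treated here as a first-order obstacle rather than an engineering footnote.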
Despite these hurdles, AGI holds immense potential and could eventually surpass human intelligence and capabilities, driving further advances in machine learning, neural networks, natural language processing, and beyond. But we need to tread carefully, ensuring responsible development so that it benefits society and advances human progress.
We will eventually see some form of AGI, and as good as GPT-4 and other models are at making you believe it’s just around the corner, other problems must be solved before we get to that point. Some of these problems might be solved with the aid of AI itself.
In the meantime, enjoy the other possibilities we’ll witness due to AI, particularly scientific and medical breakthroughs. We are already seeing new drug combinations and potential cancer treatments being discovered thanks to AI. Another exciting area is new antibiotics research, where AI can once again reduce the time and failure rate of such research.