With the arrival of AGI in 2025, Sam Altman argues, machines will be able to “think like humans.”

Society has barely adapted to artificial intelligence, but OpenAI CEO Sam Altman says things are about to step up a notch with the arrival of Artificial General Intelligence (AGI) as early as next year.

AGI is a form of AI that is as capable as, if not more capable than, humans in nearly every area of intelligence. It is the “Holy Grail” for every major AI lab, and many predicted it would take a decade or more to reach.

In an interview with Y Combinator, Altman claimed that AGI could be achieved in 2025, declaring that it was now simply a matter of engineering. He stated that things are moving faster than expected and that the path to AGI is “basically clear.”

Not everyone agrees, and the definition of AGI remains contested. Altman also discussed the path to artificial superintelligence (ASI), a level at which AI could help unlock the secrets of the universe, saying that even ASI is only “thousands of days away.”

There is no strict definition of artificial general intelligence. Type “what is AGI” into Google and you will get an overview of AI in general. The working definition above, an AI as capable as humans in all areas, is common, but it is not the only approach.

By some definitions, an AGI must go beyond mere “knowledge” to learn, adapt, and perform tasks much as human intelligence does. To do so, it must move beyond its training data and create outputs that are not simply derived from human input.

A new benchmark, FrontierMath, found that current models hit a wall when it comes to reasoning: GPT-4o and Gemini 1.5 Pro each solved less than 2% of its problems.

In other words, if the ability to move beyond training data is a criterion for AGI, current models are still a long way off. Nevertheless, OpenAI says the full version of o1 reasons considerably better than the preview version, and there are rumors that the next-generation Gemini model will also perform better on math problems.

The “big” versions of Google's and Anthropic's flagship models have yet to appear. Dario Amodei, CEO of Anthropic, recently confirmed that Claude 3.5 Opus is “still coming,” and he predicts AGI will be reached by 2026 or 2027.

OpenAI has a vested interest in declaring that it has reached AGI: once AGI is achieved, OpenAI's current contract with Microsoft ends, and Microsoft would be forced to negotiate a new deal, potentially paying far more, to keep using OpenAI's models in its Copilot products. The New York Times has reported on the “frayed” relationship between the two companies.

The AI lab defines AGI as “AI systems that are generally smarter than humans,” and it lays out a five-level scale of AI capability, with AGI sitting at Level 5.

Level 1 is the chatbot: the simple conversational text generators we have been using for the past two years. Level 2 is the reasoner, and such systems are now emerging through models like OpenAI's o1.

Arriving almost simultaneously is Level 3, “agents” that can carry out tasks on their own; Google's rumored Jarvis and Claude with Computer Use are very early examples of agent-like systems.

The last two levels are a big step up, but Altman says models like o1 will help build the next generation. Level 4 is the innovator, which can help invent and contribute new ideas that humans have not yet produced. This is where the FrontierMath benchmark comes in.

Finally, according to OpenAI, AGI is reached when an AI model can do the work of an entire organization: the point at which a model is smart enough to reason, perform tasks, generate new ideas, and execute them on its own.

In reality, AGI will develop gradually. It will not be a bolt from the sky that changes everything in one shot, but will be slowly improved over time, much like the advent of generative AI, until it becomes embedded in and a part of everything we do.

Let's just hope that the people building it are more aware of the potential impact than Miles Dyson was when his work gave rise to Skynet in the world of The Terminator.
