Last month, AI founders and investors told TechCrunch that we're now in the 'second era of scaling laws,' noting that established methods of improving AI models were showing diminishing returns. One promising new method they suggested could sustain those gains was 'test-time scaling,' which gives a model more computing power during the inference phase, when it's generating answers. It seems to be what's behind the performance of OpenAI's o3 model, but it comes with drawbacks of its own.

Much of the AI world took the announcement of OpenAI's o3 model as proof that AI scaling progress has not 'hit a wall.' The o3 model does well on benchmarks, significantly outscoring all other models on a test of general ability called ARC-AGI and scoring 25% on a difficult math test on which no other AI model scored more than 2%.

Of course, we at TechCrunch are taking all this with a grain of salt until we can test o3 for ourselves (very few have tried it so far). But even before o3's release, the AI world is already convinced that something big has shifted. The co-creator of OpenAI's o-series of...