OpenAI today announced an improved version of its most capable artificial intelligence model to date, one that takes even more time to deliberate over questions, just a day after Google announced its first model of this type. OpenAI's new model, called o3, replaces o1, which the company introduced in September. Like o1, the new model spends time ruminating over a problem in order to deliver better answers to questions that require step-by-step logical reasoning. (OpenAI chose to skip the 'o2' moniker because it's already the name of a mobile carrier in the UK.) 'We view this as the beginning of the next phase of AI,' said OpenAI CEO Sam Altman on a livestream Friday, 'where you can use these models to do increasingly complex tasks that require a lot of reasoning.' The o3 model scores much higher on several measures than its predecessor, OpenAI says, including ones that measure complex coding-related skills and advanced math and science competency. It is three times better than o1 at answering questions posed by ARC-AGI, a benchmark designed to test an AI model's ability to reason over extremely difficult mathematical and logic problems it is encountering for the first time....
Conneau spends a good chunk of time thinking about how to avoid the dystopia shown in that movie, he told TechCrunch in an interview. 'Her' is a science fiction film about a world where people develop intimate relationships with AI systems instead of other humans. 'The movie is a dystopia, right? It's not a future we want,' said Conneau. 'We want to bring that technology, which now exists and will exist, and we want to bring it for good. We want to do precisely the opposite of what the company in that movie does.' Building the tech, minus the dystopia that comes with it, seems like a contradiction. But Conneau intends to build it anyway, and he's convinced his new AI startup will help people 'feel the AGI' with their ears. On Monday, Conneau launched WaveForms AI, a new audio LLM company training its own foundation models. It aims to release AI audio products in 2025 that compete with offerings from OpenAI and Google. The startup raised $40 million in seed funding, it announced on Monday, in a round led by Andreessen Horowitz....
A well-known test for artificial general intelligence (AGI) is getting close to being solved, but the test's creators say this points to flaws in the test's design rather than a bona fide breakthrough in research. In 2019, Francois Chollet, a leading figure in the AI world, introduced the ARC-AGI benchmark, short for 'Abstraction and Reasoning Corpus for Artificial General Intelligence.' Designed to evaluate whether an AI system can efficiently acquire new skills outside the data it was trained on, ARC-AGI, Chollet claims, remains the only AI test to measure progress toward general intelligence (although others have been proposed). Until this year, the best-performing AI could solve just under a third of the tasks in ARC-AGI. Chollet blamed the industry's focus on large language models (LLMs), which he believes aren't capable of actual 'reasoning.' To Chollet's point, LLMs are statistical machines: trained on a lot of examples, they learn patterns in those examples to make predictions, like how 'to whom' in an email typically precedes 'it may concern.'...
OpenAI's latest artificial intelligence (AI) system dropped in September with a bold promise. The company behind the chatbot ChatGPT showcased o1, its latest suite of large language models (LLMs), as having a 'new level of AI capability'. OpenAI, which is based in San Francisco, California, claims that o1 works in a way that is closer to how a person thinks than do previous LLMs. The release poured fresh fuel on a debate that's been simmering for decades: just how long will it be until a machine is capable of the whole range of cognitive tasks that human brains can handle, including generalizing from one task to another, abstract reasoning, planning and choosing which aspects of the world to investigate and learn from? Such an 'artificial general intelligence', or AGI, could tackle thorny problems, including climate change, pandemics and cures for cancer, Alzheimer's and other diseases. But such huge power would also bring uncertainty, and pose risks to humanity. 'Bad things could happen because of either the misuse of AI or because we lose control of it,' says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada....