Within 24 hours of centibillionaire Elon Musk using his X platform to upend a congressional funding bill and push the federal government to the brink of a shutdown, three GOP lawmakers called for him to be named Speaker of the House. On Thursday, Senator Rand Paul, a Kentucky Republican, was the first to float the idea, in a post on Musk's own X platform. 'The Speaker of the House need not be a member of Congress,' Paul wrote. 'Nothing would disrupt the swamp more than electing Elon Musk.' Senator Mike Lee of Utah also endorsed Musk as Speaker, though he added that he would be equally happy with Vivek Ramaswamy taking up the role. 'Let them choose one of them, I don't care which one, to be their Speaker,' Lee told right-wing talk show host Benny Johnson. 'That would revolutionize everything, it would break up the firm.' Paul's suggestion was quickly picked up by another far-right elected official when Marjorie Taylor Greene, a representative from Georgia, wrote on X, 'I'd be open to supporting @elonmusk for Speaker of the House. DOGE can only truly be accomplished by reigning [sic] in Congress to enact real government efficiency. The establishment needs to be shattered just like it was yesterday. This could be the way.'...
The new model, called Gemini 2.0 Flash Thinking Experimental (a mouthful, to be sure), is available in AI Studio, Google's AI prototyping platform. A model card describes it as 'best for multimodal understanding, reasoning, and coding,' with the ability to 'reason over the most complex problems' in fields such as programming, math, and physics. In a post on X, Logan Kilpatrick, who leads product for AI Studio, called Gemini 2.0 Flash Thinking Experimental 'the first step in [Google's] reasoning journey.' Jeff Dean, chief scientist for Google DeepMind, Google's AI research division, said in his own post that Gemini 2.0 Flash Thinking Experimental is 'trained to use thoughts to strengthen its reasoning.' Built on Google's recently announced Gemini 2.0 Flash model, Gemini 2.0 Flash Thinking Experimental appears to be similar in design to OpenAI's o1 and other so-called reasoning models. Unlike most AI models, reasoning models effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up AI models....
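For readers who want to try the model themselves, the sketch below shows one way to call an experimental Gemini model from Python using the google-generativeai SDK. The model identifier is an assumption based on Google's naming pattern and may not match the exact ID AI Studio exposes; treat this as a minimal sketch, not an official example.

```python
# Minimal sketch: calling an experimental Gemini model through the
# google-generativeai Python SDK. The model ID below is an assumption
# based on Google's naming pattern; check AI Studio for the exact name.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key obtained from AI Studio

# Hypothetical ID for Gemini 2.0 Flash Thinking Experimental.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

# Reasoning models are pitched at multi-step problems in math,
# physics, and programming, so give it one.
response = model.generate_content(
    "A train leaves at 3:05 pm averaging 72 km/h. How far has it "
    "traveled by 4:50 pm? Show your reasoning step by step."
)
print(response.text)
```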
Crafting a unique and promising research hypothesis is a fundamental skill for any scientist. It can also be time-consuming: New PhD candidates might spend the first year of their program trying to decide exactly what to explore in their experiments. What if artificial intelligence could help? MIT researchers have created a way to autonomously generate and evaluate promising research hypotheses across fields, through human-AI collaboration. In a new paper, they describe how they used this framework to create evidence-driven hypotheses that align with unmet research needs in the field of biologically inspired materials. Published Wednesday in Advanced Materials, the study was co-authored by Alireza Ghafarollahi, a postdoc in the Laboratory for Atomistic and Molecular Mechanics (LAMM), and Markus Buehler, the Jerry McAfee Professor in Engineering in MIT's departments of Civil and Environmental Engineering and of Mechanical Engineering and director of LAMM. The framework, which the researchers call SciAgents, consists of multiple AI agents, each with specific capabilities and access to data, that leverage 'graph reasoning' methods, in which AI models use a knowledge graph that organizes and defines relationships between diverse scientific concepts. The multi-agent approach mimics the way biological systems organize themselves as groups of elementary building blocks. Buehler notes that this 'divide and conquer' principle is a prominent paradigm in biology at many levels, from materials to swarms of insects to civilizations, all examples where the total intelligence is much greater than the sum of individuals' abilities....
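To make the 'graph reasoning' idea concrete, here is a minimal sketch of the general technique: a small knowledge graph of scientific concepts, a path found between two distant concepts, and that path phrased as a hypothesis-generation prompt an agent could act on. The graph contents and helper names are hypothetical illustrations; this is not the SciAgents code or API.

```python
# Minimal sketch of graph reasoning for hypothesis generation, in the
# spirit of (but not identical to) SciAgents. Graph contents and
# function names are hypothetical.
import networkx as nx

# Toy knowledge graph: nodes are concepts, edges carry relationships.
kg = nx.Graph()
kg.add_edge("spider silk", "beta-sheet crystals", relation="gains strength from")
kg.add_edge("beta-sheet crystals", "hydrogen bonding", relation="is stabilized by")
kg.add_edge("hydrogen bonding", "self-healing polymers", relation="enables")

def path_to_prompt(graph: nx.Graph, start: str, end: str) -> str:
    """Walk the shortest path between two concepts and phrase the
    chain of relationships as a hypothesis-generation prompt."""
    path = nx.shortest_path(graph, start, end)
    links = [
        f"{a} {graph.edges[a, b]['relation']} {b}"
        for a, b in zip(path, path[1:])
    ]
    return (
        "Propose a research hypothesis connecting these findings: "
        + "; ".join(links) + "."
    )

# In a multi-agent setup, one agent might generate hypotheses from
# this prompt while another critiques them; here we just build it.
print(path_to_prompt(kg, "spider silk", "self-healing polymers"))
```

The design point the sketch illustrates is the one Buehler describes: each agent handles an elementary task (traversing the graph, drafting a hypothesis, critiquing it), and the combination is meant to produce more than any single model would alone.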
Last week, OpenAI launched Advanced Voice Mode with Vision, which feeds real-time video to ChatGPT, allowing the chatbot to 'see' beyond the confines of its app layer. The premise is that by giving ChatGPT greater contextual awareness, the bot can respond in a more natural and intuitive way. It's been nearly a year since OpenAI first demoed Advanced Voice Mode with Vision, which the company pitched as a step toward AI as depicted in the Spike Jonze movie 'Her.' The way OpenAI sold it, Advanced Voice Mode with Vision would grant ChatGPT superpowers: enabling the bot to solve sketched-out math problems, read emotions, and respond to affectionate letters. At one point, curious to see if Advanced Voice Mode with Vision could help ChatGPT offer fashion pointers, I enabled it and asked ChatGPT to rate an outfit of mine. It happily did so. But while the bot would give opinions on my jeans and olive-colored shirt combo, it consistently missed the brown jacket I was wearing. When OpenAI president Greg Brockman showed off Advanced Voice Mode with Vision on '60 Minutes' earlier this month, ChatGPT made a mistake on a geometry problem: when calculating the area of a triangle, it misidentified the triangle's height....