As we approach the holidays, it seems that every major AI lab has decided to release its latest models. Without a doubt, last week has to be one of the most impressive weeks in the history of generative AI in terms of model releases, with Microsoft, OpenAI, Google, Cohere, and others shipping new models.

Sora: One of OpenAI's most anticipated releases, Sora is a groundbreaking video generation model that brings text-to-video capabilities to the forefront. Sora allows users to create realistic videos from text prompts, extending, remixing, and blending existing assets or generating entirely new content. It features a new interface with a storyboard tool for precise input specification, alongside Featured and Recent feeds showcasing community creations. OpenAI acknowledges the limitations of this early version, particularly in generating realistic physics and handling complex actions over extended durations. The company emphasizes its commitment to responsible deployment, highlighting efforts to ensure transparency, mitigate deepfakes, and prevent misuse....
AI is already upending the economic covenant of the internet that's existed since the advent of search: A few companies (mostly Google) bring demand, and creators bring supply (and get some ad revenue or recognition from it). AI tools are already generating and summarizing content, obviating the need for users to click through to the sites of content providers, and thereby upsetting the balance. Meanwhile, an ocean of AI-powered deepfakes and bots will make us question what's real and will degrade people's trust in the online world. And as big tech companies, which can afford the most data and compute, continue to invest in AI, they will become even more powerful, further closing off what remains of the open internet. The march of technology is inevitable. I'm not calling attention to this to cry that the sky is falling or to hold back progress. We need to help individual users gain some control of their digital lives. Thoughtful government regulation could help, but it often slows innovation. Attempting a one-size-fits-all solution can create as many problems as it solves. And, let's face it, users are not going to retreat from living their lives online....
Jaime Teevan joined Microsoft before it was cool again. In 2006, she was completing her doctorate in artificial intelligence at MIT. She had many options but was drawn to the company's respected, somewhat ivory-tower-ish research division. Teevan remained at Microsoft while the mother ship blundered its way through the mobile era. Then, as the calendar flipped into the 2010s, an earth-shattering tech advance emerged. A method of artificial intelligence called deep learning was proving to be a powerful enhancement to software products. Google, Facebook, and others went on a tear to hire machine-learning researchers. Not so much Microsoft. "I don't remember it like a frenzy," Teevan says. "I don't remember drama." That was a problem. Microsoft's focus remained largely on milking its cash cows, Windows and Office. In 2014, Microsoft surprised people by promoting the ultimate company man, Satya Nadella, to CEO. Nadella had spent 22 years pulling himself up the ranks with his smarts and drive. And his likability. The latter trait was a rarity at the company. Nadella knew its culture intimately, and he knew he had to change it....
For roboticists, one challenge towers above all others: generalization, the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans' ability to provide it. Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called "LucidSim," uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data....