In 2025 we will see AI and machine learning leveraged to make real progress in understanding animal communication, answering a question that has puzzled humans for as long as we have existed: 'What are animals saying to each other?' The recent Coller-Dolittle Prize, offering cash prizes of up to half a million dollars for scientists who 'crack the code,' is an indication of a bullish confidence that recent technological developments in machine learning and large language models (LLMs) are placing this goal within our grasp. Many research groups have been working for years on algorithms to make sense of animal sounds. Project CETI, for example, has been decoding the click trains of sperm whales and the songs of humpbacks. These modern machine learning tools require extremely large amounts of data, and until now, such quantities of high-quality, well-annotated data have been lacking. Consider LLMs such as ChatGPT, whose training data includes the entirety of text available on the internet. No such trove of information on animal communication has been accessible in the past. It's not just that human data corpora are many orders of magnitude larger than the kind of data we have access to for animals in the wild: more than 500 GB of words were used to train GPT-3, compared to just more than 8,000 'codas' (or vocalizations) for Project CETI's recent analysis of sperm whale communication....
Artificial intelligence is everywhere you look right now, making its way into music streaming, social media, video games, web search, and just about every other technological field. Every time a new phone or laptop is launched these days, what's invariably mentioned first is just how much AI it has on board. AI's reach also extends deeply into mobile photography. It started with smart, algorithm-led tweaks to color and brightness in your mobile photos. Now we're all the way up to dropping people into photos who weren't actually there at the time, or, alternatively, erasing people and objects from a shot. Both Android and iOS also apply machine-learning algorithms to make colors in photos 'pop' and to add more dynamic range to images. It doesn't have to be this way. You can still find mobile camera apps that shun AI and give control back to you, so taking pictures is more about framing moments and scenes than about any kind of AI fakery. Here are two of the best. Zerocam proudly promotes its anti-AI ethos, describing itself as 'the simplest way to take photos,' the idea being that it's as close to an actual point-and-shoot camera as possible. Natural, authentic looks are in (the app actually shoots in the RAW format) and artificial overprocessing is out....
Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health within the Department of Electrical Engineering and Computer Science (EECS) at MIT, received the IEEE Frances E. Allen Medal for 'innovative machine learning algorithms that have led to advances in human language technology and demonstrated impact on the field of medicine.' Barzilay focuses on machine learning algorithms for modeling molecular properties in the context of drug design, with the goal of elucidating disease biochemistry and accelerating the development of new therapeutics. In the field of clinical AI, she focuses on algorithms for early cancer diagnostics. She is also the AI faculty lead within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and an affiliate of the Computer Science and Artificial Intelligence Laboratory, Institute for Medical Engineering and Science, and Koch Institute for Integrative Cancer Research. Barzilay is a member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences. She has earned the MacArthur Fellowship, MIT's Jamieson Award for excellence in teaching, and the Association for the Advancement of Artificial Intelligence's $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. Barzilay is a fellow of AAAI, ACL, and AIMBE....
You might have heard that algorithms are in control of everything you hear, read, and see. They control the next song on your Spotify playlist, or what YouTube suggests you watch after you finish a video. Algorithms are perhaps why you can't escape Sabrina Carpenter's hit song 'Espresso' or why you might have suddenly been struck by the desire to buy one of those pastel-colored Stanley cups. They dictate how TV shows are made and which books get published, a revolutionary paradigm shift that's become fully entrenched in the arts and media and isn't going away anytime soon. In 2024, culture is boring and stale because algorithms are calling the shots on what gets produced and praised, or so the critics say. The New Yorker staff writer Kyle Chayka wrote an entire book about how Big Tech has successfully 'flattened culture' into a series of facsimile coffee shops and mid-century-modern furniture. The critic Jason Farago argued in The New York Times Magazine that 'the plunge through our screens' and 'our submission to algorithmic recommendation engines' have created a lack of momentum. Pinning the blame on new inventions isn't a fresh argument, either: in a 1923 essay, Aldous Huxley pointed to the ease of cultural production, driven by a growing middle-class desire for entertainment, as a major culprit for why mass-market books, movies, and music were so unsatisfying. 'These effortless pleasures, these ready-made distractions that are the same for everyone over the face of the whole Western world,' he wrote, 'are surely a worse menace to our civilization than ever the Germans were.'...