Ecologists find computer vision models' blind spots in retrieving wildlife images
Try taking a picture of each of North America's roughly 11,000 tree species, and you'll have a mere fraction of the millions of photos within nature image datasets. These massive collections of snapshots, ranging from butterflies to humpback whales, are a great research tool for ecologists because they provide evidence of organisms' unique behaviors, rare conditions, migration patterns, and responses to pollution and other forms of climate change.

While comprehensive, nature image datasets aren't yet as useful as they could be. It's time-consuming to search these databases and retrieve the images most relevant to your hypothesis. You'd be better off with an automated research assistant, or perhaps artificial intelligence systems called multimodal vision language models (VLMs). They're trained on both text and images, making it easier for them to pinpoint finer details, like the specific trees in the background of a photo.

But just how well can VLMs assist nature researchers with image retrieval? A team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), University College London, iNaturalist, and elsewhere designed a performance test to find out. Each VLM's task: locate and reorganize the most relevant results within the team's "INQUIRE" dataset, composed of 5 million wildlife pictures and 250 search prompts from ecologists and other biodiversity experts. Looking for that special frog...
Mark shared this article 7hrs
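To make the retrieval task above concrete, here is a minimal sketch of embedding-based text-to-image ranking, assuming a CLIP-style dual encoder. The functions `embed_text` and `embed_images` are hypothetical placeholders that return random unit vectors, not the INQUIRE pipeline or any of the VLMs the team evaluated; the point is only the rank-by-similarity step such a benchmark measures.

```python
import numpy as np

# Hypothetical stand-ins for a CLIP-style dual encoder. In a real system these
# would call a vision-language model's text and image encoders; here they return
# random unit vectors so the ranking logic itself is runnable end to end.
rng = np.random.default_rng(0)

def embed_text(query: str, dim: int = 512) -> np.ndarray:
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def embed_images(n_images: int, dim: int = 512) -> np.ndarray:
    m = rng.normal(size=(n_images, dim))
    return m / np.linalg.norm(m, axis=1, keepdims=True)

def rank_images(query: str, image_embeddings: np.ndarray, top_k: int = 5):
    """Rank images by cosine similarity to the query (all vectors are unit-norm)."""
    q = embed_text(query)
    scores = image_embeddings @ q
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

# Toy usage: 1,000 placeholder image embeddings and one expert-style prompt.
image_embeddings = embed_images(1_000)
top_idx, top_scores = rank_images("a frog with unusual coloration", image_embeddings)
print(top_idx, top_scores)
```

In practice the image embeddings would be precomputed once for the whole collection, and a heavier VLM could rerank the top candidates; the article describes evaluating how well such systems surface the images experts actually want.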
Music Can Thrive in the AI Era
The birth of ChatGPT brought a collection of anxieties regarding how large language models allow users to quickly subvert processes that once required human time, effort, passion, and understanding. Further, the tech sector's often stormy relationship with regulation and ethical oversight has left many fearful of a future where artificial intelligence replaces humans at work and stymies human creativity. While much of this alarm is well founded, we should also consider the possibility that human creativity can blossom in the age of AI. In 2025, we will start to see this manifest in our collective cultural response to technology.

To examine how culture and creativity might adapt to the age of AI, we'll use hip-hop as an example. It's one of the most lucrative forms of music ever invented, and one that has already been influenced by large language models. We've all heard AI-driven rap songs imitating popular artists and seen them go viral, easily mistaken for authentic, original music. For example, during the recent rap feud between Drake and Kendrick Lamar, an AI-generated song called "One Shot" was released and incorrectly attributed to Lamar. In 2025 we should expect more AI-generated fake music, especially fueled by the social media circus where being loudest and most provocative can draw the immediate attention of millions....
Mark shared this article 8hrs
Language AIs in 2024: Size, guardrails and steps toward AI agents
I research the intersection of artificial intelligence, natural language processing and human reasoning as the director of the Advancing Human and Machine Reasoning lab at the University of South Florida. I am also commercializing this research in an AI startup that provides a vulnerability scanner for language models.

At the heart of commercially available generative AI products like ChatGPT are large language models, or LLMs, which are trained on vast amounts of text and produce convincing humanlike language. Their size is generally measured in parameters, which are the numerical values a model derives from its training data. The larger models like those from the major AI companies have hundreds of billions of parameters.

Development tends to move in a two-step cycle. First, organizations with the most computational resources experiment with and train increasingly larger and more powerful language models. Those yield new large language model capabilities, benchmarks, training sets and training or prompting tricks. In turn, those are used to make smaller language models, in the range of 3 billion parameters or less, which can be run on more affordable computer setups, require less energy and memory to train, and can be fine-tuned with less data....
Mark shared this article 8hrs
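A rough back-of-the-envelope calculation (my own arithmetic, not from the article) shows why parameter count decides where a model can run: the memory needed just to hold the weights is roughly parameters times bytes per parameter, so a 3-billion-parameter model at 16-bit precision needs about 6 GB, while hundreds of billions of parameters need hundreds of gigabytes. The specific larger model sizes below are illustrative assumptions.

```python
# Rough memory-footprint arithmetic for model weights alone (no activations,
# optimizer state, or KV cache). The larger parameter counts are illustrative.
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory (GB) to hold the weights at the given precision."""
    return n_params * bytes_per_param / 1e9

for name, n_params in [("3B-parameter 'small' model", 3e9),
                       ("70B-parameter mid-size model", 70e9),
                       ("400B-parameter frontier-scale model", 400e9)]:
    print(f"{name}: ~{weight_memory_gb(n_params):.0f} GB of weights at 16-bit precision")
# ~6 GB fits on a single consumer GPU; ~140 GB and ~800 GB call for clusters of
# high-memory accelerators, which is why smaller models are cheaper to run and tune.
```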
OpenAI Upgrades Its Smartest AI Model With Improved Reasoning Skills
OpenAI today announced an improved version of its most capable artificial intelligence model to date, one that takes even more time to deliberate over questions, just a day after Google announced its first model of this type. OpenAI's new model, called o3, replaces o1, which the company introduced in September. Like o1, the new model spends time ruminating over a problem in order to deliver better answers to questions that require step-by-step logical reasoning. (OpenAI chose to skip the "o2" moniker because it's already the name of a mobile carrier in the UK.)

"We view this as the beginning of the next phase of AI," said OpenAI CEO Sam Altman on a livestream Friday, "where you can use these models to do increasingly complex tasks that require a lot of reasoning."

The o3 model scores much higher on several measures than its predecessor, OpenAI says, including ones that measure complex coding-related skills and advanced math and science competency. It is three times better than o1 at answering questions posed by ARC-AGI, a benchmark designed to test an AI model's ability to reason over extremely difficult mathematical and logic problems it is encountering for the first time....
Mark shared this article 8hrs