When to Use GenAI Versus Predictive AI | Rama Ramakrishnan
Leaders are often confused about when to use generative AI versus predictive AI (machine learning and deep learning) tools. The issue isn't that one technology is superior: It's about matching the technology to the specific business problem. This column presents a pragmatic way to help you make the best decision and avoid costly mistakes. The analytics landscape has evolved significantly during the past decade. Many organizations have progressed from basic statistical modeling to machine learning, and some have added deep learning to their toolkits as well. In this context, the emergence of generative AI, with its ability to create humanlike text, generate images, and write code, introduces new possibilities and new questions. While generative AI promises to revolutionize everything from customer service to product development, its optimal role alongside predictive AI tools (that is, machine learning and deep learning tools) remains a work in progress. That often leaves leaders asking what the right approach is for addressing a particular problem. This article presents a set of guidelines to help leaders and organizations navigate this tricky and crucial decision....
Mark shared this article 7d
How COOs maximize operational impact from gen AI and agentic AI
Better, faster, easier, cheaper: That's the promise of gen AI. For at least some companies, it's becoming the reality as well, as leaders find new ways for gen AI, and the increasingly capable agents it enables, to automate, augment, and accelerate work across virtually every function. Early adopters are using gen AI to help strengthen supplier negotiations in procurement and improve quality control in equipment maintenance (see sidebar "Gen AI's potential across operations"). One digital marketing platform is even using gen AI to manage "long tail" sales accounts that were previously too labor-intensive to serve, for an annual revenue gain of more than $30 million. McKinsey estimates that over the long term, gen AI could yield $4.4 trillion in productivity growth potential (Hannah Mayer, Lareina Yee, Michael Chui, and Roger Roberts, "Superagency in the workplace: Empowering people to unlock AI's full potential," McKinsey, January 28, 2025; "The economic potential of generative AI: The next productivity frontier," McKinsey, June 14, 2023). That's on top of "traditional" or "analytical" AI, which relies on structured data to solve discrete analytic tasks, such as predictive analytics for optimizing equipment maintenance. Gen AI's deep learning models are already helping companies achieve performance breakthroughs across the operations value chain, especially by finding new opportunities to break internal silos. Multiagent systems can achieve even more (exhibit)....
Mark shared this article 12d
Who Owns Your Face? The Legal Fight for Identity in the Age of AI
In an era where artificial intelligence can generate hyper-realistic deepfakes, companies monetize biometric data, and athletes fight for their rights under name, image and likeness, or NIL, contracts, a fundamental question emerges: Do we truly own our own faces? While intellectual property laws, privacy regulations and NIL agreements attempt to address these issues, they often lag behind innovation, leaving individuals vulnerable to exploitation. The intersection of AI, NIL and biometric data collection raises profound concerns about whether existing legal frameworks adequately protect personal property rights while fostering innovation. It's a given today that deepfake technology has progressed to the point where AI-generated images, videos and audio can be nearly indistinguishable from reality. This advancement raises serious concerns about ownership and consent. If an AI-generated deepfake replicates a person's likeness without their permission, do they have legal recourse? The answer depends largely on jurisdiction and existing legal frameworks. Some U.S. states have enacted laws criminalizing certain uses of deepfakes, particularly in cases of nonconsensual pornography or election interference....
Mark shared this article 12d
When humans use AI to earn patents, who is doing the inventing?
The advent of generative artificial intelligence has sent shock waves across industries, from the technical to the creative. AI systems that can generate viable computer code, write news stories and spin up professional-looking graphics have inspired countless headlines asking whether they will take away jobs in technology, journalism and design, among many other fields. Among technologists who build digital tools or programs, it is increasingly common to use AI as part of design and development processes. But as deep learning models flex their technical muscles more and more, even highly skilled researchers who are using AI in their work have begun to express concerns about becoming obsolete. There is much debate about whether AI can augment human creativity, but emerging data suggests that the technology can boost research and development where creativity typically plays an important role. A recent study by MIT economics doctoral student Aidan Toner-Rodgers found that scientists using AI tools increased their patent filings by 39% and created 17% more prototypes than when they worked without such tools....
Mark shared this article 18d
Why OpenAI isn't bringing deep research to its API just yet | TechCrunch
OpenAI says that it won't bring the AI model powering deep research, its in-depth research tool, to its developer API while it figures out how to better assess the risks of AI convincing people to act on or change their beliefs. In an OpenAI whitepaper published Wednesday, the company wrote that it's in the process of revising its methods for probing models for "real-world persuasion risks," like distributing misleading info at scale. OpenAI noted that it doesn't believe the deep research model is a good fit for mass misinformation or disinformation campaigns, owing to its high computing costs and relatively slow speed. Nevertheless, the company said it intends to explore factors like how AI could personalize potentially harmful persuasive content before bringing the deep research model to its API. There's a real fear that AI is contributing to the spread of false or misleading information meant to sway hearts and minds toward malicious ends. For example, last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a Chinese Communist Party-affiliated group posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate....
Mark shared this article 1m
Can deep learning transform heart failure prevention?
Posted by Mark Field from MIT in Medicine and Deepfake
The ancient Greek philosopher and polymath Aristotle once concluded that the human heart is tri-chambered and that it was the single most important organ in the entire body, governing motion, sensation, and thought. Today, we know that the human heart actually has four chambers and that the brain largely controls motion, sensation, and thought. But Aristotle was correct in observing that the heart is a vital organ, pumping blood to the rest of the body to reach other vital organs. When a life-threatening condition like heart failure strikes, the heart gradually loses the ability to supply other organs with enough blood and nutrients to keep them functioning. Researchers from MIT and Harvard Medical School recently published an open-access paper in Nature Communications Medicine, introducing a noninvasive deep learning approach that analyzes electrocardiogram (ECG) signals to accurately predict a patient's risk of developing heart failure. In a clinical trial, the model showed results with accuracy comparable to gold-standard but more-invasive procedures, giving hope to those at risk of heart failure. The condition has recently seen a sharp increase in mortality, particularly among young adults, likely due to the growing prevalence of obesity and diabetes....
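The paper's actual model isn't reproduced in this summary. As a rough, hypothetical sketch of the kind of approach described, the snippet below defines a small 1D convolutional network in PyTorch that maps a multi-lead ECG segment to a single heart-failure risk probability; the layer sizes, the 12-lead/500 Hz input shape, and the random input batch are illustrative placeholders, not the MIT/Harvard architecture.

```python
# Illustrative sketch only: a small 1D CNN that maps an ECG segment to a
# heart-failure risk probability. Shapes and layers are hypothetical, not
# the published model from the Nature Communications Medicine paper.
import torch
import torch.nn as nn

class ECGRiskNet(nn.Module):
    def __init__(self, leads=12, samples=5000):  # e.g., 10 s at 500 Hz (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one vector
        )
        self.head = nn.Linear(64, 1)   # single logit: risk of heart failure

    def forward(self, ecg):            # ecg: (batch, leads, samples)
        z = self.features(ecg).squeeze(-1)
        return torch.sigmoid(self.head(z))

model = ECGRiskNet()
fake_batch = torch.randn(4, 12, 5000)  # stand-in for real ECG recordings
risk = model(fake_batch)               # probabilities in (0, 1)
print(risk.shape)                      # torch.Size([4, 1])
```

In practice a model like this would be trained on labeled ECG recordings with a binary cross-entropy loss and then compared against the more invasive gold-standard procedures mentioned above.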
Mark shared this article 2mths
Creating a common language
"When you are in your PhD stage, there is a high wall between different disciplines and subjects, and there was even a high wall within computer science," He says. "The guy sitting next to me could be doing things that I completely couldn't understand." In the seven months since he joined the MIT Schwarzman College of Computing as the Douglas Ross (1954) Career Development Professor of Software Technology in the Department of Electrical Engineering and Computer Science, He says he is experiencing something that in his opinion is "very rare in human scientific history": a lowering of the walls that extends across different scientific disciplines. "There is no way I could ever understand high-energy physics, chemistry, or the frontier of biology research, but now we are seeing something that can help us to break these walls," He says, "and that is the creation of a common language that has been found in AI." According to He, this shift began in 2012 in the wake of the "deep learning revolution," a point when it was realized that this set of machine-learning methods based on neural networks was so powerful that it could be put to greater use....
Mark shared this article 2mths
User-friendly system can help developers build more efficient simulations and AI models
The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy. To improve the efficiency of AI models, MIT researchers created an automated system that enables developers of deep learning algorithms to simultaneously take advantage of two types of data redundancy. This reduces the amount of computation, bandwidth, and memory storage needed for machine learning operations. Existing techniques for optimizing algorithms can be cumbersome and typically only allow developers to capitalize on either sparsity or symmetry, two different types of redundancy that exist in deep learning data structures. By enabling a developer to build an algorithm from scratch that takes advantage of both redundancies at once, the MIT researchers' approach boosted the speed of computations by nearly 30 times in some experiments....
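The article doesn't include code, but the two redundancies it names can be illustrated by hand. The toy sketch below stores only the nonzero entries of the upper triangle of a symmetric matrix (sparsity plus symmetry) and reuses each stored value for both the (i, j) and (j, i) contributions of a matrix-vector product; it is not the MIT researchers' system, only an example of why exploiting both redundancies at once saves work.

```python
# Toy illustration (not the MIT system): a symmetric sparse matrix-vector
# product that exploits both redundancies named in the article.
#  - sparsity:  only nonzero entries are stored and visited
#  - symmetry:  only the upper triangle (i <= j) is stored; each entry
#               contributes to both row i and row j of the result
import numpy as np

def sym_sparse_matvec(entries, x):
    """entries: list of (i, j, value) with i <= j describing A = A.T; x: vector."""
    y = np.zeros_like(x, dtype=float)
    for i, j, v in entries:
        y[i] += v * x[j]
        if i != j:              # symmetry: reuse the stored value for (j, i)
            y[j] += v * x[i]
    return y

# Dense reference for a small 3x3 symmetric matrix with some zeros.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 4.0]])
upper_nonzeros = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0), (2, 2, 4.0)]
x = np.array([1.0, 2.0, 3.0])

print(sym_sparse_matvec(upper_nonzeros, x))  # [ 5.  6. 13.]
print(A @ x)                                 # matches the dense result
```

The system described above automates this kind of bookkeeping for developers rather than requiring it to be written by hand for each operation.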
Mark shared this article 2mths