Exclusive: Earth AI's algorithms found critical minerals in places everyone else ignored | TechCrunch
Now, another startup, Earth AI, exclusively told TechCrunch about its own discovery: promising deposits of critical minerals in parts of Australia that other mining outfits had ignored for decades. While it's still not known whether they are as large as KoBold's, the news suggests that future supplies of critical minerals are likely to emerge from field data parsed by artificial intelligence. Earth AI has identified deposits of copper, cobalt, and gold in the Northern Territory, and silver, molybdenum, and tin at another site in New South Wales, 310 miles (500 kilometers) northwest of Sydney. Earth AI emerged from the graduate studies of its founder, Teslyuk, a native of Ukraine who was working toward a doctorate at the University of Sydney, where he became familiar with Australia's mining industry. There, the government owns the rights to mineral deposits and leases them in six-year terms. Since the 1970s, he said, exploration companies have been required to submit their data to a national archive...
Mark shared this article 3d
What are AI hallucinations? Why AIs sometimes make things up
When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviors in different types of AI systems, from chatbots such as ChatGPT to image generators such as Dall-E to autonomous vehicles. We are information science researchers who have studied hallucinations in AI speech recognition systems. Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor: when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to make sentencing decisions to health insurance companies that use algorithms to determine a patient's eligibility for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use AI to detect obstacles, other vehicles and pedestrians...
Mark shared this article 6d
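The article above defines a hallucination as output that sounds plausible but is not supported by any source. One toy way to make that concrete is to compare an answer's terms against a trusted reference text. The sketch below is only illustrative: the unsupported_terms helper and the example sentences are invented here, and real hallucination detection is far harder than word matching.

def unsupported_terms(answer: str, reference: str) -> set[str]:
    # Return words in the answer that never appear in the reference text.
    ref_words = {w.strip(".,!?").lower() for w in reference.split()}
    ans_words = (w.strip(".,!?").lower() for w in answer.split())
    return {w for w in ans_words if w and w not in ref_words}

reference = "Paris is the capital of France and lies on the Seine."
answer = "Paris is the capital of France and lies on the Danube."

print(unsupported_terms(answer, reference))  # {'danube'} -- fluent wording, but not grounded in the source

The flagged word is exactly the part of the fluent-sounding answer that the reference does not support, which is the pattern the researchers describe.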
Tiny satellite sets new record for secure quantum communication
A tiny satellite has enabled quantum-encrypted information to be sent between China and South Africa, the farthest distance yet achieved for quantum communication. Using a laser-based system, a team in the city of Hefei was able to beam a 'secret key', encoded in the quantum states of photons, to their colleagues over 12,000 km away. This key allowed scrambled messages to be decrypted, including one containing a picture of the Great Wall of China. The team's system is drastically smaller and cheaper than previous attempts, and they think it represents a big step towards the creation of a global network of secure quantum communication.

Researchers have also created an AI system called TextGrad, which can provide written feedback on another AI's performance. This feedback is interpretable by humans, which could help researchers tweak the incredibly complicated, and sometimes inscrutable, models that underpin modern AIs. 'Previously optimising machine learning algorithms requires quite a lot of human engineering,' says James Zou, one of the team behind this work, 'but with TextGrad, now the AI is able to self-improve to a large extent.'...
Mark shared this article 10d
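The satellite story above describes a quantum-distributed 'secret key' that is then used to decrypt scrambled messages sent over an ordinary channel. A minimal sketch of that second, classical step, assuming a one-time-pad-style XOR with a shared key (the quantum distribution itself is only simulated here with random bytes, and the xor_bytes helper and message text are illustrative):

import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the matching byte of the key.
    return bytes(d ^ k for d, k in zip(data, key))

# A QKD link leaves sender and receiver holding the same random key, with any
# eavesdropping on the photons detectable. os.urandom stands in for that
# quantum-distributed key in this sketch.
message = "Greetings from the Great Wall".encode()
shared_key = os.urandom(len(message))

ciphertext = xor_bytes(message, shared_key)    # the "scrambled" message sent classically
recovered = xor_bytes(ciphertext, shared_key)  # the receiver undoes the XOR with the same key

assert recovered == message
print(recovered.decode())

Because XOR with the same key is its own inverse, anyone holding the shared key can unscramble the message, while the ciphertext alone reveals nothing; the hard part, which the satellite addresses, is distributing that key securely.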
FTC Removes Posts Critical of Amazon, Microsoft, and AI Companies
The Trump administration's Federal Trade Commission has removed four years' worth of business guidance blogs as of Tuesday morning, including important consumer protection information related to artificial intelligence and the agency's landmark privacy lawsuits under former chair Lina Khan against companies like Amazon and Microsoft. More than 300 blogs were removed. On the FTC's website, the page hosting all of the agency's business-related blogs and guidance no longer includes any information published during former president Joe Biden's administration, current and former FTC employees, who spoke on condition of anonymity for fear of retaliation, tell WIRED. These blogs contained advice from the FTC on how big tech companies could avoid violating consumer protection laws. One now-deleted blog, titled 'Hey, Alexa! What are you doing with my data?', explains how, according to two FTC complaints, Amazon and its Ring security camera products allegedly leveraged sensitive consumer data to train the ecommerce giant's algorithms. (Amazon disagreed with the FTC's claims.) It also provided guidance for companies operating similar products and services. Another post, titled '$20 million FTC settlement addresses Microsoft Xbox illegal collection of kids' data: A game changer for COPPA compliance', instructs tech companies on how to abide by the Children's Online Privacy Protection Act, using the 2023 Microsoft settlement as an example. The settlement followed allegations by the FTC that Microsoft obtained data from children using Xbox systems without the consent of their parents or guardians...
Mark shared this article 11d