Posted by Alumni from TechCrunch
December 19, 2024
Last week, OpenAI launched Advanced Voice Mode with Vision, which feeds real-time video to ChatGPT, allowing the chatbot to 'see' beyond the confines of its app layer. The premise is that by giving ChatGPT greater contextual awareness, the bot can respond in a more natural and intuitive way.

It's been nearly a year since OpenAI first demoed Advanced Voice Mode with Vision, which the company pitched as a step toward AI as depicted in the Spike Jonze movie 'Her.' The way OpenAI sold it, Advanced Voice Mode with Vision would grant ChatGPT superpowers: enabling the bot to solve sketched-out math problems, read emotions, and respond to affectionate letters.

At one point, curious to see whether Advanced Voice Mode with Vision could help ChatGPT offer fashion pointers, I enabled it and asked ChatGPT to rate an outfit of mine. It happily did so. But while the bot would give opinions on my jeans-and-olive-shirt combo, it consistently missed the brown jacket I was wearing. When OpenAI...