
OpenAI says that it won't bring the AI model powering deep research, its in-depth research tool, to its developer API while it figures out how to better assess the risks of AI convincing people to act on or change their beliefs.

In an OpenAI whitepaper published Wednesday, the company wrote that it's in the process of revising its methods for probing models for "real-world persuasion risks," like distributing misleading information at scale.

OpenAI noted that it doesn't believe the deep research model is a good fit for mass misinformation or disinformation campaigns, owing to its high computing costs and relatively slow speed. Nevertheless, the company said it intends to explore factors like how AI could personalize potentially harmful persuasive content before bringing the deep research model to its API.

There's a real fear that AI is contributing to the spread of false or misleading information meant to sway hearts and minds toward malicious ends. For example, last year, political...