By OpenAI's own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly more often than o1.
As TechCrunch first reported, OpenAI's system card details the results of the PersonQA evaluation, which is designed to test for hallucinations. On that evaluation, o3's hallucination rate is 33 percent, and o4-mini's is 48 percent, meaning it hallucinates almost half of the time. By comparison, o1's hallucination rate is 16 percent, so o3 hallucinated about twice as often.
The system card noted how o3 "tends to make more claims overall, leading to more accurate claims as well as more inaccurate/hallucinated claims." But OpenAI doesn't know the underlying cause, simply saying, "More research is needed to understand the cause of this result."
OpenAI's reasoning models are billed as more accurate than its non-reasoning models like GPT-4o and GPT-4.5 because they use more computation to "spend more time thinking before they respond," as described in the o1 announcement. Rather than largely relying on stochastic methods to provide an answer, the o-series models are trained to "refine their thinking process, try different strategies, and recognize their mistakes."
However, the system card for GPT-4.5, which was released in February, shows a 19 percent hallucination rate on the PersonQA evaluation. The same card also compares it to GPT-4o, which had a 30 percent hallucination rate.
In a statement to Mashable, an OpenAI spokesperson said, “Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability.”
Evaluation benchmarks are tricky. They can be subjective, especially if developed in-house, and research has found flaws in their datasets and even in how they evaluate models.
Plus, different labs rely on different benchmarks and methods to test accuracy and hallucinations. Hugging Face's hallucination benchmark evaluates models on the "occurrence of hallucinations in generated summaries" from around 1,000 public documents, and it reports much lower hallucination rates across the board for major models than OpenAI's evaluations do: GPT-4o scored 1.5 percent, GPT-4.5 preview 1.2 percent, and o3-mini-high with reasoning 0.8 percent. It's worth noting that o3 and o4-mini weren't included in the current leaderboard.
That's all to say: even industry-standard benchmarks make it difficult to assess hallucination rates.
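For a concrete sense of why those numbers can diverge so much, here's a minimal sketch of how a hallucination-rate score is typically computed: it's just the share of responses a grader flags as containing unsupported claims, so everything hinges on which prompts are used and how the grader decides. The dataset format and grading function below are illustrative assumptions, not OpenAI's or Hugging Face's actual harness.

```python
# Hypothetical sketch of how a hallucination-rate benchmark might be scored.
# Neither OpenAI's PersonQA nor the Hugging Face leaderboard publishes its
# harness in this form; the grader and data format here are assumptions.

from dataclasses import dataclass

@dataclass
class Example:
    prompt: str      # question to answer, or document to summarize
    reference: str   # ground-truth facts the grader checks against

def is_hallucinated(response: str, reference: str) -> bool:
    """Placeholder grader: real benchmarks use an LLM judge or an
    entailment model to decide whether the response makes claims
    unsupported by the reference."""
    return reference.lower() not in response.lower()  # naive stand-in

def hallucination_rate(responses: list[str], dataset: list[Example]) -> float:
    flagged = sum(
        is_hallucinated(resp, ex.reference)
        for resp, ex in zip(responses, dataset)
    )
    return flagged / len(dataset)

# A person-centric Q&A benchmark (like PersonQA) and a document-summarization
# benchmark (like the Hugging Face leaderboard) feed very different prompts
# and graders into the same final division, which is one reason the reported
# rates can differ by an order of magnitude.
```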
Then there's the added complexity that models tend to be more accurate when tapping into web search to source their answers. But using ChatGPT search means OpenAI shares data with third-party search providers, and enterprise customers using OpenAI models internally might not be willing to expose their prompts that way.
Regardless, if OpenAI is saying its brand-new o3 and o4-mini models hallucinate at higher rates than its non-reasoning models, that could be a problem for its users.
UPDATE: Apr. 21, 2025, 1:16 p.m. EDT This story has been updated with a statement from OpenAI.