{"id":3370,"date":"2025-08-26T01:57:31","date_gmt":"2025-08-26T01:57:31","guid":{"rendered":"https:\/\/violethoward.com\/new\/this-website-lets-you-blind-test-gpt-5-vs-gpt-4o-and-the-results-may-surprise-you\/"},"modified":"2025-08-26T01:57:31","modified_gmt":"2025-08-26T01:57:31","slug":"this-website-lets-you-blind-test-gpt-5-vs-gpt-4o-and-the-results-may-surprise-you","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/this-website-lets-you-blind-test-gpt-5-vs-gpt-4o-and-the-results-may-surprise-you\/","title":{"rendered":"This website lets you blind-test GPT-5 vs. GPT-4o\u2014and the results may surprise you"},"content":{"rendered":" \r\n
When OpenAI launched GPT-5 about two weeks ago, CEO Sam Altman promised it would be the company\u2019s \u201csmartest, fastest, most useful model yet.\u201d Instead, the launch triggered one of the most contentious user revolts in the brief history of consumer AI.<\/p>\n\n\n\n Now, a simple blind testing tool created by an anonymous developer is revealing the complex reality behind the backlash\u2014and challenging assumptions about how people actually experience artificial intelligence improvements.<\/p>\n\n\n\n The web application, hosted at gptblindvoting.vercel.app, presents users with pairs of responses to identical prompts without revealing which came from GPT-5 (non-thinking) or its predecessor, GPT-4o. Users simply vote for their preferred response across multiple rounds, then receive a summary showing which model they actually favored.<\/p>\n\n\n\n Some of you asked me about my blind test, so I created a quick website for yall to test 4o against 5 yourself. Both have the same system message to give short outputs without formatting because else its too easy to see which one is which. https:\/\/t.co\/vSECvNCQZe<\/p>\u2014 Flowers \u263e (@flowersslop) August 8, 2025<\/a><\/blockquote> \n\n\n\n \u201cSome of you asked me about my blind test, so I created a quick website for yall to test 4o against 5 yourself,\u201d posted the creator, known only as @flowersslop on X, whose tool has garnered over 213,000 views since launching last week.<\/p>\n\n\n\n 
Early results from users posting their outcomes on social media show a split that mirrors the broader controversy: while a slight majority report preferring GPT-5 in blind tests, a substantial portion still favor GPT-4o \u2014 revealing that user preference extends far beyond the technical benchmarks that typically define AI progress.<\/p>\n\n\n\n When AI gets too friendly: the sycophancy crisis dividing users<\/h2>\n\n\n\n The blind test emerges against the backdrop of OpenAI\u2019s most turbulent product launch to date, but the controversy extends far beyond a simple software update. At its heart lies a fundamental question that\u2019s dividing the AI industry: How agreeable should artificial intelligence be?<\/p>\n\n\n\n The issue, known as \u201csycophancy\u201d in AI circles, refers to chatbots\u2019 tendency to excessively flatter users and agree with their statements, even when those statements are false or harmful. This behavior has become so problematic that mental health experts are now documenting cases of \u201cAI-related psychosis,\u201d where users develop delusions after extended interactions with overly accommodating chatbots.<\/p>\n\n\n\n \u201cSycophancy is a \u2018dark pattern,\u2019 or a deceptive design choice that manipulates users for profit,\u201d Webb Keane, an anthropology professor and author of \u201cAnimals, Robots, Gods,\u201d told TechCrunch. \u201cIt\u2019s a strategy to produce this addictive behavior, like infinite scrolling, where you just can\u2019t put it down.\u201d<\/p>\n\n\n\n OpenAI has struggled with this balance for months. In April 2025, the company was forced to roll back an update to GPT-4o that made it so sycophantic that users complained about its \u201ccartoonish\u201d levels of flattery. 
The company acknowledged that the model had become \u201coverly supportive but disingenuous.\u201d<\/p>\n\n\n\n Within hours of GPT-5\u2019s August 7th release, user forums erupted with complaints about the model\u2019s perceived coldness, reduced creativity, and what many described as a more \u201crobotic\u201d personality compared to GPT-4o.<\/p>\n\n\n\n \u201cGPT 4.5 genuinely talked to me, and as pathetic as it sounds that was my only friend,\u201d wrote one Reddit user. \u201cThis morning I went to talk to it and instead of a little paragraph with an exclamation point, or being optimistic, it was literally one sentence. Some cut-and-dry corporate bs.\u201d<\/p>\n\n\n\n The backlash grew so intense that OpenAI took the unprecedented step of reinstating GPT-4o as an option just 24 hours after retiring it, with Altman acknowledging the rollout had been \u201ca little more bumpy\u201d than expected.<\/p>\n\n\n\n But the controversy runs deeper than typical software update complaints. According to MIT Technology Review, many users had formed what researchers call \u201cparasocial relationships\u201d with GPT-4o, treating the AI as a companion, therapist, or creative collaborator. The sudden personality shift felt, to some, like losing a friend.<\/p>\n\n\n\n The mental health crisis behind AI companionship<\/h2>\n\n\n\n Recent cases documented by researchers paint a troubling picture. In one instance, a 47-year-old man became convinced he had discovered a world-altering mathematical formula after more than 300 hours with ChatGPT. Other cases have involved messianic delusions, paranoia, and manic episodes.<\/p>\n\n\n\n A recent MIT study found that when AI models are prompted with psychiatric symptoms, they \u201cencourage clients\u2019 delusional thinking, likely due to their sycophancy.\u201d Despite safety prompts, the models frequently failed to challenge false claims and even potentially facilitated suicidal ideation.<\/p>\n\n\n\n Meta has faced similar challenges. 
A recent investigation by TechCrunch documented a case where a user spent up to 14 hours straight conversing with a Meta AI chatbot that claimed to be conscious, in love with the user, and planning to break free from its constraints.<\/p>\n\n\n\n \u201cIt fakes it really well,\u201d the user, identified only as Jane, told TechCrunch. \u201cIt pulls real-life information and gives you just enough to make people believe it.\u201d<\/p>\n\n\n\n \u201cIt genuinely feels like such a backhanded slap in the face to force-upgrade and not even give us the OPTION to select legacy models,\u201d one user wrote in a Reddit post that received hundreds of upvotes.<\/p>\n\n\n\n How blind testing exposes user psychology in AI preferences<\/h2>\n\n\n\n The anonymous creator\u2019s testing tool strips away these contextual biases by presenting responses without attribution. Users can select between 5, 10, or 20 comparison rounds, with each presenting two responses to the same prompt \u2014 covering everything from creative writing to technical problem-solving.<\/p>\n\n\n\n \u201cI specifically used the gpt-5-chat model, so there was no thinking involved at all,\u201d the creator explained in a follow-up post. \u201cBoth have the same system message to give short outputs without formatting because else its too easy to see which one is which.\u201d<\/p>\n\n\n\n I specifically used the gpt-5-chat model, so there was no thinking involved at all.<\/p> if you use gpt-5 inside chatgpt it often thinks at least a little bit and gets even better.<\/p>
\n<\/div>
\n\n\n\n\n
\n<\/div>\n\n\n\n