{"id":3067,"date":"2025-08-09T06:02:35","date_gmt":"2025-08-09T06:02:35","guid":{"rendered":"https:\/\/violethoward.com\/new\/openai-returns-old-models-to-chatgpt-amid-bumpy-gpt-5-rollout\/"},"modified":"2025-08-09T06:02:35","modified_gmt":"2025-08-09T06:02:35","slug":"openai-returns-old-models-to-chatgpt-amid-bumpy-gpt-5-rollout","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/openai-returns-old-models-to-chatgpt-amid-bumpy-gpt-5-rollout\/","title":{"rendered":"OpenAI returns old models to ChatGPT amid \u2018bumpy\u2019 GPT-5 rollout"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders.<\/em> <em>Subscribe Now<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>OpenAI co-founder and <strong>CEO Sam Altman is publicly acknowledging major hiccups in yesterday\u2019s rollout of GPT-5<\/strong>, the company\u2019s new, flagship large language model (LLM) \u2014 advertised as its most powerful and capable yet. 
<\/p>\n\n\n\n<p>Answering user questions in a Reddit AMA (Ask Me Anything) thread and in a post on X this afternoon, <strong>Altman admitted to a range of issues that have disrupted the launch of GPT-5, including faulty model switching, poor performance, and user confusion<\/strong> \u2014 prompting OpenAI to partially walk back some of its platform changes and <strong>reinstate user access to earlier models like GPT-4o.<\/strong><\/p>\n\n\n\n<p><strong>\u201cIt was a little more bumpy than we hoped for,\u201d Altman wrote<\/strong> in reply to a question on Reddit regarding the big GPT-5 launch.<\/p>\n\n\n\n<p>As for erroneous model performance charts shown off during OpenAI\u2019s GPT-5 livestream, Altman said: <strong>\u201cPeople were working late and were very tired, and human error got in the way.<\/strong> A lot comes together for a livestream in the last hours.\u201d<\/p>\n\n\n\n<p>While he noted the accompanying blog post and system card were accurate, the missteps further muddied a launch already facing scrutiny from early users and developers.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\"><p lang=\"en\" dir=\"ltr\">GPT-5 rollout updates:<\/p><p>*We are going to double GPT-5 rate limits for ChatGPT Plus users as we finish rollout.<\/p><p>*We will let Plus users choose to continue to use 4o. We will watch usage as we think about how long to offer legacy models for.<\/p><p>*GPT-5 will seem smarter starting\u2026<\/p>\u2014 Sam Altman (@sama) <a href=\"https:\/\/twitter.com\/sama\/status\/1953893841381273969?ref_src=twsrc%5Etfw\">August 8, 2025<\/a><\/blockquote>\n<\/div><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-problems-with-new-automatic-model-router\">Problems with new automatic model router<\/h2>\n\n\n\n<p>One key reason for the trouble, according to Altman, stems from OpenAI\u2019s new automatic \u201crouter\u201d that assigns user prompts to one of four GPT-5 variants \u2014 regular, mini, nano, and pro \u2014 with an optional \u201cthinking\u201d mode for heavier reasoning tasks. 
<\/p>\n\n\n\n<p>On X, <strong>Altman revealed that a key part of that system \u2014 the autoswitcher \u2014 was \u201cout of commission for a chunk of the day,\u201d causing GPT-5 to appear \u201cway dumber\u201d than intended.<\/strong><\/p>\n\n\n\n<p>In response, OpenAI says it\u2019s implementing changes to the model decision boundary and will make it more transparent which model is responding to a given query.<\/p>\n\n\n\n<p>A UI update is also on the way to help users manually trigger thinking mode.<\/p>\n\n\n\n<p>Additionally, Altman confirmed that <strong>OpenAI will now allow ChatGPT Plus users to continue using GPT-4o \u2014 the prior default model<\/strong> \u2014 after a wave of complaints about GPT-5\u2019s inconsistent performance. He said on Reddit the company is \u201c<strong>trying to gather more data on the tradeoffs\u201d before deciding how long to offer legacy models.<\/strong><\/p>\n\n\n\n<p>Yet many users, including OpenAI beta testers like Wharton School of Business professor Ethan Mollick, expressed confusion and dismay at <strong>OpenAI unilaterally upgrading their ChatGPT experiences to GPT-5 and initially taking away access to the older models.<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-real-world-performance-lags-behind-hype\">Real-world performance lags behind hype<\/h2>\n\n\n\n<p><strong>OpenAI\u2019s internal benchmarks may show GPT-5 leading the pack of LLMs<\/strong>, but real-world users are sharing a different experience. 
<\/p>\n\n\n\n<p>Since the launch, users have posted numerous examples of GPT-5 making basic errors in math, logic, and coding tasks.<\/p>\n\n\n\n<p>Data scientist Colin Fraser posted screenshots of GPT-5 incorrectly solving whether 8.888 repeating equals 9 (it does not, obviously), while another user showed it flubbing a simple algebra problem: 5.9 = x + 5.11 (the correct answer is x = 0.79).<\/p>\n\n\n\n<p>Still other users reported trouble getting accurate answers to math word problems or using GPT-5 to debug its own presentation charts.<\/p>\n\n\n\n<p>Developer feedback hasn\u2019t been much better, with <strong>users posting images of GPT-5 faring worse at \u201cone-shotting\u201d certain programming tasks<\/strong> \u2014 completing them well from a single prompt \u2014 <strong>compared to rival AI lab Anthropic\u2019s new model Claude Opus 4.1.<\/strong><\/p>\n\n\n\n<p>And security firm SPLX found GPT-5 still suffers from serious vulnerabilities to prompt injection and obfuscated logic attacks unless its safety layer is hardened.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-openai-in-the-spotlight\">OpenAI in the spotlight<\/h2>\n\n\n\n<p>With 700 million weekly users on ChatGPT, OpenAI remains the largest player in generative AI by audience.<\/p>\n\n\n\n<p>But that scale has brought growing pains. Altman noted in his X post that <strong>API traffic doubled over 24 hours following the GPT-5 launch, contributing to platform instability.<\/strong><\/p>\n\n\n\n<p>In response, OpenAI says it will double rate limits for ChatGPT Plus users and continue to tweak infrastructure as it gathers feedback.<\/p>\n\n\n\n<p>But the early missteps \u2014 compounded by confusing UX changes and errors in a high-profile launch \u2014 have <strong>opened a window for rivals to gain ground.<\/strong><\/p>\n\n\n\n<p>The pressure is on for OpenAI to prove that GPT-5 isn\u2019t just an incremental update, but a true step forward. 
Based on the initial rollout, many users aren\u2019t convinced \u2014 yet.<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/openai-returns-old-models-to-chatgpt-as-sam-altman-admits-bumpy-gpt-5-rollout\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. 
Subscribe Now OpenAI co-founder and CEO Sam Altman is publicly acknowledging major hiccups in yesterday\u2019s rollout of GPT-5, the company\u2019s new, flagship large language model (LLM) \u2014 advertised as its most [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":3068,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-3067","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/08\/ChatGPT-Image-Aug-8-2025-05_14_23-PM.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3067","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=3067"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/3067\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/3068"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=3067"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=3067"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=3067"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}<!-- This website is optimized by 
Airlift. -->