{"id":1275,"date":"2025-04-18T08:32:15","date_gmt":"2025-04-18T08:32:15","guid":{"rendered":"https:\/\/violethoward.com\/new\/googles-gemini-2-5-flash-introduces-thinking-budgets-that-cut-ai-costs-by-600-when-turned-down\/"},"modified":"2025-04-18T08:32:15","modified_gmt":"2025-04-18T08:32:15","slug":"googles-gemini-2-5-flash-introduces-thinking-budgets-that-cut-ai-costs-by-600-when-turned-down","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/googles-gemini-2-5-flash-introduces-thinking-budgets-that-cut-ai-costs-by-600-when-turned-down\/","title":{"rendered":"Google’s Gemini 2.5 Flash introduces ‘thinking budgets’ that cut AI output costs nearly sixfold when turned down"},"content":{"rendered":" \r\n

Google has launched Gemini 2.5 Flash, a major upgrade to its AI lineup that gives businesses and developers unprecedented control over how much \u201cthinking\u201d their AI performs. The new model, released today in preview through Google AI Studio and Vertex AI, represents a strategic effort to deliver improved reasoning capabilities while maintaining competitive pricing in the increasingly crowded AI market.<\/p>\n\n\n\n

The model introduces what Google calls a \u201cthinking budget\u201d \u2014 a mechanism that allows developers to specify how much computational power should be allocated to reasoning through complex problems before generating a response. This approach aims to address a fundamental tension in today\u2019s AI marketplace: more sophisticated reasoning typically comes at the cost of higher latency and pricing.<\/p>\n\n\n\n

\u201cWe know cost and latency matter for a number of developer use cases, and so we want to offer developers the flexibility to adapt the amount of the thinking the model does, depending on their needs,\u201d said Tulsee Doshi, Product Director for Gemini Models at Google DeepMind, in an exclusive interview with VentureBeat.<\/p>\n\n\n\n

This flexibility reveals Google\u2019s pragmatic approach to AI deployment as the technology increasingly becomes embedded in business applications where cost predictability is essential. By allowing the thinking capability to be turned on or off, Google has created what it calls its \u201cfirst fully hybrid reasoning model.\u201d<\/p>\n\n\n\n

Pay only for the brainpower you need: Inside Google\u2019s new AI pricing model<\/h2>\n\n\n\n

The new pricing structure highlights the cost of reasoning in today\u2019s AI systems. When using Gemini 2.5 Flash, developers pay $0.15 per million tokens for input. Output costs vary dramatically based on reasoning settings: $0.60 per million tokens with thinking turned off, jumping to $3.50 per million tokens with reasoning enabled.<\/p>\n\n\n\n

This nearly sixfold price difference for reasoned outputs reflects the computational intensity of the \u201cthinking\u201d process, where the model evaluates multiple potential paths and considerations before generating a response.<\/p>\n\n\n\n
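The published preview prices make the trade-off easy to estimate. The sketch below applies the per-million-token rates quoted above to a hypothetical workload; the token counts are illustrative, and it assumes thinking tokens are billed at the reasoning output rate, as Doshi's comments on paying for thinking tokens suggest:

```python
# Back-of-envelope cost comparison for Gemini 2.5 Flash preview pricing.
# Rates are dollars per million tokens, as published at launch.
INPUT_PRICE = 0.15       # input tokens
OUTPUT_PRICE_OFF = 0.60  # output tokens, thinking disabled
OUTPUT_PRICE_ON = 3.50   # output (and thinking) tokens, thinking enabled

def request_cost(input_tokens, output_tokens, thinking_tokens=0, thinking=False):
    """Estimate the dollar cost of a single request."""
    cost = input_tokens / 1e6 * INPUT_PRICE
    if thinking:
        # Assumption: thinking tokens bill at the same rate as reasoned output.
        cost += (output_tokens + thinking_tokens) / 1e6 * OUTPUT_PRICE_ON
    else:
        cost += output_tokens / 1e6 * OUTPUT_PRICE_OFF
    return cost

# Hypothetical workload: 10,000 requests of 1,000 input / 500 output tokens.
n = 10_000
off = n * request_cost(1_000, 500, thinking=False)
on = n * request_cost(1_000, 500, thinking_tokens=2_000, thinking=True)
print(f"thinking off: ${off:.2f}")  # thinking off: $4.50
print(f"thinking on:  ${on:.2f}")   # thinking on:  $89.00
```

At these rates the workload's bill swings from $4.50 to $89.00, which is why a per-request budget control matters to cost-sensitive deployments.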

\u201cCustomers pay for any thinking and output tokens the model generates,\u201d Doshi told VentureBeat. \u201cIn the AI Studio UX, you can see these thoughts before a response. In the API, we currently don\u2019t provide access to the thoughts, but a developer can see how many tokens were generated.\u201d<\/p>\n\n\n\n

The thinking budget can be adjusted from 0 to 24,576 tokens, operating as a maximum limit rather than a fixed allocation. According to Google, the model intelligently determines how much of this budget to use based on the complexity of the task, preserving resources when elaborate reasoning isn\u2019t necessary.<\/p>\n\n\n\n
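In practice, the budget is a single request parameter. The sketch below shows how it might be set via the google-genai Python SDK; the model identifier and config field names follow the SDK's preview documentation and may change, so treat this as illustrative rather than definitive:

```python
# Sketch: calling Gemini 2.5 Flash with an explicit thinking budget.
MAX_THINKING_BUDGET = 24_576  # upper limit reported at launch

def clamp_budget(requested: int) -> int:
    """Keep a requested thinking budget within the model's supported range."""
    return max(0, min(requested, MAX_THINKING_BUDGET))

def ask(prompt: str, thinking_budget: int = 0):
    # Imported lazily so this sketch runs even without the SDK installed.
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads the API key from the environment
    return client.models.generate_content(
        model="gemini-2.5-flash-preview-04-17",  # assumed preview model ID
        contents=prompt,
        config=types.GenerateContentConfig(
            # A budget of 0 disables thinking; a positive value acts as a
            # ceiling the model may use less of, not a fixed allocation.
            thinking_config=types.ThinkingConfig(
                thinking_budget=clamp_budget(thinking_budget)
            ),
        ),
    )

print(clamp_budget(50_000))  # 24576
print(clamp_budget(-10))     # 0
```

Because the value is a maximum rather than a quota, raising it only increases cost when the model actually decides a task warrants deeper reasoning.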

How Gemini 2.5 Flash stacks up: Benchmark results against leading AI models<\/h2>\n\n\n\n

Google claims Gemini 2.5 Flash demonstrates competitive performance across key benchmarks while maintaining a smaller model size than alternatives. On Humanity\u2019s Last Exam, a rigorous test designed to evaluate reasoning and knowledge, 2.5 Flash scored 12.1%, outperforming Anthropic\u2019s Claude 3.7 Sonnet (8.9%) and DeepSeek R1 (8.6%), though falling short of OpenAI\u2019s recently launched o4-mini (14.3%).<\/p>\n\n\n\n

The model also posted strong results on technical benchmarks like GPQA diamond (78.3%) and AIME mathematics exams (78.0% on 2025 tests and 88.0% on 2024 tests).<\/p>\n\n\n\n

\u201cCompanies should choose 2.5 Flash because it provides the best value for its cost and speed,\u201d Doshi said. \u201cIt\u2019s particularly strong relative to competitors on math, multimodal reasoning, long context, and several other key metrics.\u201d<\/p>\n\n\n\n

Industry analysts note that these benchmarks indicate Google is narrowing the performance gap with competitors while maintaining a pricing advantage \u2014 a strategy that may resonate with enterprise customers watching their AI budgets.<\/p>\n\n\n\n

Smart vs. speedy: When does your AI need to think deeply?<\/h2>\n\n\n\n

The introduction of adjustable reasoning represents a significant evolution in how businesses can deploy AI. With traditional models, users have little visibility into or control over the model\u2019s internal reasoning process.<\/p>\n\n\n\n

Google\u2019s approach allows developers to optimize for different scenarios. For simple queries like language translation or basic information retrieval, thinking can be disabled for maximum cost efficiency. For complex tasks requiring multi-step reasoning, such as mathematical problem-solving or nuanced analysis, the thinking function can be enabled and fine-tuned.<\/p>\n\n\n\n
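One simple way to operationalize this is a per-task routing table that maps request types to thinking budgets. The categories and budget values below are made-up tuning knobs, not Google recommendations:

```python
# Illustrative routing: cheap, simple calls skip reasoning entirely,
# while harder task types get progressively more thinking headroom.
BUDGETS = {
    "translation": 0,    # simple transform: thinking off
    "lookup": 0,         # basic information retrieval: thinking off
    "math": 8_192,       # multi-step problem-solving
    "analysis": 16_384,  # nuanced, open-ended work
}

def budget_for(task_type: str) -> int:
    """Return the thinking budget for a task, defaulting to a modest cap."""
    return BUDGETS.get(task_type, 1_024)

print(budget_for("translation"))  # 0
print(budget_for("math"))         # 8192
```

The budget chosen here would then be passed through on each API request, letting a single application mix zero-cost-reasoning calls with fully reasoned ones.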

A key innovation is the model\u2019s ability to determine how much reasoning is appropriate based on the query. Google illustrates this with examples: a simple question like \u201cHow many provinces does Canada have?\u201d requires minimal reasoning, while a complex engineering question about beam stress calculations would automatically engage deeper thinking processes.<\/p>\n\n\n\n

\u201cIntegrating thinking capabilities into our mainline Gemini models, combined with improvements across the board, has led to higher quality answers,\u201d Doshi said. \u201cThese improvements are true across academic benchmarks \u2013 including SimpleQA, which measures factuality.\u201d<\/p>\n\n\n\n

Google\u2019s AI week: Free student access and video generation join the 2.5 Flash launch<\/h2>\n\n\n\n

The release of Gemini 2.5 Flash comes during a week of aggressive moves by Google in the AI space. On Monday, the company rolled out Veo 2 video generation capabilities to Gemini Advanced subscribers, allowing users to create eight-second video clips from text prompts. Today, alongside the 2.5 Flash announcement, Google revealed that all U.S. college students will receive free access to Gemini Advanced until spring 2026 \u2014 a move interpreted by analysts as an effort to build loyalty among future knowledge workers.<\/p>\n\n\n\n

These announcements reflect Google\u2019s multi-pronged strategy to compete in a market dominated by OpenAI\u2019s ChatGPT, which reportedly sees over 800 million weekly users compared to Gemini\u2019s estimated 250-275 million monthly users, according to third-party analyses.<\/p>\n\n\n\n

The 2.5 Flash model, with its explicit focus on cost efficiency and performance customization, appears designed to appeal particularly to enterprise customers who need to carefully manage AI deployment costs while still accessing advanced capabilities.<\/p>\n\n\n\n

\u201cWe\u2019re super excited to start getting feedback from developers about what they\u2019re building with Gemini Flash 2.5 and how they\u2019re using thinking budgets,\u201d Doshi said.<\/p>\n\n\n\n

Beyond the preview: What businesses can expect as Gemini 2.5 Flash matures<\/h2>\n\n\n\n

While this release is in preview, the model is already available for developers to start building with, though Google has not specified a timeline for general availability. The company indicates it will continue refining the dynamic thinking capabilities based on developer feedback during this preview phase.<\/p>\n\n\n\n

For enterprise AI adopters, this release represents an opportunity to experiment with more nuanced approaches to AI deployment, potentially allocating more computational resources to high-stakes tasks while conserving costs on routine applications.<\/p>\n\n\n\n

The model is also available to consumers through the Gemini app, where it appears as \u201c2.5 Flash (Experimental)\u201d in the model dropdown menu, replacing the previous 2.0 Thinking (Experimental) option. This consumer-facing deployment suggests Google is using the app ecosystem to gather broader feedback on its reasoning architecture.<\/p>\n\n\n\n

As AI becomes increasingly embedded in business workflows, Google\u2019s approach with customizable reasoning reflects a maturing market where cost optimization and performance tuning are becoming as important as raw capabilities \u2014 signaling a new phase in the commercialization of generative AI technologies.<\/p>\n