{"id":1511,"date":"2025-05-10T12:00:43","date_gmt":"2025-05-10T12:00:43","guid":{"rendered":"https:\/\/violethoward.com\/new\/openai-introduces-reinforcement-fine-tuning-for-o4-model\/"},"modified":"2025-05-10T12:00:43","modified_gmt":"2025-05-10T12:00:43","slug":"openai-introduces-reinforcement-fine-tuning-for-o4-model","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/openai-introduces-reinforcement-fine-tuning-for-o4-model\/","title":{"rendered":"OpenAI introduces reinforcement fine-tuning for o4 model"},"content":{"rendered":" \r\n
OpenAI today announced on its developer-focused account on the social network X that third-party software developers can now access reinforcement fine-tuning (RFT) for its new o4-mini language reasoning model. This lets them customize a new, private version of the model based on their enterprise's unique products, internal terminology, goals, employees, processes and more.

Essentially, this capability lets developers take the model available to the general public and tweak it to better fit their needs using OpenAI's platform dashboard.

Then, they can deploy it through OpenAI's application programming interface (API), another part of its developer platform, and connect it to their internal employee computers, databases and applications.

Once deployed, employees and leaders at the company can more easily use the RFT version of the model through a custom internal chatbot or custom OpenAI GPT to pull up private, proprietary company knowledge, answer specific questions about company products and policies, or generate new communications and collateral in the company's voice.

One cautionary note, however: research has shown that fine-tuned models may be more prone to jailbreaks and hallucinations, so proceed carefully.

The launch expands the company's model optimization tools beyond supervised fine-tuning (SFT) and introduces more flexible control for complex, domain-specific tasks.

In addition, OpenAI announced that supervised fine-tuning is now supported for its GPT-4.1 nano model, the company's most affordable and fastest offering to date.

## How does Reinforcement Fine-Tuning (RFT) help organizations and enterprises?

RFT creates a new version of OpenAI's o4-mini reasoning model that is automatically adapted to the goals of the user or their enterprise or organization.

It does so by applying a feedback loop during training, which developers at large enterprises (or independent developers working on their own) can now initiate relatively simply and affordably through OpenAI's online developer platform.

Instead of training on a set of questions with fixed correct answers, which is what traditional supervised learning does, RFT uses a grader to score multiple candidate responses per prompt.

The training algorithm then adjusts the model's weights so that high-scoring outputs become more likely.
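To make that loop concrete, here is a deliberately tiny, illustrative sketch of the idea in Python: sample several candidate answers per prompt, score each with a grader, then shift the policy toward the higher-scoring answers. The candidate answers, the grader and the update rule are all invented for the example; OpenAI runs the real training on its own infrastructure, so this is a mental model rather than its implementation.

```python
# Toy illustration of a reinforcement fine-tuning loop: sample candidates,
# grade them (0.0-1.0), and make above-average candidates more likely.
# Everything here is invented for illustration; it is not OpenAI's code.
import math
import random

CANDIDATES = [
    "Refunds are available within 30 days with a receipt.",  # clear, correct policy
    "Refunds are sometimes possible, it depends.",            # vague
    "No refunds, ever.",                                      # wrong
]
logits = [0.0, 0.0, 0.0]  # toy stand-in for the model's weights

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def grader(answer: str) -> float:
    """Toy grader: rewards answers that state the 30-day policy clearly."""
    if "30 days" in answer:
        return 1.0
    return 0.3 if "sometimes" in answer else 0.0

LEARNING_RATE = 1.0
for _ in range(50):
    probs = softmax(logits)
    # Sample a small group of candidate responses and score each one.
    sampled = [random.choices(range(len(CANDIDATES)), weights=probs)[0] for _ in range(4)]
    scores = [grader(CANDIDATES[i]) for i in sampled]
    baseline = sum(scores) / len(scores)
    # Policy-gradient-style update: above-average answers become more likely.
    for i, score in zip(sampled, scores):
        logits[i] += LEARNING_RATE * (score - baseline)

print({c: round(p, 2) for c, p in zip(CANDIDATES, softmax(logits))})
```

In the actual service, the sampling and weight updates happen on OpenAI's side; the developer's main contribution is the prompts and the grader.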
This structure allows customers to align models with nuanced objectives such as an enterprise's "house style" of communication and terminology, safety rules, factual accuracy or internal policy compliance.

To perform RFT, users need to:

1. Define a grading function, or choose one of OpenAI's model-based graders, to score candidate outputs
2. Upload a training dataset of prompts, along with a validation split
3. Configure and launch a training job through the fine-tuning dashboard or the API
4. Monitor progress, review checkpoints and iterate on the data or grading logic
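For teams that prefer to script that workflow rather than click through the dashboard, the launch goes through the same fine-tuning endpoints the platform already exposes. The sketch below shows the general shape of that call with the official openai Python SDK; the file names, the o4-mini snapshot string and, in particular, the exact keys inside the `method` and grader dictionaries are assumptions to be checked against OpenAI's RFT documentation, not values taken from this announcement.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload training and validation prompt files (JSONL prepared beforehand).
train = client.files.create(file=open("rft_train.jsonl", "rb"), purpose="fine-tune")
valid = client.files.create(file=open("rft_valid.jsonl", "rb"), purpose="fine-tune")

# 2. Launch the reinforcement fine-tuning job against an o4-mini snapshot,
#    attaching a grader that compares each sampled answer to a reference field
#    in the dataset. The snapshot name and the keys inside `method` are
#    assumptions; confirm them against OpenAI's RFT documentation.
job = client.fine_tuning.jobs.create(
    model="o4-mini-2025-04-16",
    training_file=train.id,
    validation_file=valid.id,
    method={
        "type": "reinforcement",
        "reinforcement": {
            "grader": {
                "type": "string_check",
                "name": "matches_reference",
                "input": "{{sample.output_text}}",
                "reference": "{{item.reference_answer}}",
                "operation": "eq",
            },
            "hyperparameters": {"n_epochs": 1},
        },
    },
)

# 3. Poll job status; the completed job exposes the new fine-tuned model name.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```

When the job finishes, the fine-tuned model name it returns can be used anywhere a base model name is accepted in the API.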
RFT currently supports only o-series reasoning models, and it is available for the o4-mini model.

## Early enterprise use cases

On its platform, OpenAI highlighted several early customers who have adopted RFT across diverse industries.

These cases shared common characteristics: clear task definitions, structured output formats and reliable evaluation criteria, all of which are essential for effective reinforcement fine-tuning.

RFT is available now to verified organizations. To help improve future models, OpenAI offers a 50% discount to teams that share their training datasets with OpenAI.

Interested developers can get started using OpenAI's RFT documentation and dashboard.
## Pricing and billing structure

Unlike supervised or preference fine-tuning, which is billed per token, RFT is billed based on the time spent actively training. Specifically:

- Core training time is billed at $100 per hour, prorated by the second
- Only training time that is successfully retained is billed; work lost to failures or discarded runs is not charged
- If OpenAI models are used as graders, the tokens those graders consume are billed separately at OpenAI's standard API rates

Here is an example cost breakdown:

| Scenario | Billable time | Cost |
| --- | --- | --- |
| 4 hours of training | 4 hours | $400 |
| 1.75 hours of training (prorated) | 1.75 hours | $175 |
| 2 hours of training plus 1 hour lost to a failure | 2 hours | $200 |
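The arithmetic behind that table is simple, and the short sketch below reproduces it under the assumptions just stated: a $100 hourly rate, prorated, applied only to training time that is actually retained. The scenarios mirror the table rows rather than adding any new pricing data.

```python
# Reproduces the example cost breakdown above: billable time is the training
# time actually retained (lost work is excluded), priced at $100 per hour,
# prorated. Scenario names and hours mirror the table; nothing new is added.
HOURLY_RATE_USD = 100.0

def rft_training_cost(retained_hours: float) -> float:
    """Cost of an RFT job given hours of successfully retained training time."""
    return retained_hours * HOURLY_RATE_USD

scenarios = {
    "4 hours of training": 4.0,
    "1.75 hours of training (prorated)": 1.75,
    "2 hours of training plus 1 hour lost to a failure": 2.0,  # only the 2 retained hours bill
}

for name, billable_hours in scenarios.items():
    print(f"{name}: ${rft_training_cost(billable_hours):,.2f}")
```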
This pricing model provides transparency and rewards efficient job design. To control costs, OpenAI encourages teams to keep graders lightweight and training runs well scoped.

OpenAI uses a billing method it calls "captured forward progress," meaning users are billed only for model training steps that were successfully completed and retained.
## So should your organization invest in RFTing a custom version of OpenAI's o4-mini or not?

Reinforcement fine-tuning introduces a more expressive and controllable method for adapting language models to real-world use cases.

With support for structured outputs, code-based and model-based graders, and full API control, RFT enables a new level of customization in model deployment. OpenAI's rollout emphasizes thoughtful task design and robust evaluation as the keys to success.
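Once a fine-tuning job has produced a custom model, it is called through the API like any other model, and the structured-output support mentioned above is often what makes it easy to wire into internal tools. The sketch below shows what such a call might look like with the openai Python SDK; the fine-tuned model identifier and the JSON schema are made-up placeholders, not values from OpenAI's announcement.

```python
# Illustrative call to an RFT-customized model requesting a structured
# (JSON schema) response. The model name is a placeholder for whatever
# identifier the finished fine-tuning job returns; the schema is an example.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="ft:o4-mini-2025-04-16:acme-corp::abc123",  # placeholder fine-tuned model id
    messages=[
        {"role": "system", "content": "Answer using the company's refund policy."},
        {"role": "user", "content": "Can I return a laptop after 45 days?"},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "policy_answer",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "eligible": {"type": "boolean"},
                    "explanation": {"type": "string"},
                },
                "required": ["eligible", "explanation"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON string matching the schema
```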
Developers interested in exploring this method can access documentation and examples via OpenAI's fine-tuning dashboard.

For organizations with clearly defined problems and verifiable answers, RFT offers a compelling way to align models with operational or compliance goals without building reinforcement learning infrastructure from scratch.