OpenAI Announces Realtime API, Prompt Caching and Vision Fine-Tuning on GPT-4o for Developers


OpenAI hosted its annual DevDay conference in San Francisco on Tuesday and announced several new upgrades to the application programming interface (API) version of ChatGPT, which developers can fine-tune to power other applications and software. The major introductions are the Realtime API, prompt caching, and vision fine-tuning for GPT-4o. The company is also making the process of model distillation easier for developers. During the event, OpenAI also announced the completion of its funding round, in which it raised $6.6 billion (roughly Rs. 55 thousand crore).

OpenAI Announces New Features for Developers

In several blog posts, the AI firm highlighted the new features and tools for developers. The first is the Realtime API, which is rolling out to developers on the paid tiers of the API. The new capability offers a low-latency multimodal experience, enabling speech-to-speech conversations similar to ChatGPT's Advanced Voice Mode. Developers can also make use of the six preset voices that were previously added to the API.

Introducing the Realtime API—build speech-to-speech experiences into your applications. Like ChatGPT's Advanced Voice, but for your own app. Rolling out in beta for developers on paid tiers. https://t.co/LQBC33Y22U pic.twitter.com/udDhTodwKl

— OpenAI Developers (@OpenAIDevs) October 1, 2024
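
For developers exploring the beta, below is a minimal sketch of a Realtime API session, assuming the WebSocket endpoint, headers, and event names OpenAI documented at launch (these are beta details and may change). The third-party websockets Python package and the prompt text are illustrative choices.

```python
# Minimal sketch of a Realtime API session over WebSocket (Python,
# using the third-party "websockets" package). Endpoint, headers and
# event names follow OpenAI's beta docs at launch and may change.
import asyncio
import json
import os

import websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",  # opt in to the beta
}

async def main():
    # Note: the keyword is "additional_headers" in websockets>=14.
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Ask the model to generate a response.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Say hello to the DevDay audience.",
            },
        }))
        # Stream server events until the response is complete.
        async for message in ws:
            event = json.loads(message)
            if event["type"] == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event["type"] == "response.done":
                break

asyncio.run(main())
```

A full speech-to-speech app would also stream microphone audio to the session and play back audio output, which this text-only sketch omits.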

Another new introduction is prompt caching in the API. OpenAI is introducing this feature as a way for developers to save costs on frequently used prompts. The company noticed that developers often send the same input prompts repeatedly, for instance when editing a codebase or having a multi-turn conversation with the chatbot. With prompt caching, recently used input prompts are now reused at a discounted rate and are also processed faster. The new rates can be checked here.
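
Per OpenAI's documentation, caching kicks in automatically for sufficiently long prompts, so the main code-level consideration is keeping the static portion of a prompt at the front of the request. A minimal sketch with the official openai Python SDK follows; the system prompt and helper function are illustrative:

```python
# Sketch: check whether a repeated prompt hit the cache, using the
# official "openai" Python SDK. Caching applies automatically, so the
# only code-level concern is keeping the static part of the prompt
# (e.g. a long system message) at the front of the request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LONG_SYSTEM_PROMPT = "You are a code-review assistant. ..."  # static prefix

def review(snippet: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": LONG_SYSTEM_PROMPT},
            {"role": "user", "content": snippet},  # only this part varies
        ],
    )
    # usage.prompt_tokens_details.cached_tokens reports how many input
    # tokens were billed at the discounted, cached rate (may be absent
    # on models that do not support caching).
    print("cached tokens:", response.usage.prompt_tokens_details.cached_tokens)
    return response.choices[0].message.content
```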

The GPT-4o model can also be fine-tuned for vision-related tasks. Developers can customise the large language model (LLM) by training it on a fixed set of visual data, improving its performance on their specific use case. As per the blog post, GPT-4o's performance on vision tasks can be improved with as few as 100 images.
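
As a sketch of what that workflow looks like with the openai Python SDK: training examples are JSONL chat messages whose user turns mix text and image URLs, uploaded as a file and attached to a fine-tuning job. The file name, image URL, and labels below are illustrative placeholders, and the base-model identifier reflects the one documented at launch:

```python
# Sketch of a vision fine-tuning job with the "openai" Python SDK.
# Each JSONL line is a chat example whose user turn mixes text and an
# image; "road_signs.jsonl" and the URL are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()

example = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What traffic sign is shown?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/sign_042.jpg"}},
        ]},
        {"role": "assistant", "content": "A yield sign."},
    ]
}
with open("road_signs.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")  # repeat for ~100 labelled images

training_file = client.files.create(
    file=open("road_signs.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",  # vision fine-tuning base model at launch
)
print(job.id)
```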

Finally, the company is also making model distillation easier for developers. Model distillation is the process of building smaller, fine-tuned AI models from a larger language model. Earlier, the process was convoluted and required a multi-step approach. Now, OpenAI is offering new tools: Stored Completions (to easily generate distillation datasets), Evals (to run custom evaluations and measure performance), and Fine-Tuning (to fine-tune the smaller models directly after running an Eval).
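
A hedged sketch of that flow with the openai Python SDK: production calls to the large "teacher" model are saved to Stored Completions via the store parameter, and a dataset exported from them is later used to fine-tune a smaller "student" model. The metadata tag, prompt, and file ID below are illustrative placeholders:

```python
# Sketch of the distillation flow with the "openai" Python SDK.
# Step 1: save the large model's outputs as candidate training data by
# passing store=True; the metadata tag makes them easy to filter later.
from openai import OpenAI

client = OpenAI()

teacher_response = client.chat.completions.create(
    model="gpt-4o",                        # the large "teacher" model
    store=True,                            # save to Stored Completions
    metadata={"task": "support-replies"},  # illustrative tag
    messages=[{"role": "user", "content": "Summarise this ticket: ..."}],
)

# Step 2 (after exporting a dataset from Stored Completions in the
# dashboard and measuring a baseline with Evals): fine-tune a smaller
# "student" model on that file. The file ID here is a placeholder.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",  # exported distillation dataset
    model="gpt-4o-mini",          # the smaller student model
)
print(job.status)
```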

Notably, all of these features are currently in beta and will be made available to all developers on the paid version of the API at a later date. Further, the company said it will take steps to further reduce the costs of input and output tokens.
