At its first-ever Developer Conference, OpenAI launched a new model called GPT-4 Turbo. It improves on GPT-4 across the board and brings many of the changes that developers and general users have been requesting for a long time. In addition, the new model's knowledge cutoff has been updated to April 2023, and it is much cheaper to run. Read on to learn more about OpenAI's GPT-4 Turbo model.
The GPT-4 Turbo model is here!
The GPT-4 Turbo model supports a context window of 128K tokens, even larger than Claude's 100K context length. OpenAI's GPT-4 model was generally available with a maximum context of 8K tokens, with 32K reserved for select users. According to OpenAI, the new model can now process the equivalent of more than 300 pages of a book in one go, which is impressive.
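The 300-page figure roughly checks out with back-of-the-envelope arithmetic. As a sketch, assuming about 300 words per book page and the common rule of thumb of roughly 1.33 tokens per English word (both numbers are our assumptions, not OpenAI's):

```python
# Rough sanity check of the "300 pages" claim for a 128K-token context.
# Assumptions: ~300 words per book page, ~1.33 tokens per English word.
CONTEXT_TOKENS = 128_000
TOKENS_PER_WORD = 1.33
WORDS_PER_PAGE = 300

words = CONTEXT_TOKENS / TOKENS_PER_WORD
pages = words / WORDS_PER_PAGE
print(round(pages))  # roughly 320 pages under these assumptions
```

With denser pages or a higher tokens-per-word ratio the estimate drops, but it lands in the right ballpark either way.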
Not to forget: OpenAI has finally brought the knowledge cutoff up to April 2023 with the GPT-4 Turbo model. On the user side, the ChatGPT experience has improved as well, and users can start using the GPT-4 Turbo model today. Better still, you no longer have to select a particular mode to accomplish a task. ChatGPT now smartly chooses what to use when needed: it can browse the web, use a plugin, analyze code, and more, all in one mode.
Many new things have been announced for developers as well. First, the company launched a new text-to-speech (TTS) model that generates remarkably natural speech in six preset voices. Additionally, OpenAI released the next version of its open-source speech recognition model, Whisper V3, which will be available via the API soon.
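To make the six presets concrete, here is a minimal sketch of how a TTS request might be assembled. The voice names are the presets announced at DevDay; the payload shape and the `tts-1` model name are our assumptions for illustration, not the official client API:

```python
# Sketch of a TTS request payload (shape is an assumption, not the official SDK).
PRESET_VOICES = {"alloy", "echo", "fable", "onyx", "nova", "shimmer"}

def build_tts_request(text: str, voice: str = "alloy") -> dict:
    """Assemble a hypothetical TTS request for one of the six preset voices."""
    if voice not in PRESET_VOICES:
        raise ValueError(f"unknown voice: {voice}")
    return {"model": "tts-1", "voice": voice, "input": text}

payload = build_tts_request("Hello from GPT-4 Turbo!")
```

In a real integration, the payload would be sent to the audio endpoint and the response saved as an audio file.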
What’s interesting is that APIs for DALL·E 3, GPT-4 Turbo with Vision, and the new TTS model were all released today. Coca-Cola, for instance, is launching a Diwali campaign that lets customers generate Diwali cards using the DALL·E 3 API. Moving on, there is a new JSON mode that constrains the model to respond with valid JSON output.
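JSON mode matters because the model's reply can be parsed directly instead of being scraped out of prose. A minimal sketch, assuming the request carries a `response_format` of `{"type": "json_object"}` (the model name and sample reply below are illustrative assumptions):

```python
import json

# Hypothetical request enabling JSON mode; the payload shape is an assumption.
request = {
    "model": "gpt-4-1106-preview",  # assumed GPT-4 Turbo model name
    "response_format": {"type": "json_object"},
    "messages": [{"role": "user", "content": "List two primary colors as JSON."}],
}

# With JSON mode on, a reply like this parses without any cleanup:
sample_reply = '{"colors": ["red", "blue"]}'
data = json.loads(sample_reply)
print(data["colors"])
```

Without JSON mode, developers typically had to strip markdown fences or retry when the model wrapped JSON in explanatory text.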
Additionally, function calling has been improved in the newer model. OpenAI is also giving developers more control over the model's behavior: you can now set the seed parameter to obtain consistent, reproducible outputs.
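The idea behind the seed parameter is the same as seeding any pseudo-random generator: fix the seed, and repeated runs should produce (near-)identical output. A local sketch of the principle, plus a hypothetical request payload (the payload shape and model name are assumptions):

```python
import random

def sample_with_seed(seed: int) -> list[int]:
    """Seeded sampling: an independent generator with a fixed seed
    yields the same sequence every time it is constructed."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Same seed, same output -- the property the API's seed parameter targets.
assert sample_with_seed(42) == sample_with_seed(42)

# In an API call, the seed would simply travel with the other parameters:
request = {"model": "gpt-4-1106-preview", "seed": 42, "messages": []}
```

This is useful for regression testing prompts, since a changed output then signals a real change rather than sampling noise.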
In terms of fine-tuning support, developers can now apply for GPT-4 fine-tuning under the experimental access program. GPT-4 has also received a higher rate limit: the tokens-per-minute cap has been doubled. Finally, on price, the GPT-4 Turbo model is significantly cheaper than GPT-4. It costs 1 cent per 1,000 input tokens and 3 cents per 1,000 output tokens. Effectively, GPT-4 Turbo is 2.75x cheaper than GPT-4.
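The savings are easy to work out from the stated per-token rates. A quick sketch comparing GPT-4 Turbo ($0.01 / $0.03 per 1K input/output tokens) against GPT-4's published $0.03 / $0.06 rates; the token mix in the example is our assumption, and the exact savings ratio depends on that mix:

```python
def cost(input_tokens: int, output_tokens: int,
         in_rate: float, out_rate: float) -> float:
    """Dollar cost given token counts and per-1K-token rates."""
    return (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate

# Example workload (assumed): 100K input tokens, 20K output tokens.
turbo = cost(100_000, 20_000, 0.01, 0.03)  # $1.00 + $0.60 = $1.60
gpt4 = cost(100_000, 20_000, 0.03, 0.06)   # $3.00 + $1.20 = $4.20
print(f"GPT-4 Turbo: ${turbo:.2f}, GPT-4: ${gpt4:.2f}")
```

Input tokens are 3x cheaper and output tokens 2x cheaper; OpenAI's quoted 2.75x figure is a blended rate across a typical input/output mix.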
What do you think of the new GPT-4 Turbo model? Let us know in the comment section below.