OpenAI Unveils GPT-4 Turbo and Fine-Tuning Program for GPT-4

During its inaugural developer conference, OpenAI introduced GPT-4 Turbo, an enhanced iteration of its flagship text-generating AI model, GPT-4, touting it as both more capable and cheaper to run. GPT-4 Turbo comes in two variations: one designed solely for text analysis and another that comprehends both text and images. The text analysis model is currently available in preview via an API, with OpenAI planning a broader release in the coming weeks.

Pricing for GPT-4 Turbo is set at $0.01 per 1,000 input tokens (~750 words) and $0.03 per 1,000 output tokens. The cost of the image-processing GPT-4 Turbo will depend on image size, with a 1080×1080-pixel image incurring a cost of $0.00765.

OpenAI emphasizes a significant pricing advantage for GPT-4 Turbo: input tokens are priced 3x lower and output tokens 2x lower than for GPT-4.
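Using the per-1,000-token rates above, the cost of a single request can be estimated with a few lines of arithmetic. This is an illustrative sketch (the function and model labels are this article's own, not official identifiers):

```python
# USD per 1,000 tokens, from the announced pricing (labels are illustrative).
RATES = {
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
    "gpt-4-8k":    {"input": 0.03, "output": 0.06},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the announced rates."""
    r = RATES[model]
    return (input_tokens / 1000) * r["input"] + (output_tokens / 1000) * r["output"]

# A 4,000-token prompt with a 1,000-token reply:
turbo = estimate_cost("gpt-4-turbo", 4_000, 1_000)  # 0.04 + 0.03 = $0.07
gpt4  = estimate_cost("gpt-4-8k", 4_000, 1_000)     # 0.12 + 0.06 = $0.18
```

At these rates the same request costs well under half as much on GPT-4 Turbo as on GPT-4.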

Notable improvements in GPT-4 Turbo include an updated knowledge base. GPT-4 Turbo was trained on data up to April 2023, so it can address more recent events up to that cutoff.

Furthermore, GPT-4 Turbo substantially enlarges the context window to 128,000 tokens, four times the 32,000-token maximum of GPT-4. This expansion allows the model to consider far more of a conversation or document at once, producing more coherent and contextually relevant responses.
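To get a feel for what fits in these windows, a very rough estimate can use the ~750 words per 1,000 tokens ratio mentioned in the pricing above. This is only a back-of-the-envelope sketch; exact counts require a real tokenizer:

```python
# Rough check of whether a prompt fits in a context window, using the
# ~750 words ≈ 1,000 tokens rule of thumb. For exact counts you would
# use an actual tokenizer; this ratio is only an approximation.

GPT4_TURBO_WINDOW = 128_000  # tokens
GPT4_WINDOW = 32_000         # tokens (largest GPT-4 variant)

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~1,000 tokens per 750 words."""
    return round(len(text.split()) * 1000 / 750)

def fits(text: str, window: int, reserve_for_output: int = 4_000) -> bool:
    """True if the prompt plausibly leaves room in the window for a reply."""
    return approx_tokens(text) + reserve_for_output <= window

doc = "word " * 60_000  # ~60,000 words, roughly 80,000 tokens
print(fits(doc, GPT4_TURBO_WINDOW))  # fits in the 128K window
print(fits(doc, GPT4_WINDOW))        # far exceeds the 32K window
```

A document of roughly 60,000 words would overflow GPT-4's largest window several times over but fits comfortably in GPT-4 Turbo's.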

OpenAI also introduces a “JSON mode” feature in GPT-4 Turbo, ensuring that the model’s output is syntactically valid JSON. This feature is particularly valuable for web applications that need to exchange structured data reliably.
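As a sketch, a Chat Completions request enabling JSON mode might look like the following. The field names (`response_format` with type `json_object`) follow what OpenAI announced at the conference, but check the current API reference before relying on them; the reply shown is a made-up sample:

```python
import json

# Sketch of a Chat Completions request body with JSON mode enabled.
# Field names follow OpenAI's announced API; verify against current docs.
payload = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview model name
    "response_format": {"type": "json_object"},
    "messages": [
        # JSON mode still requires instructing the model to produce JSON.
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "List three primary colors."},
    ],
}
body = json.dumps(payload)

# Because the output is guaranteed to be valid JSON, the response content
# can be parsed directly without error handling for malformed text:
sample_reply = '{"colors": ["red", "blue", "yellow"]}'  # hypothetical reply
parsed = json.loads(sample_reply)
```

Without JSON mode, applications typically had to strip prose or retry when the model wrapped its answer in extra text; with it, `json.loads` on the response content is expected to succeed.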

OpenAI is also opening up fine-tuning for GPT-4, with more oversight and guidance than its GPT-3.5 program, reflecting the technical complexity of the process: achieving meaningful improvements over the base model has proved more challenging with GPT-4 than with its predecessor, GPT-3.5.

OpenAI is also doubling the tokens-per-minute rate limit for all paying GPT-4 customers while keeping pricing unchanged. GPT-4 will continue to be offered at $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens for the 8,000-token context window version, and $0.06 per 1,000 input tokens and $0.12 per 1,000 output tokens for the 32,000-token context window version.
