
OpenAI rolls out its most cost-efficient small model GPT-4o mini


OpenAI has introduced GPT-4o mini, its latest cost-efficient small model. The Sam Altman-led company aims to significantly broaden the range of AI applications by making intelligence more affordable.

The latest model scores 82% on MMLU (Massive Multitask Language Understanding) and currently outperforms GPT-4 on chat preferences on the LMSYS (Large Model Systems) leaderboard. It is priced at $0.15 per million input tokens and $0.60 per million output tokens, making it much more affordable than earlier models and over 60% cheaper than GPT-3.5 Turbo.
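At those published rates, per-request cost is straightforward to estimate. A minimal sketch (the token counts in the example are illustrative, not from OpenAI):

```python
# Published GPT-4o mini rates: $0.15 per 1M input tokens, $0.60 per 1M output tokens.
INPUT_RATE = 0.15 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.60 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at GPT-4o mini's published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative example: a 2,000-token prompt with a 500-token reply.
print(f"${estimate_cost(2_000, 500):.6f}")  # prints $0.000600
```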

MMLU is a benchmark that tests a model’s performance across various subjects. The LMSYS leaderboard ranks AI models based on their performance in various language-related tasks.

“Towards intelligence too cheap to meter: 15 cents per million input tokens, 60 cents per million output tokens, MMLU of 82%, and fast. Most importantly, we think people will really, really like using the new model,” Sam Altman, CEO of OpenAI, posted on X. 


Starting today, ChatGPT Free, Plus, and Team users can access GPT-4o mini in place of GPT-3.5. The company said that enterprise users will gain access starting next week.



“GPT-4o mini enables a broad range of tasks with its low cost and latency, such as applications that chain or parallelize multiple model calls (e.g., calling multiple APIs), pass a large volume of context to the model (e.g., full code base or conversation history), or interact with customers through fast, real-time text responses (e.g., customer support chatbots),” read the company’s blog post.

GPT-4o mini currently supports text and vision in the API. OpenAI stated that it plans to include support for text, image, video, and audio inputs and outputs. It also outperforms GPT-3.5 Turbo and other small models in both textual intelligence and multimodal reasoning benchmarks. 

It supports the same languages as GPT-4o and excels in function calling, enabling developers to create applications that interact with external systems. 
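Function calling lets a developer describe external tools to the model so it can return structured arguments for them rather than free text. A minimal sketch of a tool definition in the Chat Completions `tools` format (the `get_weather` function and its schema are hypothetical, for illustration only):

```python
# A hypothetical tool definition in the OpenAI Chat Completions
# function-calling format; the function name and parameters are illustrative.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city from a hypothetical external API.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Paris'",
                },
            },
            "required": ["city"],
        },
    },
}

# The definition would then be passed to the API roughly as:
# client.chat.completions.create(model="gpt-4o-mini", messages=..., tools=[get_weather_tool])
```

When the model decides the tool is needed, it responds with the function name and JSON arguments, which the application executes against the real external system.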


Edited by Kanishk Singh
