TechCrunch – July 7, 2023
OpenAI today announced the general availability of GPT-4, its latest text-generating model, through its API.
Starting this afternoon, all existing OpenAI API developers “with a history of successful payments” can access GPT-4. The company plans to open up access to new developers by the end of this month, and then start raising availability limits after that “depending on compute availability.”
“Millions of developers have requested access to the GPT-4 API since March, and the range of innovative products leveraging GPT-4 is growing every day,” OpenAI wrote in a blog post. “We envision a future where chat-based models can support any use case.”
GPT-4 can generate text (including code) and accept image and text inputs — an improvement over GPT-3.5, its predecessor, which only accepted text — and performs at “human level” on various professional and academic benchmarks. Like previous GPT models from OpenAI, GPT-4 was trained using publicly available data, including from public web pages, as well as data that OpenAI licensed.
The image-understanding capability isn’t available to all OpenAI customers just yet. OpenAI is testing it with a single partner, Be My Eyes, to start, and hasn’t indicated when it’ll open it up to the wider customer base.
It’s worth noting that, as with even the best generative AI models today, GPT-4 isn’t perfect. It “hallucinates” facts and makes reasoning errors, sometimes with confidence. And because it doesn’t learn from experience, it can repeat mistakes, such as introducing security vulnerabilities into code it generates.
In the future, OpenAI says that it’ll allow developers to fine-tune GPT-4 and GPT-3.5 Turbo, one of its other recent but less capable text-generating models (and one of the original models powering ChatGPT), with their own data, as has long been possible with several of OpenAI’s other text-generating models. That capability should arrive later this year, according to OpenAI.
Since the reveal of GPT-4 in March, the generative AI competition has grown fiercer. Recently, Anthropic expanded the context window for Claude — its flagship text-generating AI model, still in preview — from 9,000 tokens to 100,000 tokens. (Context window refers to the text the model considers before generating additional text, while tokens represent raw text — e.g. the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.”)
Before that, GPT-4 held the crown in terms of context window, weighing in at 32,000 tokens on the high end. Generally speaking, models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic.
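The two ideas above can be made concrete with a toy sketch. This is not OpenAI’s actual BPE tokenizer; the subword vocabulary is a hypothetical one chosen so that “fantastic” splits into “fan,” “tas” and “tic” as in the example, and the context window is modeled as a simple token cutoff:

```python
# Toy illustration of subword tokenization and a context window.
# NOT OpenAI's real tokenizer: real tokenizers learn their vocabularies
# from data; this hypothetical vocabulary just mirrors the article's example.
SUBWORDS = {"fan", "tas", "tic"}

def tokenize(word: str) -> list[str]:
    """Greedily match the longest known subword at each position,
    falling back to single characters so any input can be tokenized."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in SUBWORDS or len(piece) == 1:
                tokens.append(piece)
                i = j
                break
    return tokens

def fit_context(tokens: list[str], window: int) -> list[str]:
    """A model only 'sees' the most recent `window` tokens; earlier
    tokens are dropped, which is why small windows 'forget' context."""
    return tokens[-window:]
```

Here `tokenize("fantastic")` yields `["fan", "tas", "tic"]`, and a two-token window would drop the first of those three tokens.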
In a related announcement today, OpenAI said that it’s making its DALL-E 2 and Whisper APIs generally available — DALL-E 2 being OpenAI’s image-generating model and “Whisper” referring to the company’s speech-to-text model. The company also said that it plans to deprecate old models available through its API to “optimize [its] compute capacity.” (Over the past few months, thanks in large part to the exploding popularity of ChatGPT, OpenAI has struggled to keep up with demand for its generative models.)
Starting January 4, 2024, certain older OpenAI models — specifically GPT-3 and its derivatives — will no longer be available, and will be replaced with new “base GPT-3” models that one would presume are more compute-efficient. Developers using the old models will have to manually upgrade their integrations by January 4, and those who wish to continue using fine-tuned old models beyond that date will need to fine-tune replacements atop the new base GPT-3 models.
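On the developer side, a migration like this is often handled with a thin client-side shim that routes deprecated model names to their replacements. A minimal sketch, using hypothetical model names (the article doesn’t say what the new base GPT-3 models will be called):

```python
# Hypothetical client-side migration shim for the January 4, 2024 cutoff.
# The model names below are illustrative placeholders, not real identifiers.
import warnings

DEPRECATED_MODELS = {
    "legacy-gpt3": "base-gpt3-replacement",           # hypothetical names
    "legacy-gpt3-finetune": "base-gpt3-replacement",  # fine-tunes must be retrained
}

def resolve_model(requested: str) -> str:
    """Swap a deprecated model name for its replacement, warning the
    caller so the forced upgrade doesn't go unnoticed."""
    if requested in DEPRECATED_MODELS:
        replacement = DEPRECATED_MODELS[requested]
        warnings.warn(
            f"{requested} shuts down January 4, 2024; routing to {replacement}"
        )
        return replacement
    return requested
```

Note that, per OpenAI’s announcement, fine-tuned models can’t simply be renamed this way: developers will need to re-run fine-tuning on top of the new base models.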
“We will be providing support to users who previously fine-tuned models to make this transition as smooth as possible,” OpenAI wrote. “In the coming weeks, we will reach out to developers who have recently used these older models, and will provide more information once the new completion models are ready for early testing.”