GPT-4 is here - live demo today!

Broader knowledge and skills, multimodal input of text and images, the capacity to process much longer texts, programmatic access coming soon, not too expensive, and rolling out from today.

I just received an e-mail from OpenAI explaining that GPT-4 is here, that it will be multimodal as anticipated (i.e. accepting text and images as input), that (of course) it outperforms ChatGPT, that it will be available through OpenAI’s API, and… that they will feature a live demo for developers today at 1 pm PDT!

As anticipated by its partner Microsoft last week, OpenAI today released GPT-4, which it describes as its most capable model yet. And they are already starting to roll it out to API users today. In fact, if you are a paying ChatGPT Plus subscriber, you can try GPT-4 on your account right now (while other users can join the waitlist already in place).

Official release

The official OpenAI page for the release claims that GPT-4 can solve difficult problems with greater accuracy than ChatGPT and other OpenAI models thanks to its broader general knowledge and its advanced “reasoning” capabilities. The release page also explains that GPT-4 can process blocks of text 8 times larger than those that ChatGPT accepts: around 25,000 words, or roughly 32,000 tokens. Moreover, GPT-4 accepts not only text inputs but also images, which it can understand and describe in logical ways. A video on the release…
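For developers waiting on API access, calling GPT-4 should look essentially like the chat completions calls already used for ChatGPT, just with the new model name. Here is a minimal sketch, assuming the openai Python package available at the time of writing and a “gpt-4” model identifier; the prompt is illustrative, and you need your own API key plus access granted through the waitlist.

```python
# Minimal sketch of a GPT-4 call through OpenAI's chat completions API.
# Assumes the openai Python package (pre-1.0 interface) and that your
# account has been granted access to the "gpt-4" model via the waitlist.
import openai

openai.api_key = "sk-..."  # your own API key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed model identifier; availability depends on the rollout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the main new capabilities of GPT-4."},
    ],
)

# The reply text sits in the first choice's message content.
print(response["choices"][0]["message"]["content"])
```

Note that at launch the API rollout covers text inputs; image inputs are being previewed separately, so the sketch above sticks to a text-only request.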

--

LucianoSphere (Luciano Abriata, PhD)

https://www.lucianoabriata.com | Scientific writing, technology integration, programming, biotech, bioinformatics. | Have a job for me? Contact me in ES, FR, EN, or IT.