The tech world has been waiting eagerly for this: OpenAI has officially presented GPT-4 and is making many of its improvements available immediately. The AI's new ability to understand images is particularly exciting, but access will be very limited for the foreseeable future.

GPT-4: Access will be very limited for months

First things first: As expected, OpenAI will initially restrict access to GPT-4 to Plus subscribers. As the developers emphasize, the capacity of the new system will be "severely limited" even for paying customers, which is why there is an upper limit on requests at launch. "Depending on how traffic develops, we may introduce a new subscription tier for using GPT-4 on a larger scale," the company said in its blog post introducing GPT-4.

However, this step can only be taken once the new language model has been further optimized and the infrastructure has been expanded. According to OpenAI, it will probably take "several months" before the capacity for requests can be significantly increased. For now, the company does not want to comment on possible free access. But one thing already seems clear: relatively unrestricted requests, as users of GPT-3.5 know them, will no longer exist in the future. "We hope to be able to offer a certain number of free GPT-4 queries at some point," says OpenAI.

Exciting combination of text and image

GPT-4 is said to be significantly superior to the previous model, especially for complex queries. The language model is more reliable and able to "process much more nuanced instructions than GPT-3.5." One of the most important advances: so-called "AI hallucinations", i.e. moments in which the system freely invents facts, are said to have been significantly reduced. On benchmarks developed for machine learning models, GPT-4 is said to clearly beat all other major language models.

What is really new, however, is the ability to generate text output such as speech or code from input that consists of both text and images – we have summarized some examples in the gallery above showing that the AI handles even complex queries well. However, availability here is much more restricted: "The image inputs are still a research preview and are not publicly accessible," says OpenAI. It also announced that it is initially working exclusively with the company Be My Eyes to "prepare for wider availability."

What is Microsoft doing?

Microsoft Germany spoke for the first time about GPT-4 at an event last week. Andreas Braun, Chief Technical Officer, caused a stir when he announced that video-related functions would also be part of the update. In the official announcement of OpenAI, however, there is no mention of such features.

The group's next AI event is already scheduled for March 16th, where Microsoft intends to share information about the integration of AI into Office tools such as Word, Outlook & Co. at an online event entitled "The Future of Work with AI". It can be assumed that Microsoft will also provide information about its plans for GPT-4 there.
