Microsoft-backed startup OpenAI began the rollout of GPT-4, a powerful artificial intelligence model that succeeds the technology behind the wildly popular ChatGPT. GPT-4 is “multimodal”, which means it can generate content from both image and text prompts.
What is the difference between GPT-4 and GPT-3.5?
GPT-3.5 accepts only text prompts, while the latest version of the large language model can also take images as inputs, recognizing objects in a picture and analyzing them.
GPT-3.5 is limited to responses of about 3,000 words, while GPT-4 can generate responses of more than 25,000 words.
GPT-4 is 82% less likely than its predecessor to respond to requests for disallowed content and scores 40% higher on certain tests of factuality.
It will also let developers decide their AI’s tone, style and verbosity.
For example, GPT-4 can assume a Socratic style of conversation and respond to questions with questions.
The previous iteration of the technology had a fixed tone and style.
ChatGPT users will soon have the option to change the chatbot’s tone and style of responses, OpenAI said.
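The steerability described above is exposed to developers through a “system” message that prescribes how the model should behave. A minimal sketch in Python of how such a request payload might be assembled; the helper name `socratic_messages` is hypothetical, not part of any SDK:

```python
# Hedged sketch: a developer prescribes a Socratic tone for GPT-4 by
# placing instructions in a "system" message ahead of the user's input.
# The helper name `socratic_messages` is illustrative, not an SDK call.

def socratic_messages(question: str) -> list[dict]:
    """Build a chat payload whose system message sets a Socratic style."""
    return [
        {"role": "system",
         "content": ("You are a Socratic tutor. Respond to every "
                     "question with a guiding question of your own.")},
        {"role": "user", "content": question},
    ]

# This list would be sent as the `messages` field of a GPT-4 chat request.
payload = socratic_messages("Why is the sky blue?")
print(payload[0]["role"])
```

The point of the structure is that style instructions live in a separate message role, so the same user question can be answered in a different voice simply by swapping the system message.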
Capabilities of GPT-4
The latest version has outperformed its predecessor on the US bar exam and the Graduate Record Examination (GRE).
GPT-4 can also help individuals calculate their taxes, a demonstration by Greg Brockman, OpenAI’s president, showed.
The demo showed that GPT-4 could take a photo of a hand-drawn mock-up for a simple website and create a real one.
Be My Eyes, an app that caters to visually impaired people, will provide a virtual volunteer tool powered by GPT-4.
Limitations of GPT-4
As per OpenAI, GPT-4 has limitations similar to its prior versions and is “less capable than humans in many real-world scenarios”.
Inaccurate responses known as “hallucinations” have been a challenge for many AI programs, including GPT-4.
OpenAI said GPT-4 can rival human propagandists in many domains, especially when teamed up with a human editor.
It cited an example in which GPT-4 came up with plausible-seeming suggestions when asked how to get two parties to disagree with each other.
OpenAI Chief Executive Officer Sam Altman said GPT-4 was “most capable and aligned” with human values and intent, though “it is still flawed.”
GPT-4 generally lacks knowledge of events that occurred after September 2021, when the vast majority of its training data was cut off.
It also does not learn from experience.
Who can access GPT-4?
While GPT-4 can process both text and image inputs, only the text-input feature is available to ChatGPT Plus subscribers and, via a waitlist, to software developers; the image-input capability is not yet publicly available.
The subscription plan, which offers faster response times and priority access to new features and improvements, was launched in February and costs $20 per month.
GPT-4 powers Microsoft’s Bing AI chatbot and some features on language learning platform Duolingo’s subscription tier.