The GPT-5 Cheat Sheet: 13 Things to Know About OpenAI’s Latest AI Leap

In brief

  • GPT-5 launched with unified multimodal powers—text, images, voice, and live video—all in one package; no more juggling separate bots for every task.
  • Rollout starts today for all ChatGPT users, but power features and top speed go to Pro subscribers; Microsoft plugs it into Copilot and GitHub on day one.
  • OpenAI touts “expert-level reasoning” and a memory that never sleeps—plus major upgrades to coding, creative writing, and reliability.

OpenAI unveiled GPT-5 during a Thursday livestream, marking what the company called a qualitative shift in artificial intelligence capability after months of anticipation and multiple delays. The model is rolling out to all ChatGPT users throughout the day.

The release represents OpenAI’s attempt to unify its various AI technologies into a single system. The company described reasoning as central to its artificial general intelligence strategy, with the breakthrough eliminating previous trade-offs between speed and analytical depth. Users no longer need to choose between fast responses and deep reasoning capabilities—GPT-5 delivers both simultaneously.

Here’s a cheat sheet on what you need to know.

1. When can I get it?

GPT-5 rolls out today on ChatGPT and via its API. Microsoft has also incorporated GPT-5 into its products immediately, making it available through Copilot and GitHub Copilot.
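For developers, the switch should be a one-line model change. Here’s a minimal sketch using OpenAI’s official Python SDK; the model identifier "gpt-5" is an assumption, so check OpenAI’s model documentation for the exact name:

```python
# Minimal sketch: calling GPT-5 through the OpenAI Python SDK.
# Assumes the model is exposed under the identifier "gpt-5";
# confirm the exact name in OpenAI's model docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-5",
    messages=[{"role": "user", "content": "Explain the Bernoulli effect in two sentences."}],
)
print(response.choices[0].message.content)
```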

If you’ve updated your Edge browser and use Copilot there, you should be able to try it now.

2. Does everyone get the same version?

Yes, sort of: Free-tier users will start with the standard GPT-5 before transitioning to a lighter “GPT-5 mini” version once they deplete their usage quota. Pro subscribers ($200/month) get unlimited access to the full model, while Plus subscribers ($20/month) get access to standard GPT-5.

Pro subscribers can run GPT-5 at its highest intelligence level with additional features like early access to its advanced agents, unlimited usage, more capabilities for deep research, priority access, and advanced voice mode with higher limits for video and screen sharing.

3. What does multimodal mean? Does a separate image generator go away?

Multimodal means GPT-5 can process and generate different types of content (text, images, voice, and now even video) all within the same conversation. In the demo, the model also showed enhanced foreign-language understanding on complex tasks, generating a complete website in French with proper pronunciation.

Instead of juggling separate models for vision, reasoning (the “o” models), and everyday chat, GPT-5 can do everything on its own. The one exception is video generation, which stays with Sora (see question 9 below).
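Because image understanding now lives in the same model, a single request can mix text and images directly. A sketch, assuming GPT-5 accepts the same multimodal message format the API already used for earlier models (the image URL is a placeholder):

```python
# Sketch: one request mixing text and an image, using the multimodal
# message format the OpenAI API already supported for earlier models.
# The "gpt-5" identifier and the image URL are assumptions/placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is happening in this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```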

4. How big is the context window and why does it matter?

GPT-5 supports a 256,000-token context window for input in ChatGPT, while the API accepts up to 272,000 input tokens and emits a maximum of 128,000 reasoning and output tokens, for a total context length of 400,000 tokens.

This means it can process roughly 200,000 words at once—equivalent to a long novel. The larger context window allows GPT-5 to maintain coherent conversations over much longer interactions and analyze entire codebases or lengthy documents without losing track of important details.
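The arithmetic behind those headline numbers is straightforward; the only assumption below is the common rule of thumb that one token is roughly 0.75 English words:

```python
# Back-of-the-envelope check of GPT-5's advertised API limits.
MAX_INPUT_TOKENS = 272_000
MAX_OUTPUT_TOKENS = 128_000   # includes hidden reasoning tokens

total = MAX_INPUT_TOKENS + MAX_OUTPUT_TOKENS
print(f"Total context length: {total:,} tokens")  # 400,000

# ~0.75 words per token is a rule of thumb, not an official figure.
print(f"~{int(MAX_INPUT_TOKENS * 0.75):,} words of input")  # ~204,000
```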

That said, this window is not especially big by today’s standards. For comparison, Gemini 2.5 is capable of handling 1 million tokens.

5. What new features does it have?

None, really, but some of its skills are upgraded to such a degree that they will feel like new features.

6. So what’s so great about it?

GPT-5 is more powerful in just about every way. For instance, it demonstrated remarkable coding capabilities during the presentation, writing over 400 lines of code in two minutes when prompted to create a Bernoulli effect simulation from scratch. Other cool things shown off in the demo:

  • Voice interactions sound less robotic, and newly introduced live video capabilities match competitors like Gemini Live.
  • The model can now analyze uploaded images and incorporate them into its responses.
  • It’s better at agentic tasks and is supposedly able to handle real-world applications and explain its reasoning.
  • Next week, users will be able to integrate Gmail and Google Calendar, which will allow it to be a much better assistant.

7. Has pricing changed?

ChatGPT subscription pricing remains unchanged at $20/month for Plus and $200/month for Pro.

For API users, GPT-5 costs $1.25 per million input tokens and $10.00 per million output tokens for the standard model. GPT-5 mini costs $0.25 per million input tokens and $2.00 per million output tokens, while GPT-5 nano runs $0.05 for input and $0.40 for output.
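To put those per-million-token rates in concrete terms, here’s a quick cost calculation for a single maxed-out request (the mini and nano model identifiers are assumptions; the prices are the published ones above):

```python
# What one large request could cost at the published GPT-5 API rates
# (USD per million tokens). Model names other than "gpt-5" are assumed.
PRICES = {
    "gpt-5":      {"input": 1.25, "output": 10.00},
    "gpt-5-mini": {"input": 0.25, "output": 2.00},
    "gpt-5-nano": {"input": 0.05, "output": 0.40},
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at per-million-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# A request that maxes out the context: 272k tokens in, 128k out.
for model in PRICES:
    print(f"{model}: ${cost(model, 272_000, 128_000):.2f}")
# gpt-5: $1.62, gpt-5-mini: $0.32, gpt-5-nano: $0.06
```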

This makes the model competitive with offerings from other companies, and even cheaper than OpenAI’s own models like GPT-4.1 or o1-pro, which costs a whopping $600 per million output tokens.

8. Are we at AGI yet?

No. However, the company positioned reasoning as “at the heart of our AGI program.”

The model represents significant progress but remains focused on specific tasks rather than matching human intelligence across all domains. For instance, GPT-5 is great at language tasks but lacks the general intelligence required to perform a wide range of activities independently. It’s not yet self-teaching or self-adapting.

9. Can GPT-5 generate videos?

Not yet. While video generation wasn’t included in the initial release, OpenAI has Sora for video creation as a separate product.

CEO Sam Altman previously indicated that future versions would support video “eventually.”

The current version does understand live video, however, so it could watch you try to fix a bike and provide live instructions.

10. How reliable is it compared to previous models?

OpenAI reported that GPT-5 is “significantly less deceptive” than previous models, addressing one of the most persistent challenges in large language model deployment.

On factual accuracy benchmarks, GPT-5 makes approximately 80% fewer factual errors than o3, making it substantially more trustworthy for enterprise applications according to Jakub Pachocki, OpenAI’s chief scientist.

11. What about memory and personalization?

GPT-5 will supposedly offer better persistent memory, remembering facts, preferences, and instructions across multiple conversations, even if you close the app and return days later. GPT-4’s memory was limited, and often faded days after a session ended.

The company said you can now set long-term objectives (e.g., “help me lose 10 pounds in a healthy way” or “help me prepare for my physics test”), and GPT-5 will adapt its responses to proactively align with your goals.

12. How private is my personal data?

Altman previously acknowledged that OpenAI might have to hand over a user’s personal data to the government if legally required to do so.

13. Do I need to switch between different models anymore?

Not anymore—unless you want to generate video via Sora. With GPT-5’s launch, OpenAI expressed confidence in deprecating all previous models.

The company designed GPT-5 to handle all use cases that previously required specialized models, though users can still choose between GPT-5, GPT-5 mini, and GPT-5 nano based on speed and cost requirements.

Source: https://decrypt.co/334102/gpt-5-cheat-sheet-13-things-openai-latest-ai-leap