Model pricing page

OpenAI GPT-5.4 pricing

This page gives builders a fast reference for OpenAI GPT-5.4 pricing, context limits, and the tier choices that most affect margin planning.

Why this page

Useful when you need a quick answer for standard vs batch vs priority pricing before shipping an AI feature.

  • Input / 1M: $2.50
  • Output / 1M: $15.00
  • Cached input / 1M: $0.25
  • Context window: 1,048,576 tokens
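The headline rates translate into per-request cost with straightforward arithmetic. A minimal sketch: the rates are the ones listed on this page, while the request sizes in the example are made-up assumptions, not provider data.

```python
# Headline GPT-5.4 rates from this page, in USD per 1M tokens.
INPUT_PER_M = 2.50
CACHED_INPUT_PER_M = 0.25
OUTPUT_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """USD cost of one request; cached prompt tokens bill at the cached-input rate."""
    fresh_input = input_tokens - cached_tokens
    return (fresh_input * INPUT_PER_M
            + cached_tokens * CACHED_INPUT_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: 3,000-token prompt with a 2,000-token cached prefix, 500-token reply.
cost = request_cost(3_000, 500, cached_tokens=2_000)  # -> 0.0105 USD
```

At these rates the cached prefix cuts that request's input cost from $0.0075 to $0.003, which is why the cached-input rate matters for chat workloads with long shared system prompts.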
Planning context

How teams usually use this model

Best fit for

  • customer-facing chat
  • workflow copilots
  • baseline production planning

Decision guide

Use GPT-5.4 when you want a mainstream flagship baseline and need to sanity-check whether a standard tier is good enough before paying for a premium step up.

Tier view

Current tier breakdown

Use this table when the provider offers multiple service or context tiers and you need something more precise than one headline number.

| Service tier | Context tier | Input / 1M | Cached / 1M | Output / 1M |
| ------------ | ------------ | ---------- | ----------- | ----------- |
| Batch        | Short        | $1.25      | $0.13       | $7.50       |
| Batch        | Long         | $2.50      | $0.26       | $11.25      |
| Flex         | Short        | $1.25      | $0.13       | $7.50      |
| Flex         | Long         | $2.50      | $0.26       | $11.25      |
| Standard     | Short        | $2.50      | $0.25       | $15.00      |
| Standard     | Long         | $5.00      | $0.50       | $22.50      |
| Priority     | Short        | $5.00      | $0.50       | $30.00      |
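To compare tiers for a given workload, the table can be encoded as a lookup keyed on (service tier, context tier). A minimal sketch; the rates are copied from the table above, and the workload sizes in the example are illustrative assumptions.

```python
# (service tier, context tier) -> (input, cached input, output), USD per 1M tokens,
# copied from the tier table on this page.
TIER_RATES = {
    ("batch", "short"):    (1.25, 0.13, 7.50),
    ("batch", "long"):     (2.50, 0.26, 11.25),
    ("flex", "short"):     (1.25, 0.13, 7.50),
    ("flex", "long"):      (2.50, 0.26, 11.25),
    ("standard", "short"): (2.50, 0.25, 15.00),
    ("standard", "long"):  (5.00, 0.50, 22.50),
    ("priority", "short"): (5.00, 0.50, 30.00),
}

def tier_cost(service: str, context: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost at a given tier, ignoring cached input for simplicity."""
    input_rate, _cached_rate, output_rate = TIER_RATES[(service, context)]
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 1M input + 100K output tokens, batch vs standard short-context.
batch = tier_cost("batch", "short", 1_000_000, 100_000)      # 2.00 USD
standard = tier_cost("standard", "short", 1_000_000, 100_000)  # 4.00 USD
```

For this example workload, batch is half the standard price, which is the kind of split that decides whether offline jobs should go through the batch tier.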
Source

Official reference

Tool Canopy keeps this page tied to the provider source so you can check the original pricing context when an update matters.

Last updated

2026-03-16

This page reflects the latest repo dataset snapshot for OpenAI / GPT-5.4.

Comparison context

Nearby pricing options to compare next

Use these adjacent pages when you need a faster alternative, a stronger model, or a better cost baseline before locking in a production default.

Recent notes

Latest change signals

2026-03-16

Added new model to tracked pricing dataset.

FAQ

Common questions about this pricing page

How much does this model cost right now?

GPT-5.4 is currently listed at $2.50 input, $15.00 output, and $0.25 cached input per 1M tokens, with a 1,048,576-token context window. If the provider offers multiple tiers, use the tier table above for the more exact split.

When is this model usually the better fit?

Use GPT-5.4 when you want a mainstream flagship baseline and need to sanity-check whether a standard tier is good enough before paying for a premium step up. Teams usually open this page for customer-facing chat, workflow copilots, or baseline production planning, then decide whether to move up to a stronger option or down to a cheaper default.

What should I check before I ship with this model?

Before shipping, check the tier split, whether cached-input pricing changes your workload math, and how this model looks on the compare page and pricing tracker. If you still need context, compare it against OpenAI GPT-5.4 Pro pricing and OpenAI GPT-5.4 API pricing and then run the token calculator for a monthly estimate.
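A monthly estimate of the kind the token calculator produces can also be sketched by hand. The rates below are this page's standard-tier numbers; the traffic figures are placeholder assumptions you should replace with your own usage data.

```python
# Monthly cost sketch at the standard tier ($2.50 input / $15.00 output per 1M tokens).
# Traffic figures are placeholder assumptions, not measured data.
REQUESTS_PER_DAY = 10_000
AVG_INPUT_TOKENS = 1_200
AVG_OUTPUT_TOKENS = 400
DAYS_PER_MONTH = 30

monthly_input_tokens = REQUESTS_PER_DAY * DAYS_PER_MONTH * AVG_INPUT_TOKENS
monthly_output_tokens = REQUESTS_PER_DAY * DAYS_PER_MONTH * AVG_OUTPUT_TOKENS
monthly_cost = (monthly_input_tokens * 2.50
                + monthly_output_tokens * 15.00) / 1_000_000  # -> 2700.0 USD
```

At these assumed volumes the bill is $2,700/month, and output tokens dominate even though the average reply is a third the length of the prompt, because the output rate is 6x the input rate.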