Gemini 3.1 Pro Preview pricing
Gemini 3.1 Pro Preview combines a large context window with multiple pricing modes, so the details matter if you expect long prompts or batch traffic.
Useful when you are comparing large-context workloads and need a quick read on standard versus batch economics.
How teams usually use this model
Best fit for
- long-document analysis
- batch-heavy pipelines
- large-context assistants
Decision guide
Use Gemini 3.1 Pro Preview when long-context work is part of the product and you need to judge whether batch savings offset the higher-capability tier.
Go deeper without leaving the site
Turn a rough token estimate into per-call and monthly spend for this model choice.
Scan the broader pricing table when you want this model in a wider provider context.
Review provider-level trade-offs before you choose the default model for a feature or plan.
Current tier breakdown
Use this table when the provider offers multiple service or context tiers and you need something more precise than one headline number.
| Service tier | Context tier | Input / 1M | Cached / 1M | Output / 1M |
|---|---|---|---|---|
| Standard | Short | $2.00 | $0.20 | $12.00 |
| Standard | Long | $4.00 | $0.40 | $18.00 |
| Batch | Short | $1.00 | $0.20 | $6.00 |
| Batch | Long | $2.00 | $0.40 | $9.00 |
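The tier table translates into straightforward per-call arithmetic. The sketch below is a hypothetical helper (the `RATES` keys and function names are assumptions, not a provider API) that applies the table's USD-per-1M-token prices to one call, so you can compare standard versus batch directly:

```python
# Hypothetical per-call cost helper based on the tier table above.
# Values are the table's USD prices per 1M tokens; key names are assumptions.
RATES = {
    ("standard", "short"): {"input": 2.00, "cached": 0.20, "output": 12.00},
    ("standard", "long"):  {"input": 4.00, "cached": 0.40, "output": 18.00},
    ("batch",    "short"): {"input": 1.00, "cached": 0.20, "output": 6.00},
    ("batch",    "long"):  {"input": 2.00, "cached": 0.40, "output": 9.00},
}

def call_cost(service, context, input_tokens, output_tokens, cached_tokens=0):
    """USD cost of one call: uncached input + cached input + output."""
    r = RATES[(service, context)]
    uncached = max(input_tokens - cached_tokens, 0)
    return (uncached * r["input"]
            + cached_tokens * r["cached"]
            + output_tokens * r["output"]) / 1_000_000

# Example: 200k-token prompt, 2k-token answer, short-context tier.
standard = call_cost("standard", "short", 200_000, 2_000)  # $0.424
batch = call_cost("batch", "short", 200_000, 2_000)        # $0.212
```

At these listed rates, batch halves both input and output cost in the short-context tier, so a fully batchable workload pays exactly 50% of the standard price for the same call.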
Official reference
Tool Canopy keeps this page tied to the provider source so you can check the original pricing context when an update matters.
Open provider source (2026-03-16)
This page reflects the latest repo dataset snapshot for Google Gemini / Gemini 3.1 Pro Preview.
Nearby pricing options to compare next
Use these adjacent pages when you need a faster alternative, a stronger model, or a better cost baseline before locking in a production default.
Gemini 3.1 Pro Preview API pricing
Review Gemini 3.1 Pro Preview API pricing, large-context tiers, and batch costs in one place.
Gemini 3.1 Flash-Lite Preview pricing
Review Gemini 3.1 Flash-Lite Preview token pricing and batch options for lower-cost traffic.
OpenAI GPT-5.4 Pro pricing
See GPT-5.4 Pro token pricing, context limits, and tier details for higher-end workloads.
Latest change signals
Input price: $0 → $2 per 1M tokens
Added new model to tracked pricing dataset.
Common questions about this pricing page
How much does this model cost right now?
Gemini 3.1 Pro Preview is currently listed at $2.00 input, $12.00 output, and $0.20 cached input per 1M tokens, with a 1,048,576-token context window. If the provider offers multiple tiers, use the tier table above for the more exact split.
When is this model usually the better fit?
Use Gemini 3.1 Pro Preview when long-context work is part of the product and you need to judge whether batch savings offset the higher-capability tier. Teams usually open this page for long-document analysis, batch-heavy pipelines, and large-context assistants, and then decide whether to move up to a stronger option or down to a cheaper default.
What should I check before I ship with this model?
Before shipping, check the tier split, whether cached-input pricing changes your workload math, and how this model looks on the compare page and pricing tracker. If you still need context, compare it against Gemini 3.1 Pro Preview API pricing and Gemini 3.1 Flash-Lite Preview pricing and then run the token calculator for a monthly estimate.
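To see whether cached-input pricing changes your workload math, you can fold a cache-hit rate into a rough monthly estimate. This is a hypothetical sketch using the standard short-context rates listed on this page ($2.00 input, $0.20 cached, $12.00 output per 1M tokens); the function name and parameters are illustrative, not part of any provider SDK:

```python
# Hypothetical monthly-spend estimate at this page's standard short-context
# rates (USD per 1M tokens). Not a provider API; all names are illustrative.
INPUT_PER_M, CACHED_PER_M, OUTPUT_PER_M = 2.00, 0.20, 12.00

def monthly_spend(calls_per_day, input_tokens, output_tokens,
                  cache_hit_rate=0.0, days=30):
    """Rough monthly USD spend; cache_hit_rate is the share of input
    tokens billed at the cached rate instead of the full input rate."""
    cached = input_tokens * cache_hit_rate
    uncached = input_tokens - cached
    per_call = (uncached * INPUT_PER_M
                + cached * CACHED_PER_M
                + output_tokens * OUTPUT_PER_M) / 1_000_000
    return per_call * calls_per_day * days

# Example: 1,000 calls/day, 50k tokens in, 1k out, 60% of input cached.
estimate = monthly_spend(1_000, 50_000, 1_000, cache_hit_rate=0.6)  # $1,740
```

In this example, raising the cache-hit rate from 0% to 60% cuts the input portion of the bill by over half, which is the kind of shift worth checking before you pick a default tier.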