Model pricing page

Gemini 3.1 Pro Preview pricing

Gemini 3.1 Pro Preview combines a large context window with multiple pricing modes, so the details matter if you expect long prompts or batch traffic.

Why this page

Useful when you are comparing large-context workloads and need a quick read on standard versus batch economics.

Input / 1M: $2.00
Output / 1M: $12.00
Cached input / 1M: $0.20
Context window: 1,048,576 tokens
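As a quick sanity check, the headline rates above translate into per-request cost like this (a minimal sketch; the rates are the ones listed on this page in USD per 1M tokens, and the example token counts are made-up assumptions):

```python
# Headline rates for Gemini 3.1 Pro Preview (USD per 1M tokens), from this page.
INPUT_PER_M = 2.00
CACHED_INPUT_PER_M = 0.20
OUTPUT_PER_M = 12.00

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Cost of one request; cached input tokens bill at the cached-input rate."""
    fresh_tokens = input_tokens - cached_tokens
    return (fresh_tokens * INPUT_PER_M
            + cached_tokens * CACHED_INPUT_PER_M
            + output_tokens * OUTPUT_PER_M) / 1_000_000

# A 100k-token prompt (80k of it served from cache) with a 2k-token reply:
print(round(request_cost(100_000, 2_000, cached_tokens=80_000), 4))  # 0.08
```

Note how heavily the cached-input rate matters here: without caching, the same request would cost $0.224 instead of $0.08.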
Planning context

How teams usually use this model

Best fit for

  • long-document analysis
  • batch-heavy pipelines
  • large-context assistants

Decision guide

Use Gemini 3.1 Pro Preview when long-context work is part of the product and you need to judge whether batch savings offset the higher-capability tier.

Tier view

Current tier breakdown

Use this table when the provider offers multiple service or context tiers and you need something more precise than one headline number.

Service tier | Context tier | Input / 1M | Cached / 1M | Output / 1M
Standard     | Short        | $2.00      | $0.20       | $12.00
Standard     | Long         | $4.00      | $0.40       | $18.00
Batch        | Short        | $1.00      | $0.20       | $6.00
Batch        | Long         | $2.00      | $0.40       | $9.00
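To make the tier split concrete, the same breakdown can be expressed as a lookup and used to price one workload both ways (a minimal sketch; rates are copied from the table, while the key names and the example job size are illustrative assumptions):

```python
# (service tier, context tier) -> (input, cached, output), USD per 1M tokens,
# copied from the tier table on this page.
TIERS = {
    ("standard", "short"): (2.00, 0.20, 12.00),
    ("standard", "long"):  (4.00, 0.40, 18.00),
    ("batch",    "short"): (1.00, 0.20, 6.00),
    ("batch",    "long"):  (2.00, 0.40, 9.00),
}

def tier_cost(service, context, input_tokens, output_tokens, cached_tokens=0):
    """Cost of one request under a given service/context tier."""
    inp, cached, out = TIERS[(service, context)]
    fresh_tokens = input_tokens - cached_tokens
    return (fresh_tokens * inp
            + cached_tokens * cached
            + output_tokens * out) / 1_000_000

# The same long-context job priced both ways: 500k fresh input, 10k output.
standard = tier_cost("standard", "long", 500_000, 10_000)  # 2.18
batch = tier_cost("batch", "long", 500_000, 10_000)        # 1.09
print(f"batch saves {1 - batch / standard:.0%}")
```

For this example the batch tier halves the bill, which is the kind of spread the decision guide above asks you to weigh against latency requirements.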
Source

Official reference

Tool Canopy keeps this page tied to the provider source so you can check the original pricing context when an update matters.

Last updated

2026-03-16

This page reflects the latest repo dataset snapshot for Google Gemini / Gemini 3.1 Pro Preview.

Comparison context

Nearby pricing options to compare next

Use these adjacent pages when you need a faster alternative, a stronger model, or a better cost baseline before locking in a production default.

Recent notes

Latest change signals

2026-03-16

Input price: $0 -> $2 per 1M tokens

2026-03-16

Added new model to tracked pricing dataset.

FAQ

Common questions about this pricing page

How much does this model cost right now?

Gemini 3.1 Pro Preview is currently listed at $2.00 input, $12.00 output, and $0.20 cached input per 1M tokens, with a 1,048,576-token context window. If the provider offers multiple tiers, use the tier table above for the more exact split.

When is this model usually the better fit?

Use Gemini 3.1 Pro Preview when long-context work is part of the product and you need to judge whether batch savings offset the higher-capability tier. Teams usually open this page for long-document analysis, batch-heavy pipelines, or large-context assistants, and then decide whether to move up to a stronger option or down to a cheaper default.

What should I check before I ship with this model?

Before shipping, check the tier split, whether cached-input pricing changes your workload math, and how this model looks on the compare page and pricing tracker. If you still need context, compare it against Gemini 3.1 Pro Preview API pricing and Gemini 3.1 Flash-Lite Preview pricing and then run the token calculator for a monthly estimate.
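The token-calculator step can be approximated offline. Here is a minimal sketch of a monthly estimate using the headline standard-tier rates from this page (the traffic numbers are made-up assumptions, not a recommendation):

```python
# Headline rates (USD per 1M tokens) from this page, standard tier, short context.
INPUT_PER_M = 2.00
OUTPUT_PER_M = 12.00

def monthly_estimate(requests_per_day, avg_input_tokens, avg_output_tokens, days=30):
    """Rough monthly spend, ignoring caching and tier boundaries."""
    daily = requests_per_day * (avg_input_tokens * INPUT_PER_M
                                + avg_output_tokens * OUTPUT_PER_M) / 1_000_000
    return daily * days

# e.g. 1,000 requests/day, 20k-token prompts, 1k-token replies:
print(round(monthly_estimate(1_000, 20_000, 1_000), 2))  # 1560.0
```

Treat this as a floor: long-context requests bill at the higher long-tier rates in the table above, while caching and batch traffic pull the number down.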