Model pricing page

Anthropic Haiku 4.5 pricing

Haiku 4.5 is the leaner Anthropic option in the current dataset and is often the first candidate for high-volume features that still need a modern model.

Why this page

Useful when throughput matters more than premium reasoning and you want a cleaner per-call cost baseline.

Input / 1M: $1.00
Output / 1M: $5.00
Cached input: N/A
Context window: 200,000 tokens
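The rates above translate directly into per-call cost. A minimal sketch, using the listed Haiku 4.5 rates (the `request_cost` helper and the example token counts are illustrative, not part of any provider SDK):

```python
# Listed Haiku 4.5 rates from this page, in dollars per 1M tokens.
INPUT_PER_M = 1.00
OUTPUT_PER_M = 5.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single call at the listed per-1M-token rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token reply.
print(round(request_cost(2_000, 500), 6))  # → 0.0045
```

Note that output tokens dominate quickly at a 5:1 output/input rate ratio, so average response length matters more than prompt length for cost planning.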
Planning context

How teams usually use this model

Best fit for

  • high-volume endpoints
  • internal automation
  • cost-first defaults

Decision guide

Use Haiku 4.5 when the main goal is keeping response cost and latency under control while staying inside the Anthropic stack.

Tier view

Current tier breakdown

Use this table when the provider offers multiple service or context tiers and you need something more precise than one headline number.

Service tier | Context tier | Input / 1M | Cached / 1M | Output / 1M
Standard     | Short        | $1.00      | N/A         | $5.00
Source

Official reference

Tool Canopy keeps this page tied to the provider source so you can check the original pricing context when an update matters.

Last updated

2026-03-16

This page reflects the latest repo dataset snapshot for Anthropic / Haiku 4.5.

Comparison context

Nearby pricing options to compare next

Use these adjacent pages when you need a faster alternative, a stronger model, or a better cost baseline before locking in a production default.

Recent notes

Latest change signals

2026-03-16

Detailed pricing tiers changed.

2026-03-16

Added new model to tracked pricing dataset.

FAQ

Common questions about this pricing page

How much does this model cost right now?

Haiku 4.5 is currently listed at $1.00 per 1M input tokens and $5.00 per 1M output tokens, with no cached-input rate listed and a 200,000-token context window. If the provider offers multiple service or context tiers, use the tier table above for the more exact split.

When is this model usually the better fit?

Use Haiku 4.5 when the main goal is keeping response cost and latency under control while staying inside the Anthropic stack. Teams usually open this page for high-volume endpoints, internal automation, and cost-first defaults, and then decide whether to move up to a stronger option or down to a cheaper default.

What should I check before I ship with this model?

Before shipping, check the tier split, whether cached-input pricing changes your workload math, and how this model looks on the compare page and pricing tracker. If you still need context, compare it against Anthropic Sonnet 4.6 pricing and Anthropic Opus 4.6 pricing and then run the token calculator for a monthly estimate.
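A monthly estimate of the kind the token calculator produces can be sketched from the listed rates alone. This is a rough model assuming steady traffic; the function name, traffic figures, and 30-day month are illustrative assumptions, not provider data:

```python
# Listed Haiku 4.5 rates from this page, in dollars per 1M tokens.
INPUT_PER_M = 1.00
OUTPUT_PER_M = 5.00

def monthly_cost(requests_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 days: int = 30) -> float:
    """Rough monthly spend assuming uniform per-request token counts."""
    total_input = requests_per_day * avg_input_tokens * days
    total_output = requests_per_day * avg_output_tokens * days
    return (total_input * INPUT_PER_M + total_output * OUTPUT_PER_M) / 1_000_000

# Example: 50,000 requests/day, 1,500 input + 300 output tokens each.
print(round(monthly_cost(50_000, 1_500, 300), 2))  # → 4500.0
```

Rerunning the same numbers against a cached-input rate (once one is listed) or against a stronger model's rates is the quickest way to compare defaults before locking one in.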