Anthropic Haiku 4.5 pricing
Haiku 4.5 is the leaner Anthropic option in the current dataset and is often the first candidate for high-volume features that still need a modern model.
Reach for it when throughput matters more than premium reasoning and you want a clean per-call cost baseline.
How teams usually use this model
Best fit for
- high-volume endpoints
- internal automation
- cost-first defaults
Decision guide
Use Haiku 4.5 when the main goal is keeping response cost and latency under control while staying inside the Anthropic stack.
Go deeper without leaving the site
Turn a rough token estimate into per-call and monthly spend for this model choice.
Scan the broader pricing table when you want to see this model in a wider provider context.
Review provider-level trade-offs before you choose the default model for a feature or plan.
Current tier breakdown
Use this table when the provider offers multiple service or context tiers and you need something more precise than one headline number.
| Service tier | Context tier | Input / 1M | Cached / 1M | Output / 1M |
|---|---|---|---|---|
| Standard | Short | $1.00 | N/A | $5.00 |
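The tier rates above translate into spend with simple arithmetic. A minimal sketch, using the Standard / Short rates from the table; the per-call token counts and monthly call volume are hypothetical placeholders, not measurements:

```python
# Rough per-call and monthly cost for Haiku 4.5 (Standard / Short tier).
# Rates come from the tier table; token counts per call and call volume
# are assumed values -- replace them with your own traffic numbers.

INPUT_PER_1M = 1.00   # USD per 1M input tokens (from the table)
OUTPUT_PER_1M = 5.00  # USD per 1M output tokens (from the table)

def per_call_cost(input_tokens: int, output_tokens: int) -> float:
    """Blended USD cost of a single call at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_PER_1M + \
           (output_tokens / 1_000_000) * OUTPUT_PER_1M

# Example: ~2,000 input tokens in, ~500 output tokens back.
call = per_call_cost(2_000, 500)   # 0.0045 USD per call
monthly = call * 100_000           # at 100k calls/month: 450.00 USD
print(f"per call: ${call:.4f}, monthly at 100k calls: ${monthly:.2f}")
```

Output tokens dominate here (5x the input rate), so trimming response length usually moves the bill more than trimming the prompt.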
Official reference
Tool Canopy keeps this page tied to the provider source so you can check the original pricing context when an update matters.
Open provider source (2026-03-16)
This page reflects the latest repo dataset snapshot for Anthropic / Haiku 4.5.
Nearby pricing options to compare next
Use these adjacent pages when you need a faster alternative, a stronger model, or a better cost baseline before locking in a production default.
Anthropic Sonnet 4.6 pricing
Review Sonnet 4.6 token pricing and context limits for cost-sensitive product planning.
Anthropic Opus 4.6 pricing
Review Anthropic Opus 4.6 token pricing and context limits before you commit premium model spend.
Gemini 3.1 Flash-Lite Preview pricing
Review Gemini 3.1 Flash-Lite Preview token pricing and batch options for lower-cost traffic.
Latest change signals
Detailed pricing tiers changed.
Added new model to tracked pricing dataset.
Common questions about this pricing page
How much does this model cost right now?
Haiku 4.5 is currently listed at $1.00 input, $5.00 output, and no cached-input rate (N/A) per 1M tokens, with a 200,000-token context window. If the provider offers multiple tiers, use the tier table above for the more exact split.
When is this model usually the better fit?
Use Haiku 4.5 when the main goal is keeping response cost and latency under control while staying inside the Anthropic stack. Teams usually open this page for high-volume endpoints, internal automation, and cost-first defaults, then decide whether to move up to a stronger option or down to a cheaper default.
What should I check before I ship with this model?
Before shipping, check the tier split, whether cached-input pricing changes your workload math, and how this model looks on the compare page and pricing tracker. If you still need context, compare it against Anthropic Sonnet 4.6 and Opus 4.6 pricing, then run the token calculator for a monthly estimate.
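A monthly estimate like the one the token calculator produces can be sketched directly from the rates on this page. The average tokens per call below are assumptions for illustration, not measured values:

```python
# Monthly spend at a few hypothetical traffic levels, using the Haiku 4.5
# rates on this page ($1.00 input / $5.00 output per 1M tokens). AVG_IN and
# AVG_OUT are assumed per-call averages -- swap in your own workload numbers.

INPUT_PER_1M = 1.00
OUTPUT_PER_1M = 5.00
AVG_IN, AVG_OUT = 1_500, 400  # assumed tokens per call

def monthly_spend(calls_per_month: int) -> float:
    """Projected USD spend per month at the assumed per-call averages."""
    per_call = (AVG_IN / 1e6) * INPUT_PER_1M + (AVG_OUT / 1e6) * OUTPUT_PER_1M
    return calls_per_month * per_call

for calls in (10_000, 100_000, 1_000_000):
    print(f"{calls:>9,} calls/month -> ${monthly_spend(calls):,.2f}")
```

Because spend scales linearly with call volume, one accurate per-call average is enough to project any traffic tier.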