Last updated: 16/04/2025

Mistral Official Docs

Codestral Mamba

codestral-mamba-2407

active

Codestral Mamba

Our first open-source model built on the Mamba2 architecture, Codestral Mamba is a cutting-edge language model specialized for low-latency, high-frequency coding tasks. With a large 262,144-token context window, Codestral Mamba accepts text input and produces text output, making it well suited to code generation and completion over long contexts.

The model supports fine-tuning for custom applications, tool use for automation, and structured output generation (see the sketch below and the tool-use sketch under Additional Model Information).
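As a hedged illustration of structured output, the sketch below uses the mistralai Python client's JSON mode; the client version (v1.x), prompt, and field names are assumptions for illustration, not part of this model card.

```python
import os
from mistralai import Mistral

# Assumes the mistralai v1.x Python client and MISTRAL_API_KEY in the environment.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Ask for JSON output; response_format enables the API's JSON mode.
response = client.chat.complete(
    model="open-codestral-mamba",
    messages=[
        {
            "role": "user",
            "content": "Summarize this function as JSON with keys 'name' and 'purpose': "
                       "def add(a, b): return a + b",
        }
    ],
    response_format={"type": "json_object"},
)

print(response.choices[0].message.content)
```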

Additional Information

Notes

Maximum context length of 262,144 tokens. Available via the API as 'open-codestral-mamba' or 'codestral-mamba-latest'. Released under the Apache 2.0 license.
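A minimal sketch of calling the model through the Mistral API using the identifiers above, assuming the mistralai v1.x Python client and an API key in the MISTRAL_API_KEY environment variable:

```python
import os
from mistralai import Mistral

# Assumes MISTRAL_API_KEY is set in the environment.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Either alias from the notes above should resolve to codestral-mamba-2407.
response = client.chat.complete(
    model="open-codestral-mamba",
    messages=[
        {"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}
    ],
)

print(response.choices[0].message.content)
```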

Model Timeline

Launch Date

7/1/2024

Capabilities

Text

Input Pricing

$0.50 / MTok

Context: 262,144 tokens

Output Pricing

$1.50 / MTok

Max tokens: 32,768
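To make the per-million-token rates concrete, a small worked example; the token counts are illustrative assumptions, not measured usage.

```python
# Illustrative cost calculation at the listed rates.
INPUT_PRICE_PER_MTOK = 0.50   # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 1.50  # USD per million output tokens

# Hypothetical request: 200,000 prompt tokens, 4,000 completion tokens.
input_tokens = 200_000
output_tokens = 4_000

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK \
     + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK
print(f"${cost:.4f}")  # $0.1060
```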

Embeddings

Embeddings Pricing

$0.0001/1k tokens

Additional Model Information

Tool Use

Yes

Structured Output

Yes

Reasoning

Yes
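Tool use is sketched below, assuming the mistralai Python client's function-calling interface and that tool calling is available for this model as stated above; the tool name and schema are hypothetical.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Hypothetical tool definition passed via the chat API's `tools` parameter.
tools = [
    {
        "type": "function",
        "function": {
            "name": "run_tests",
            "description": "Run the project's test suite and return the results.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Test file or directory to run."}
                },
                "required": ["path"],
            },
        },
    }
]

response = client.chat.complete(
    model="open-codestral-mamba",
    messages=[{"role": "user", "content": "Run the tests in tests/test_parser.py."}],
    tools=tools,
    tool_choice="auto",
)

# If the model decided to call the tool, the call appears on the returned message.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    print(tool_calls[0].function.name, tool_calls[0].function.arguments)
```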
