Last updated: 16/04/2025

Mistral Official Docs

Mistral Tiny

Model ID: mistral-tiny-latest

Status: active

A small but powerful edge model with a 131,072-token context window, making it suitable for efficient deployment in resource-constrained environments.

Supports a 131,072-token (131K) context window and handles text inputs and outputs. Supports fine-tuning for custom applications, tool use (function calling) for automation, and generation of structured output formats.
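
The capabilities above translate into an ordinary chat completions call. The sketch below is a minimal example, assuming the public REST endpoint https://api.mistral.ai/v1/chat/completions and an API key in a MISTRAL_API_KEY environment variable; the prompt and max_tokens value are illustrative only.

```python
# Minimal sketch of a chat completion against mistral-tiny-latest.
# Endpoint path, header format, and MISTRAL_API_KEY are assumptions based on
# Mistral's public API conventions, not details taken from this page.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-tiny-latest",  # alias listed on this page
        "messages": [
            {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
        ],
        "max_tokens": 512,  # well within the 131,072-token context window
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```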

Additional Information

Notes

Also known as Open Mistral Nemo. Aliases: open-mistral-nemo, open-mistral-nemo-2407, mistral-tiny-2407. Features a 131,072-token context length.

Model Timeline

Launch Date

7/1/2024

Last Updated

7/1/2024

Capabilities

Text

Input Pricing

Not listed (per 1K tokens)

Context: 131,072 tokens

Output Pricing

Not listed (per 1K tokens)

Max tokens: 131,072

Embeddings

Embeddings Pricing

$0.10/1k tokens
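
As a quick sanity check on the listed rate, here is a back-of-the-envelope cost sketch; the 250,000-token corpus size is a made-up figure, not taken from this page.

```python
# Cost estimate at the listed embeddings rate of $0.10 per 1k tokens.
PRICE_PER_1K_TOKENS = 0.10
tokens = 250_000  # hypothetical corpus size
cost = tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"Embedding {tokens:,} tokens costs about ${cost:.2f}")  # ~$25.00
```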

Additional Model Information

Tool Use

Yes (see the sketch after this section)

Structured Output

Yes

Reasoning

Yes
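
The Tool Use and Structured Output rows above indicate support for function calling and constrained output. The sketch below shows a hedged tool-use request against the same chat completions endpoint assumed earlier; the get_weather tool and its schema are purely illustrative.

```python
# Hedged sketch of tool use (function calling) with mistral-tiny-latest.
# Only the "Tool Use: Yes" flag comes from this page; the tool definition,
# endpoint, and response handling are assumptions to verify against the API docs.
import os
import requests

payload = {
    "model": "mistral-tiny-latest",
    "messages": [{"role": "user", "content": "What's the weather in Paris right now?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
# If the model elects to call the tool, the call appears under tool_calls
# instead of plain text content.
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```

For structured output, the same request can reportedly carry a response_format field (for example {"type": "json_object"}); treat the exact field name and values as assumptions to confirm against the current API reference.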
