meta-llama/llama-4-maverick-17b-128e-instruct
Llama-4-Maverick is a mixture-of-experts model with 17B active parameters across 128 experts, designed for a wide range of instruction-following tasks. With a 131,072-token context window and the ability to generate up to 8,192 tokens in a single completion, it is well suited to complex, multi-step prompts.
Supports a 131,072-token context window. Handles text, image, video, audio, transcription, and text-to-speech inputs and outputs, and supports fine-tuning for custom applications.
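For a sense of how these limits are used in practice, here is a minimal sketch of a chat completion call against an OpenAI-compatible endpoint. The base URL, API-key environment variable, and prompt are placeholders for illustration; only the model ID and the 8,192-token completion cap come from the listing above.

```python
# Minimal sketch: calling llama-4-maverick through an OpenAI-compatible API.
# The base_url and API-key environment variable are hypothetical placeholders --
# substitute your provider's actual values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/openai/v1",  # hypothetical endpoint
    api_key=os.environ["PROVIDER_API_KEY"],        # hypothetical env var
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick-17b-128e-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize this multi-step plan in three bullet points."},
    ],
    max_tokens=8192,  # the model's stated single-completion limit
)

print(response.choices[0].message.content)
```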
Pricing
- Input tokens: $0.20 / MTok
- Output tokens: $0.60 / MTok
- Image input: 6,400 tokens per image
- $0.05 / 1k characters
- $0.18 / 1k tokens
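As a rough back-of-the-envelope estimate, assuming the two per-MTok rates above are the input and output token prices, the cost of a single large request can be computed as below. The token counts are hypothetical, not from the listing.

```python
# Rough cost estimate using the per-MTok rates listed above.
INPUT_PRICE_PER_MTOK = 0.20   # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 0.60  # USD per million output tokens

input_tokens = 100_000   # e.g., a long multi-file prompt (hypothetical)
output_tokens = 8_192    # the model's maximum single completion

cost = (input_tokens * INPUT_PRICE_PER_MTOK
        + output_tokens * OUTPUT_PRICE_PER_MTOK) / 1_000_000
print(f"Estimated cost: ${cost:.4f}")  # -> Estimated cost: $0.0249
```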