open-mixtral-8x22b
Mistral AI's open-source Mixtral 8x22B is a sparse mixture-of-experts text model with a 65,536-token context window, suited to a broad range of text tasks such as generation, summarization, translation, and code. The model is deprecated and will be retired on March 30, 2025; the recommended replacement is mistral-small-latest.
Supports a 65,536-token context window, text input and output, and fine-tuning for custom applications.
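Because the retirement only removes the model identifier, migrating an existing integration usually means swapping the model string. A minimal sketch, assuming the mistralai Python SDK (v1) and a MISTRAL_API_KEY environment variable; the prompt is a placeholder:

import os

from mistralai import Mistral

# Assumes the v1 `mistralai` SDK and an API key in the environment.
client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# open-mixtral-8x22b is retired on 3/30/2025; pointing the request at the
# recommended mistral-small-latest is the whole migration.
response = client.chat.complete(
    model="mistral-small-latest",  # previously "open-mixtral-8x22b"
    messages=[{"role": "user", "content": "Summarize this repository in one line."}],
)
print(response.choices[0].message.content)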
Release date: 4/1/2024
Deprecation date: 11/25/2024
Legacy date: 11/30/2024
Retirement date: 3/30/2025
Input price: $- / 1K tokens
Output price: $- / 1K tokens
Price: $0.0001 / 1K tokens
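At the listed $0.0001 per 1K tokens (equivalently $0.10 per million), estimating a request's cost is a single multiplication. A quick sketch, assuming the listed rate applies uniformly to all tokens (real input and output rates may differ):

# Rough cost estimate at the listed rate of $0.0001 per 1K tokens.
RATE_PER_1K_TOKENS = 0.0001

def estimate_cost(total_tokens: int) -> float:
    """Return the dollar cost for a request totaling `total_tokens` tokens."""
    return total_tokens / 1_000 * RATE_PER_1K_TOKENS

# Example: a request filling the full 65,536-token context window.
print(f"${estimate_cost(65_536):.4f}")  # -> $0.0066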