Snaply uses Large Language Models (LLMs) to understand and transform your text. Different tasks require different capabilities: fixing a typo is simple, but summarizing a technical document requires deep understanding. Snaply lets you choose the right “brain” for the job, balancing speed, intelligence, and system resources.

Local Models

Local models run entirely on your Mac. They are private, free, and work offline.
  • Fast: The speed demon. Best for grammar checks and quick rewrites where instant results matter more than creative nuance.
  • Everyday Use: The balanced choice. Smart enough for most instructions and summaries, but still lightweight enough to run smoothly.
  • Thinking: A “reasoning” model that pauses to think before answering. Perfect for complex logic, math, or difficult rewrites where accuracy is paramount.
  • Pro: The heavyweight. Professional-grade intelligence that can handle anything. Requires a powerful Mac with ample memory.

Cloud Models

For tasks that demand the highest intelligence (GPT-4-level reasoning, for example), or when you want to save battery life, you can use Cloud Models. Snaply supports Bring Your Own API Key for major providers, including Google Gemini, OpenAI, Anthropic, and xAI. These models run on powerful servers but require an internet connection.