Getting Started with Promptster: Compare AI Models in Minutes
By Promptster Team · 2026-03-04
Choosing the right AI model for your use case can feel overwhelming. With dozens of models across multiple providers — each with different strengths, pricing, and performance characteristics — how do you decide? That's exactly why we built Promptster.
What is Promptster?
Promptster lets you send the same prompt to multiple AI providers simultaneously and compare the results side by side. You can evaluate responses on quality, speed, cost, and token usage — all in real time.
Setting Up Your First Test
Getting started takes less than 5 minutes:
1. Add Your API Keys
Navigate to Provider Keys in the sidebar and add API keys for the providers you want to test. Your keys are encrypted with AES-256 before storage — they never leave your device in plaintext.
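To picture what "encrypted before storage" means, here is a minimal sketch of client-side AES-256-GCM encryption using Python's `cryptography` package. This is illustrative only, not Promptster's actual implementation; the function names are invented for the example.

```python
# Illustrative sketch of encrypting an API key with AES-256-GCM before storage.
# NOT Promptster's actual code -- just the general technique described above.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(api_key: str, master_key: bytes) -> bytes:
    """Encrypt the key with AES-256-GCM; prepend the 12-byte nonce to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + AESGCM(master_key).encrypt(nonce, api_key.encode(), None)

def decrypt_api_key(blob: bytes, master_key: bytes) -> str:
    """Split off the nonce and decrypt; raises if the ciphertext was tampered with."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None).decode()

master_key = AESGCM.generate_key(bit_length=256)  # 32-byte key = AES-256
blob = encrypt_api_key("sk-example-not-a-real-key", master_key)
assert decrypt_api_key(blob, master_key) == "sk-example-not-a-real-key"
```

Only the encrypted blob is stored; the plaintext key exists only in memory during encryption and when a request is sent to the provider.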
Currently supported providers include OpenAI, Anthropic, Google AI, DeepSeek, xAI, Groq, Mistral, Perplexity, Together AI, Cerebras, and Fireworks AI.
2. Choose Your Providers
Head to Run Tests and select the providers you want to compare. You can compare up to 11 providers at once on paid plans, or two at a time on the free tier.
3. Write Your Prompt
Enter your prompt in the shared prompt field. For a fair comparison, every provider receives the exact same prompt with the same parameters.
For example:

Explain the concept of recursion to a beginner programmer. Use a real-world analogy and include a simple code example in Python.
4. Analyze Results
After submitting, you'll see results from each provider with:
- Response quality — AI-powered evaluation scoring across relevance, accuracy, completeness, and clarity
- Response time — How fast each provider returned a complete response
- Cost — Real-time cost calculation based on each provider's per-token pricing
- Token usage — Input and output token counts
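The cost figure is straightforward to reason about yourself: token counts multiplied by per-token prices. The sketch below uses hypothetical prices, not any provider's real rates.

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Return cost in USD given token counts and per-million-token prices."""
    return (input_tokens * input_price_per_m
            + output_tokens * output_price_per_m) / 1_000_000

# Hypothetical pricing: $3.00 per 1M input tokens, $15.00 per 1M output tokens.
cost = estimate_cost(1_200, 800, 3.00, 15.00)
print(f"${cost:.4f}")  # → $0.0156
```

Output tokens usually dominate the bill, which is why two providers with similar input prices can produce very different totals for verbose responses.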
Advanced Features
Consensus Analysis
After running a comparison, click Consensus Report to generate an AI-powered synthesis of all responses. The report identifies areas of agreement and disagreement and provides a ranked summary of the responses.
Scheduled Tests
Set up recurring tests to monitor model performance over time. Promptster will alert you if response quality degrades or latency increases beyond your configured thresholds.
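The alerting logic can be pictured as a simple threshold check. The field names and score scale below are illustrative assumptions, not Promptster's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Thresholds:
    min_quality: float    # minimum acceptable evaluation score (assumed 0-10 scale)
    max_latency_ms: int   # maximum acceptable response time

def check_run(quality: float, latency_ms: int, t: Thresholds) -> list[str]:
    """Return an alert message for each threshold the run violates."""
    alerts = []
    if quality < t.min_quality:
        alerts.append(f"quality {quality} below threshold {t.min_quality}")
    if latency_ms > t.max_latency_ms:
        alerts.append(f"latency {latency_ms}ms above threshold {t.max_latency_ms}ms")
    return alerts

# A run that breaches both thresholds produces two alerts.
print(check_run(6.2, 4200, Thresholds(min_quality=7.0, max_latency_ms=3000)))
```

A run that stays inside both limits returns an empty list, i.e. no alert is sent.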
Public API
Integrate prompt testing into your CI/CD pipeline with our REST API. Run regression tests on every pull request to catch quality regressions before they ship.
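A CI step might assemble a request like the one below. The endpoint path, field names, and header are assumptions for illustration; consult the API documentation for the real schema.

```python
import json

# Hypothetical request body for a Promptster comparison run in CI.
# Field names are assumed, not taken from the actual API schema.
payload = {
    "prompt": "Explain recursion to a beginner programmer.",
    "providers": ["openai", "anthropic", "google"],
    "auto_score": True,
}
body = json.dumps(payload)

headers = {
    "Authorization": "Bearer $PROMPTSTER_API_KEY",  # placeholder, not a real key
    "Content-Type": "application/json",
}

# In a pipeline, this body would be POSTed to a comparisons endpoint and the
# job failed if any provider's quality score regresses past a chosen floor.
```

Running this on every pull request turns prompt changes into reviewable, gated artifacts like any other code change.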
Tips for Effective Comparisons
- Be specific with your prompts — Vague prompts produce vague results. The more specific your prompt, the easier it is to evaluate quality differences.
- Test multiple categories — A model that excels at code generation might underperform at creative writing. Test across your actual use cases.
- Use evaluation scoring — Enable auto-score in Advanced Settings to automatically evaluate response quality after each comparison.
- Save and version your tests — Save important comparisons and use "Save as New Version" to track how results change when you refine your prompts.
What's Next?
- Explore the API documentation for programmatic access
- Set up scheduled tests for continuous monitoring
- Try the sandbox mode to run 3 free tests without API keys
Happy testing!