How to Use MCP Server with Cursor AI and Promptster
By Promptster Team · 2026-04-03
If you're using Cursor AI as your primary editor, you already know how powerful it is to have an AI assistant embedded in your development workflow. But there's a gap most developers don't think about: when Cursor generates a prompt for an API call, a system message, or any AI-powered feature, how do you know it's actually the best prompt? How do you know another model wouldn't handle it better, faster, or cheaper?
That's where Promptster's MCP server comes in. You can connect it directly to Cursor and test prompts against multiple AI providers without leaving your editor.
What is MCP?
MCP (Model Context Protocol) is an open standard that lets AI assistants connect to external tools and services. Instead of being limited to what's built into your editor, MCP lets your AI assistant call out to specialized servers for additional capabilities.
Promptster's MCP server exposes 19 tools that let you test prompts, compare models, analyze results, manage saved tests, and more -- all accessible through any MCP-compatible client like Cursor, Claude Code, or Windsurf.
Setting Up Promptster's MCP Server in Cursor
Step 1: Get Your API Key
First, you'll need a Promptster API key. Log into Promptster, navigate to Developer > API Keys in the sidebar, and create a new key. It will start with pk_live_.
Step 2: Configure Cursor
Cursor supports MCP servers natively through its settings. Open your Cursor settings and add the following configuration to your MCP servers:
```json
{
  "mcpServers": {
    "promptster": {
      "url": "https://www.promptster.dev/mcp",
      "headers": {
        "Authorization": "Bearer pk_live_your_key_here"
      }
    }
  }
}
```
Replace pk_live_your_key_here with your actual API key.
Step 3: Verify the Connection
After saving the configuration, restart Cursor. You should see Promptster listed as an available MCP server. You can verify by asking Cursor's AI to "list available Promptster tools" -- it should enumerate the 19 tools exposed by the server.
The Development Workflow
Here's how Promptster's MCP integration fits into a real development workflow.
Writing a New AI Feature
Say you're building a customer-facing chatbot and you've written a system prompt. Instead of deploying and hoping for the best, you test it right from Cursor:
- Write your system prompt in your codebase
- Ask Cursor to test it: "Use Promptster to test this system prompt with a sample user query across OpenAI, Anthropic, and Google"
- Review the comparison directly in your editor -- see response quality, latency, and cost for each provider
- Iterate on the prompt based on the results, then test again
The entire loop happens without switching windows. You write code, test prompts, compare results, and refine -- all inside Cursor.
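When Cursor runs the comparison step, it issues an MCP `tools/call` request naming the `compare_prompts` tool. The sketch below shows what that request could look like; the argument names (`system_prompt`, `user_message`, `providers`) are hypothetical, so check the Promptster MCP docs for the real schema:

```python
import json

def make_tool_call(tool: str, arguments: dict, request_id: int = 2) -> dict:
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 "tools/call" request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical argument schema -- illustrative only.
request = make_tool_call("compare_prompts", {
    "system_prompt": "You are a helpful support agent for Acme Co.",
    "user_message": "How do I reset my password?",
    "providers": ["openai", "anthropic", "google"],
})
print(json.dumps(request, indent=2))
```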
Debugging a Prompt Regression
Your AI feature was working fine last week, but users are reporting lower-quality responses. You suspect a prompt change caused it.
- Ask Cursor: "Use Promptster to compare my current system prompt against the version from last week"
- Run both versions across your target providers
- Use evaluation scoring to quantify the quality difference
- Identify which change caused the regression and fix it
Choosing a Provider for a New Feature
You're adding AI-powered summarization and need to pick a provider. Instead of reading benchmark blogs, you test with your actual content:
- Grab a representative sample of the text you'll be summarizing
- Ask Cursor: "Use Promptster to compare summarization quality across all available providers for this text"
- Review quality scores, response times, and costs
- Make a data-driven provider choice based on your specific use case
Key MCP Tools Available
Here's a subset of the 19 tools you can use through the MCP integration:
| Tool | What It Does |
|---|---|
| `test_prompt` | Send a prompt to any supported provider and get the response with metadata |
| `compare_prompts` | Test the same prompt across multiple providers simultaneously |
| `score_responses` | Run evaluation scoring on responses (relevance, accuracy, completeness, clarity) |
| `get_history` | Retrieve your recent test history |
| `list_saved_tests` | Browse your saved test library |
| `get_saved_test` | Load a specific saved test with full results |
| `export_data` | Export test data in JSON or CSV format |
The full tool list includes schedule management, notification handling, credit tracking, and more. You can explore all 19 tools in the MCP integration docs.
Tips for Getting the Most Out of This Setup
Create a testing routine. Before committing any prompt change, run a quick comparison across your target providers. This takes seconds and prevents regressions.
Use saved tests as baselines. Save your best-performing prompt/provider combinations. When you iterate, compare new versions against your saved baseline to ensure you're actually improving things.
Leverage evaluation scoring. Don't just eyeball the responses. The evaluation tool scores responses on four dimensions (relevance, accuracy, completeness, clarity), giving you a consistent metric to track across iterations.
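One simple way to track those four dimensions across iterations is to collapse them into a single number and compare it against your saved baseline. The equal weighting below is an assumption for illustration, not Promptster's actual formula:

```python
DIMENSIONS = ("relevance", "accuracy", "completeness", "clarity")

def overall_score(scores: dict[str, float]) -> float:
    """Average the four evaluation dimensions into one tracking metric
    (equal weighting is an assumption, not Promptster's formula)."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Example scores -- in practice these come from score_responses results.
baseline = {"relevance": 0.90, "accuracy": 0.85, "completeness": 0.80, "clarity": 0.88}
candidate = {"relevance": 0.92, "accuracy": 0.86, "completeness": 0.84, "clarity": 0.87}

improved = overall_score(candidate) > overall_score(baseline)
print(f"baseline={overall_score(baseline):.4f} candidate={overall_score(candidate):.4f}")
```

If a candidate prompt's overall score drops below the baseline, you've caught a regression before it ships.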
Combine with Cursor's inline editing. When Promptster's comparison shows that a prompt performs poorly on one provider, ask Cursor to suggest improvements. Then test the improved version immediately -- the feedback loop is instant.
Beyond Cursor
This same MCP integration works with other MCP-compatible tools. If you also use Claude Code or Windsurf, you can set up Promptster's MCP server in those environments too. The tools and workflow are identical -- only the configuration syntax differs.
For detailed setup instructions for each client, check our MCP integration documentation.
Get Started
If you're already using Cursor, adding Promptster's MCP server takes about two minutes. Grab your API key, paste the config, and you'll have multi-model prompt testing built directly into your editor.
Get your API key and start testing -- your prompts will thank you.