Test how AI agents interact with your MCP server.
Request
mcp_id: ID of the MCP server to test
message: Message to send to the AI agent
ai_model: AI model to use for testing. Options: claude-3, gpt-4, gpt-3.5. Default: claude-3
context: Additional context for the AI agent
context.user_id: User ID for resource access
Response
success: Whether the test completed successfully
tools_used: List of tools the AI called
resources_accessed: List of resources the AI accessed
execution_time: Test execution time in seconds
success_rate: Success rate of tool calls (0-1)
curl -X POST https://api.leanmcp.com/v1/test \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "mcp_id": "mcp_abc123def456",
    "message": "Send an email to john@example.com saying hello",
    "ai_model": "claude-3",
    "context": {
      "user_id": "user_123"
    }
  }'
{
  "success": true,
  "data": {
    "ai_response": "I've sent an email to john@example.com with the subject 'Hello' and a friendly greeting message.",
    "tools_used": ["send_email"],
    "resources_accessed": ["user://contacts"],
    "execution_time": 2.3,
    "success_rate": 1.0
  },
  "meta": {
    "request_id": "req_test_789",
    "timestamp": "2023-12-01T12:00:00Z"
  }
}
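If you prefer to script the call rather than use curl, here is a minimal Python sketch of the same request. It assumes the requests package and reads your API key from a LEANMCP_API_KEY environment variable (that variable name is just a convention for this example); the endpoint, headers, and body mirror the curl example above.

import os

import requests

API_KEY = os.environ["LEANMCP_API_KEY"]  # assumed env var holding your API key

# Same request body as the curl example above
payload = {
    "mcp_id": "mcp_abc123def456",
    "message": "Send an email to john@example.com saying hello",
    "ai_model": "claude-3",
    "context": {"user_id": "user_123"},
}

resp = requests.post(
    "https://api.leanmcp.com/v1/test",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,  # sets Content-Type: application/json automatically
    timeout=60,
)
resp.raise_for_status()

data = resp.json()["data"]
print("Tools used:", data["tools_used"])
print("Success rate:", data["success_rate"])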
Testing Best Practices
Start Simple
Test basic tool usage first:
{
  "mcp_id": "your-mcp-id",
  "message": "What tools do you have available?",
  "ai_model": "claude-3"
}
Test Edge Cases
Try confusing or ambiguous requests:
{
  "mcp_id": "your-mcp-id",
  "message": "Do that thing with the data",
  "ai_model": "claude-3"
}
Test Complex Workflows
Chain multiple tool calls together:
{
  "mcp_id": "your-mcp-id",
  "message": "Check my contacts, find John's email, and send him a meeting invite for tomorrow at 2pm",
  "ai_model": "claude-3"
}
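You can run these progressively harder prompts as a small batch and compare the results. The harness below is an illustrative sketch, not part of the LeanMCP API: it assumes the same /v1/test endpoint and response shape shown earlier, and the LEANMCP_API_KEY environment variable is a stand-in for however you store your key.

import os

import requests

API_KEY = os.environ["LEANMCP_API_KEY"]
MCP_ID = "your-mcp-id"

# The three best-practice prompts above, simplest first
TEST_MESSAGES = [
    "What tools do you have available?",
    "Do that thing with the data",
    "Check my contacts, find John's email, and send him a meeting invite for tomorrow at 2pm",
]

for message in TEST_MESSAGES:
    resp = requests.post(
        "https://api.leanmcp.com/v1/test",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"mcp_id": MCP_ID, "message": message, "ai_model": "claude-3"},
        timeout=120,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    print(f"{message[:40]!r}: rate={data['success_rate']}, tools={data['tools_used']}")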
Interpreting Results
Success Rate
1.0: All tool calls worked perfectly
0.8-0.9: Mostly successful, minor issues
0.5-0.7: Some failures, needs improvement
< 0.5: Major issues, review tool descriptions
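If you want a test script to act on these thresholds, a small helper like the hypothetical interpret_success_rate below maps a success_rate value onto the bands above.

def interpret_success_rate(rate: float) -> str:
    """Map a success_rate (0-1) onto the bands described above."""
    if rate >= 1.0:
        return "All tool calls worked perfectly"
    if rate >= 0.8:
        return "Mostly successful, minor issues"
    if rate >= 0.5:
        return "Some failures, needs improvement"
    return "Major issues, review tool descriptions"


print(interpret_success_rate(0.85))  # Mostly successful, minor issues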
Common Issues
Wrong tools used: Tool descriptions too similar
Missing tools: AI needs tools that don't exist
Failed calls: Input validation or execution errors
No tools used: AI didn't understand the request
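One way to catch the first two issues automatically is to compare tools_used from a test run against the tools you expect a prompt to trigger. The check_tools helper below is purely illustrative; the expected-tool set will depend on your own MCP server.

def check_tools(expected: set[str], result: dict) -> None:
    """Compare the tools the AI actually called against what the prompt should trigger."""
    used = set(result["data"]["tools_used"])
    if not used:
        print("No tools used: the AI may not have understood the request")
    if used - expected:
        print("Unexpected tools used:", used - expected)  # descriptions may be too similar
    if expected - used:
        print("Expected tools missing:", expected - used)  # tool missing or not chosen


# Example: the email prompt from the first request should call send_email
check_tools({"send_email"}, {"data": {"tools_used": ["send_email"]}})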