A Node.js module for debugging API calls and running Postman collections with automatic authentication.
- API Debugging: Debug API calls with detailed request and response logging
- Postman Collection Runner: Run Postman collections with Newman
- Automatic Authentication: Automatically add authentication headers to API calls
- OpenAI API Integration: Test OpenAI API calls with a simple endpoint
- Web Interface: View debug status and API documentation in a web interface
# Install globally to use the CLI
npm install -g mcp-debugger

# Or install locally as a project dependency
npm install mcp-debugger
Create a `.env` file in your project root with your API keys:
# API Keys
OPENAI_API_KEY=your_openai_api_key
STRIPE_KEY=your_stripe_key
GITHUB_TOKEN=your_github_token
# Newman Configuration
NEWMAN_TIMEOUT=60000
NEWMAN_ITERATIONS=1
# Server Configuration
PORT=3002
NODE_ENV=development
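As a sketch of how these variables might be consumed, the helper below resolves the configuration above with fallback defaults. This is an illustration only: the function name and the exact defaults are assumptions, not the module's documented behavior (though the defaults shown match the values in the example `.env`).

```javascript
// Hypothetical sketch of configuration resolution; not the module's
// actual implementation. Defaults mirror the example .env above.
function resolveConfig(env) {
  return {
    port: parseInt(env.PORT, 10) || 3002,
    nodeEnv: env.NODE_ENV || 'development',
    newmanTimeout: parseInt(env.NEWMAN_TIMEOUT, 10) || 60000,
    newmanIterations: parseInt(env.NEWMAN_ITERATIONS, 10) || 1
  };
}

// Resolve from the real environment (dotenv or similar would populate it)
const config = resolveConfig(process.env);
console.log(config);
```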
You can also use the provided `.env.example` file as a template:
cp .env.example .env
# Start the server with default options
mcp-debugger
# Start the server with custom port
mcp-debugger --port 3003
# Start the server in production mode
mcp-debugger --env production
# Show help
mcp-debugger --help
const McpDebugger = require('mcp-debugger');
// Create a new instance with default options
const server = new McpDebugger();

// Alternatively, create an instance with custom options
// (a different variable name is used here to avoid redeclaring `server`)
const customServer = new McpDebugger({
  port: 3003,
  env: 'production',
  newmanTimeout: 30000,
  newmanIterations: 2
});
// Start the server
server.start()
.then(() => {
console.log('Server started');
})
.catch(err => {
console.error('Failed to start server:', err);
});
// Stop the server
server.stop()
.then(() => {
console.log('Server stopped');
})
.catch(err => {
console.error('Failed to stop server:', err);
});
// Enable debug mode for a URL
server.enableDebug('/api/status');
// Enable debug mode for all URLs
server.enableDebug('*');
// Disable debug mode for a URL
server.disableDebug('/api/status');
// Get debug status
const status = server.getDebugStatus();
console.log(status);
// Get the Express app instance for further customization
const app = server.getApp();
app.get('/custom-endpoint', (req, res) => {
  res.json({ message: 'Custom endpoint' });
});
- POST /debug/on: Enable debug mode for a URL
- POST /debug/off: Disable debug mode for a URL
- GET /debug/status: Check which URLs are in debug mode (JSON)
- GET /debug/status/view: View debug status in browser
- GET /api/status: Check API status
- POST /api/run-collection: Run a Postman collection
- POST /test-api: Test OpenAI API integration with a "Hello world" prompt
# Enable debug mode for a specific URL
curl -X POST http://localhost:3002/debug/on \
-H "Content-Type: application/json" \
-d '{
"url": "/api/status"
}'
# Enable debug mode for all URLs (using wildcard)
curl -X POST http://localhost:3002/debug/on \
-H "Content-Type: application/json" \
-d '{
"url": "*"
}'
# Disable debug mode for a URL
curl -X POST http://localhost:3002/debug/off \
-H "Content-Type: application/json" \
-d '{
"url": "/api/status"
}'
# Check debug status
curl http://localhost:3002/debug/status
# Run a Postman collection
curl -X POST http://localhost:3002/api/run-collection \
-H "Content-Type: application/json" \
-d '{
"collectionPath": "./collections/example.json",
"environmentPath": "./collections/example-environment.json"
}'
# Test OpenAI API integration with a "Hello world" prompt
curl -X POST http://localhost:3002/test-api \
-H "Content-Type: application/json"
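The same calls can be made from Node. The helper below is a hypothetical convenience (not part of the module) that builds the request descriptor for the debug toggle endpoints shown above; sending it assumes Node 18+, where `fetch` is built in.

```javascript
// Hypothetical helper: builds the request for POST /debug/on or /debug/off.
// Not part of mcp-debugger itself; shown for illustration.
function debugToggleRequest(baseUrl, action, url) {
  // `action` is 'on' or 'off', matching the endpoints above
  return {
    url: `${baseUrl}/debug/${action}`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ url })
    }
  };
}

// Sending it (Node 18+ built-in fetch; requires a running server):
// const req = debugToggleRequest('http://localhost:3002', 'on', '/api/status');
// const res = await fetch(req.url, req.options);
```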
The application automatically detects which service is being called in your Postman collections and adds the appropriate authentication headers using the API keys from your .env file.
- OpenAI: Uses OPENAI_API_KEY for api.openai.com
- Stripe: Uses STRIPE_KEY for api.stripe.com
- GitHub: Uses GITHUB_TOKEN for api.github.com
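The host-to-credential mapping above can be sketched as a small lookup. This is a hypothetical illustration, not the module's actual implementation; the header formats follow each provider's standard conventions (Bearer tokens for OpenAI and Stripe, `token` for GitHub).

```javascript
// Hypothetical sketch of the host-to-auth-header mapping described above.
// Not the module's actual implementation.
const AUTH_RULES = {
  'api.openai.com': (env) => ({ Authorization: `Bearer ${env.OPENAI_API_KEY}` }),
  'api.stripe.com': (env) => ({ Authorization: `Bearer ${env.STRIPE_KEY}` }),
  'api.github.com': (env) => ({ Authorization: `token ${env.GITHUB_TOKEN}` })
};

// Return the auth headers for a request URL, or {} for unknown hosts.
function authHeadersFor(url, env) {
  const { hostname } = new URL(url);
  const rule = AUTH_RULES[hostname];
  return rule ? rule(env) : {};
}
```

For example, a collection request to `https://api.openai.com/v1/chat/completions` would pick up an `Authorization: Bearer ...` header built from `OPENAI_API_KEY`, while requests to unrecognized hosts pass through untouched.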
When running Postman collections, the application provides comprehensive logging of all requests and responses in the terminal, including:
- Request Details: Method, URL, headers, and body
- Response Details: Status code, response time, headers, and body
- Test Results: All test assertions with pass/fail status
This detailed logging makes it easier to debug API calls and understand the complete request/response cycle.
This project is licensed under the MIT License - see the LICENSE file for details.