MCP Multi-Tool Execution with Browserbase
Overview
The External MCP Server tool now supports multi-tool execution with intelligent sequencing, allowing Claude to call multiple MCP tools in sequence to complete complex tasks. This is particularly powerful for browser automation workflows like Browserbase, where multiple steps are needed (create session → navigate → extract → close).
Key Features
- Up to 10 sequential tool calls in a single request
- Intelligent stopping when Claude determines the task is complete
- Real-time progress streaming showing each tool execution
- Comprehensive result accumulation for the smart model
- Configurable iteration limits to prevent runaway execution
Configuration in Frontend
Tool Setup
- Navigate to the Assistant Configuration
- Enable “External MCP Server” tool
- Configure the following fields:
```jsonc
{
  // MCP Server URL (include auth params for Browserbase)
  "url": "https://server.smithery.ai/@browserbasehq/mcp-browserbase/mcp?api_key=YOUR_KEY&profile=YOUR_PROFILE",
  // Max iterations (1-10)
  "max_iterations": "5",
  // Enable streaming of intermediate results
  "stream_intermediate": true,
  // Selected tools (* for all, or specific tool names)
  "selected_tools": ["*"]
}
```
Iteration Options
- 1 iteration - Single tool execution (traditional mode)
- 2 iterations - Simple two-step workflows
- 3 iterations - Default; handles most common patterns
- 5 iterations - Complex multi-step workflows
- 10 iterations - Maximum, for very complex automation
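Since the frontend sends `max_iterations` as a string, the backend presumably normalizes and clamps it to the supported 1-10 range. A minimal sketch of such a helper (the function name and fallback policy are illustrative, not the actual implementation):

```python
# Hypothetical helper: parse max_iterations from a tool config and clamp it
# to the supported 1-10 range, falling back to the default of 3.
def resolve_max_iterations(config: dict, default: int = 3) -> int:
    try:
        value = int(config.get("max_iterations", default))
    except (TypeError, ValueError):
        return default  # non-numeric input falls back to the default
    return max(1, min(value, 10))
```

With this, `{"max_iterations": "5"}` resolves to 5, an out-of-range `"99"` is clamped to 10, and a missing or malformed value falls back to 3.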
Example: Browserbase Web Scraping
Scenario
Extract news headlines from BBC News using Browserbase browser automation.
User Prompt
Navigate to https://www.bbc.com/news and extract the main headlines and their descriptions
Execution Flow
```mermaid
graph TD
    A[User Request] --> B[Claude Plans Steps]
    B --> C[Tool 1: Create Session]
    C --> D[Tool 2: Navigate to BBC]
    D --> E[Tool 3: Wait for Load]
    E --> F[Tool 4: Extract Headlines]
    F --> G[Tool 5: Close Session]
    G --> H[Claude Summarizes Results]
    H --> I[Smart Model Final Response]
```
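The loop driving this flow can be sketched as follows. Here `plan_next_step` and `call_tool` are hypothetical stand-ins for the model planner and the MCP client, not the actual backend functions:

```python
# Illustrative sketch of the multi-tool loop (not the actual backend code).
def run_multi_tool_loop(task, plan_next_step, call_tool, max_iterations=5):
    """Run up to max_iterations tool calls, stopping early when the planner is done."""
    results = []
    for iteration in range(1, max_iterations + 1):
        step = plan_next_step(task, results)  # the model decides the next call
        if step is None:                      # None signals the task is complete
            break
        output = call_tool(step["tool"], step.get("arguments", {}))
        results.append({"iteration": iteration, "tool": step["tool"], "output": output})
    return results

# Demo with a scripted planner that stops after two steps.
plan = iter([
    {"tool": "browserbase_session_create"},
    {"tool": "browserbase_navigate", "arguments": {"url": "https://www.bbc.com/news"}},
    None,
])
results = run_multi_tool_loop(
    "scrape headlines",
    plan_next_step=lambda task, results: next(plan),
    call_tool=lambda tool, args: f"{tool} executed",
)
```

The early `break` is what gives "intelligent stopping": the loop ends as soon as the planner returns no further step, regardless of the configured maximum.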
Real-Time Streaming Output
Users see progress as it happens:
```
🔧 Starting MCP tool execution...

📍 Calling tool: browserbase_session_create
✅ Result: Tool executed successfully
   Content: Session created with ID: abc123...

💬 Claude's analysis: Session created successfully. Now navigating to BBC News...

🔄 Iteration 2/5
📍 Calling tool: browserbase_navigate
   Arguments: {
     "url": "https://www.bbc.com/news",
     "session_id": "abc123"
   }
✅ Result: Tool executed successfully
   Content: Page loaded successfully. Title: "BBC News - Home"...

🔄 Iteration 3/5
📍 Calling tool: browserbase_extract
   Arguments: {
     "selector": "h2.media__title",
     "session_id": "abc123"
   }
✅ Result: Tool executed successfully
   Content: [
     "Global climate summit reaches historic agreement",
     "Tech giant announces major AI breakthrough",
     "Markets rally on positive economic data"
   ]...

🎯 Getting final summary...
```
Results in Answers Dict
The complete extracted content is preserved:
```markdown
## MCP Tool Execution Results

### Tool 1: browserbase_session_create
Session ID: abc123-def456-ghi789
Status: Active

### Tool 2: browserbase_navigate
URL: https://www.bbc.com/news
Page Title: BBC News - Home
Load Time: 1.2s

### Tool 3: browserbase_extract
Headlines:
- Global climate summit reaches historic agreement
- Tech giant announces major AI breakthrough
- Markets rally on positive economic data
[... full extracted content ...]

## Summary
Successfully extracted 15 headlines from BBC News. The top stories cover climate policy, technology developments, and economic news...
```
API Integration
Using the Aitana CLI
```bash
# Create or update an assistant with external_mcp
aitana assistant create \
  --name "Web Scraper" \
  --tools external_mcp \
  --config '{"external_mcp": {"url": "YOUR_MCP_URL", "max_iterations": "5"}}'

# Call the assistant
aitana assistant call web-scraper \
  -p "Go to Reuters.com and extract today's top business news"
```
Direct API Call
```python
import asyncio

import aiohttp

async def call_mcp_assistant():
    config = {
        "question": "Navigate to example.com and extract all links",
        "emissaryConfig": {
            "tools": ["external_mcp"],
            "toolConfigs": {
                "external_mcp": {
                    "url": "https://your-mcp-server.com/mcp",
                    "auth": "Bearer YOUR_TOKEN",
                    "max_iterations": "5",
                    "stream_intermediate": True,
                    "selected_tools": ["*"]
                }
            },
            "model": "gemini-2.5-flash"
        }
    }

    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:1956/vac/assistant/YOUR_ASSISTANT_ID",
            json=config,
            headers={"Accept": "text/event-stream"}
        ) as response:
            # Print each server-sent event line as it arrives
            async for line in response.content:
                if line:
                    print(line.decode("utf-8"))

asyncio.run(call_mcp_assistant())
```
Backend Implementation Details
Flow Through the System
- Frontend Configuration → sent in `toolConfigs`
- Tool Orchestrator → preserves `max_iterations` and `stream_intermediate`
- MCP Client → extracts settings and passes them to the Anthropic execution
- Anthropic MCP → executes the multi-tool loop with streaming
- Results Accumulation → all tool outputs combined
- Smart Model → receives the complete context for the final response
Key Files Modified
- frontend/src/components/tools/ToolSelector.tsx - UI configuration
- backend/tools/tool_orchestrator.py - Config transformation
- backend/tools/mcp_client.py - Parameter passing
- backend/models/anthropic_mcp.py - Multi-tool loop implementation
Best Practices
1. Set Appropriate Iteration Limits
- Simple queries: 1-2 iterations
- Standard workflows: 3 iterations (default)
- Complex automation: 5 iterations
- Only use 10 for exceptional cases
2. Enable Streaming for Visibility
- Always enable `stream_intermediate` for user feedback
- Helps debug if something goes wrong
- Shows progress for long-running operations
3. Tool Selection
- Use `["*"]` to let Claude choose tools
- Or specify exact tools for controlled execution
- Example: `["browserbase_session_create", "browserbase_navigate", "browserbase_extract"]`
4. Error Handling
- Claude will stop on unrecoverable errors
- Check streaming output for error messages
- Results include both successes and failures
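A minimal sketch of how results might accumulate across both successes and failures; the `execute_with_accumulation` helper and its stop-on-error policy are illustrative, not the actual implementation:

```python
# Illustrative: run an ordered tool sequence, recording successes and failures.
# `call_tool` is a hypothetical MCP client call that may raise on error.
def execute_with_accumulation(steps, call_tool):
    results = []
    for tool, args in steps:
        try:
            results.append({"tool": tool, "ok": True, "output": call_tool(tool, args)})
        except Exception as exc:
            # Record the failure, then stop: an unrecoverable error ends the run.
            results.append({"tool": tool, "ok": False, "error": str(exc)})
            break
    return results

# Demo: the second step fails, so the third is never attempted.
def fake_call(tool, args):
    if tool == "browserbase_navigate":
        raise RuntimeError("page unreachable")
    return "ok"

results = execute_with_accumulation(
    [("browserbase_session_create", {}),
     ("browserbase_navigate", {"url": "https://example.com"}),
     ("browserbase_extract", {"selector": "h2"})],
    fake_call,
)
```

Because failures are appended before the loop stops, the final results list shows exactly where and why a run ended.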
Common Use Cases
Web Scraping
Create session → Navigate → Wait → Extract → Screenshot → Close
Form Automation
Create session → Navigate → Fill form → Submit → Wait → Extract confirmation → Close
Multi-Page Data Collection
Create session → Navigate page 1 → Extract → Navigate page 2 → Extract → ... → Close
Testing Workflows
Create session → Login → Navigate to feature → Perform action → Verify result → Logout → Close
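Patterns like the multi-page collection above can be written out as an ordered tool sequence. This sketch reuses the Browserbase tool names from the earlier examples; the builder function itself is hypothetical:

```python
# Illustrative: expand the multi-page collection pattern into an ordered
# sequence of (tool, arguments) steps a planner could emit.
def build_multi_page_workflow(urls, selector):
    steps = [("browserbase_session_create", {})]
    for url in urls:
        steps.append(("browserbase_navigate", {"url": url}))
        steps.append(("browserbase_extract", {"selector": selector}))
    steps.append(("browserbase_session_close", {}))
    return steps

workflow = build_multi_page_workflow(
    ["https://example.com/page1", "https://example.com/page2"], "h2"
)
```

Note that a two-page run already takes six steps, which is why multi-page collection is the kind of workflow that benefits from the higher iteration limits.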
Troubleshooting
Issue: Tools not executing in sequence
Solution: Check that `max_iterations` is set to a value greater than 1
Issue: No results in answers dict
Solution: Verify that `stream_intermediate` is not preventing accumulation
Issue: Timeout errors
Solution: Enable streaming; it keeps the connection alive for operations of up to 10 minutes
Issue: Too many iterations
Solution: Claude stops on its own once the task is complete, but you can also lower `max_iterations`
Performance Considerations
- Each iteration adds 1-3 seconds of planning time
- Tool execution time varies by operation
- Streaming adds minimal overhead
- Results accumulation happens in memory
Security Notes
- Authentication can be in URL params or headers
- Browserbase requires params in the URL: `?api_key=XXX&profile=YYY`
- Standard MCP servers use the `auth` field for headers
- Never expose credentials in logs or UI
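To illustrate the two authentication styles, a small sketch using only the standard library (the helper names are illustrative; the query-parameter names follow the Browserbase URL shown earlier):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

# Browserbase style: credentials appended as URL query parameters.
def with_query_auth(url: str, api_key: str, profile: str) -> str:
    parts = urlsplit(url)
    extra = urlencode({"api_key": api_key, "profile": profile})
    query = f"{parts.query}&{extra}" if parts.query else extra
    return urlunsplit(parts._replace(query=query))

# Standard MCP style: a bearer token sent as an Authorization header.
def header_auth(token: str) -> dict:
    return {"Authorization": f"Bearer {token}"}
```

Either way, the resulting URL or header carries the credential, so keep both out of logs and the UI.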
Future Enhancements
- Parallel tool execution for independent operations
- Conditional branching based on results
- Retry logic for failed tools
- Tool result caching between iterations
- Visual workflow builder in UI