The Model Context Protocol (MCP) allows AI assistants to interact with external tools and services. The HopX MCP server provides a standardized way for AI assistants to execute code safely in isolated environments.
What You’ll Learn
In this guide, you’ll learn how to:
- Install the HopX MCP server using uvx
- Configure the MCP server in Cursor, VS Code, Claude Desktop, and other IDEs
- Enable AI assistants to execute code in isolated sandboxes
- Use different execution modes (isolated, persistent, rich, background)
- Set up multi-step workflows and rich output capture
- Understand security and best practices for MCP integration
Prerequisites
Before you begin, make sure you have:
- Python 3.14+ installed (required for uvx)
- uvx installed (get it from uv’s documentation)
- HopX API key from hopx.ai
- MCP-compatible IDE (Cursor, VS Code, Claude Desktop, Windsurf, etc.)
- Basic familiarity with your IDE’s configuration files
Quick Start
The easiest way to get started:
- Install the MCP server: uvx HopX-mcp
- Get your API key from hopx.ai
- Configure your IDE with the MCP server
- Your AI assistant can now execute code safely
Installation
Install the HopX MCP server using uvx:
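```bash
uvx HopX-mcp
```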
Get Your API Key
Sign up at hopx.ai to get your free API key. You’ll need this to configure the MCP server.
Configuration
After installing with uvx HopX-mcp, configure your IDE by adding the MCP server configuration:
The example below uses Cursor; VS Code, Claude Desktop, Windsurf, and other MCP-compatible IDEs accept the same JSON structure in their own configuration files.
Add the configuration to .cursor/mcp.json in your project or workspace, replacing your-api-key-here with your actual API key from hopx.ai.
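A minimal configuration looks like the following sketch. The server name hopx and the HOPX_API_KEY environment variable are assumptions for illustration; check the HopX documentation for the exact variable name the server expects.

```json
{
  "mcpServers": {
    "hopx": {
      "command": "uvx",
      "args": ["HopX-mcp"],
      "env": {
        "HOPX_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

After saving the file, you may need to restart your IDE (or reload its MCP servers) so the new server is picked up.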
What This Enables
With the HopX MCP server, your AI assistant can:
- ✅ Execute Python, JavaScript, Bash, and Go in isolated containers
- ✅ Analyze data with pandas, numpy, matplotlib (pre-installed)
- ✅ Test code snippets before you use them in production
- ✅ Process data securely without touching your local system
- ✅ Run system commands safely in isolated environments
- ✅ Install packages and test integrations on-the-fly
Execution Modes
The HopX MCP server provides a unified execute_code() function with multiple execution modes:
- isolated (default) - One-shot execution: creates a new sandbox, executes code, returns output, and auto-destroys. Perfect for quick scripts and data analysis.
- persistent - Execute in an existing sandbox: use for multi-step workflows where you need to maintain state between executions.
- rich - Execute with rich output capture: automatically captures matplotlib plots, pandas DataFrames, and other visualizations.
- background - Non-blocking execution: starts code execution in the background and returns immediately with a process ID.
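As a sketch of how these modes are selected, the parameter names below mirror the tool calls shown later in this guide; treat this as illustrative rather than the definitive tool signature:

```python
# Isolated mode (the default): a fresh sandbox is created, the code runs,
# its output is returned, and the sandbox is destroyed automatically.
execute_code(code="print(sum([1, 2, 3, 4, 5]))")

# Background mode: starts execution without blocking and returns
# immediately with a process ID that can be checked later.
# (run_long_task is a hypothetical long-running script.)
execute_code(code="run_long_task()", mode="background")
```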
Advanced Usage
Multi-Step Workflows
For workflows that require multiple steps (e.g., installing packages, then running code), create a persistent sandbox:
You: “Install pandas and analyze this data: [1, 2, 3, 4, 5]”
Claude: Creates a sandbox, then executes multiple commands:
- create_sandbox(template_id="code-interpreter") - Creates a persistent sandbox
- execute_code(sandbox_id="...", code="pip install pandas", mode="persistent") - Installs the package
- execute_code(sandbox_id="...", code="import pandas as pd; ...", mode="persistent") - Runs the analysis
- delete_sandbox(sandbox_id="...") - Cleans up
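Put together, the sequence of tool calls looks roughly like the sketch below. The sandbox.id field and the one-line analysis snippet are illustrative assumptions; the calls above pass sandbox_id explicitly.

```python
# 1. Create a persistent sandbox from the code-interpreter template.
#    (sandbox.id is assumed here; the actual return shape may differ.)
sandbox = create_sandbox(template_id="code-interpreter")

# 2. Install dependencies; state persists between persistent-mode calls.
execute_code(sandbox_id=sandbox.id, code="pip install pandas", mode="persistent")

# 3. Reuse the installed package in the same sandbox.
execute_code(
    sandbox_id=sandbox.id,
    code="import pandas as pd; print(pd.Series([1, 2, 3, 4, 5]).describe())",
    mode="persistent",
)

# 4. Clean up the sandbox once the workflow is finished.
delete_sandbox(sandbox_id=sandbox.id)
```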
Rich Output Capture
For data visualization, use mode="rich" to automatically capture plots and DataFrames:
You: “Create a plot of [1, 2, 3, 4, 5]”
Claude: Uses rich mode to capture the visualization:
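A call in rich mode might look like the sketch below; the plotting code itself is illustrative, and rich mode captures the rendered figure automatically:

```python
# mode="rich" captures the matplotlib figure as structured output
# rather than only returning text written to stdout.
execute_code(
    code=(
        "import matplotlib.pyplot as plt\n"
        "plt.plot([1, 2, 3, 4, 5])\n"
        "plt.title('Simple line plot')\n"
        "plt.show()"
    ),
    mode="rich",
)
```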
Related
- LLM Integration - Integrate OpenAI, Anthropic, and other LLMs
- Quickstart Guide - Get started with HopX sandboxes
- Code Execution - Learn about code execution concepts
- Creating Sandboxes - Understand sandbox creation
Next Steps
Now that you’ve set up MCP integration, explore more:
- Learn about code execution: Synchronous Execution and Background Execution
- Explore templates: Listing Templates to find the right environment
- Try rich output capture: Rich Output Capture for data visualization
- Integrate with LLMs: LLM Integration Guide for direct LLM integration
- Review best practices: Cookbooks for real-world examples

