Monitor the execution result cache to understand cache performance, optimize your code patterns, and track cache usage over time.
## Overview
HopX caches execution results to improve performance for repeated code executions. Cache statistics help you:
- Monitor cache hit rates
- Track cache size and entry counts
- Optimize code patterns for better caching
- Understand cache effectiveness
The cache stores execution results based on code content and environment. Identical code executions with the same environment variables will hit the cache.
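Conceptually, a result cache like this can be keyed on a hash of the code plus its environment. The sketch below is purely illustrative — `cache_key` is a hypothetical helper, not part of the HopX SDK — but it shows why identical code with identical environment variables hits the cache while a changed variable misses:

```python
import hashlib
import json

def cache_key(code: str, env: dict) -> str:
    # Combine the code content with sorted environment variables so that
    # identical code run under an identical environment yields the same key
    payload = json.dumps({"code": code, "env": env}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Same code + same env -> same key (would be a cache hit)
k1 = cache_key("print('hi')", {"MODE": "prod"})
k2 = cache_key("print('hi')", {"MODE": "prod"})

# Same code, different env -> different key (would be a cache miss)
k3 = cache_key("print('hi')", {"MODE": "dev"})
```

HopX's actual key derivation may differ; the point is that both the code content and the environment participate in cache identity.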
## Getting Cache Statistics

Retrieve current cache statistics:

```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

# Get cache statistics
stats = sandbox.cache.stats()

# Cache data is nested under 'cache' key
cache_data = stats['cache']

print(f"Total Hits: {cache_data['total_hits']}")
print(f"Cache Size: {cache_data['size']}")
print(f"Max Size: {cache_data['max_size']}")
print(f"TTL: {cache_data['ttl']}")
print(f"Timestamp: {stats['timestamp']}")

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

// Get cache statistics
const stats = await sandbox.cache.stats();

// Cache data is nested under 'cache' key
const cacheData = stats.cache;

console.log(`Total Hits: ${cacheData.total_hits}`);
console.log(`Cache Size: ${cacheData.size}`);
console.log(`Max Size: ${cacheData.max_size}`);
console.log(`TTL: ${cacheData.ttl}`);
console.log(`Timestamp: ${stats.timestamp}`);

await sandbox.kill();
```
## Understanding Cache Metrics

### Cache Size and Hits
Monitor cache usage and performance:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

stats = sandbox.cache.stats()
cache_data = stats['cache']

print(f"Total Cache Hits: {cache_data['total_hits']}")
print(f"Current Cache Size: {cache_data['size']}")
print(f"Max Cache Size: {cache_data['max_size']}")
print(f"Cache TTL: {cache_data['ttl']}")

if cache_data['total_hits'] > 100:
    print("✅ Cache is being utilized effectively")
else:
    print("💡 Consider optimizing code patterns for better caching")

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

const stats = await sandbox.cache.stats();
const cacheData = stats.cache;

console.log(`Total Cache Hits: ${cacheData.total_hits}`);
console.log(`Current Cache Size: ${cacheData.size}`);
console.log(`Max Cache Size: ${cacheData.max_size}`);
console.log(`Cache TTL: ${cacheData.ttl}`);

if (cacheData.total_hits > 100) {
  console.log('✅ Cache is being utilized effectively');
} else {
  console.log('💡 Consider optimizing code patterns for better caching');
}

await sandbox.kill();
```
### Cache Size
Monitor cache size to ensure it’s not consuming excessive resources:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

stats = sandbox.cache.stats()
cache_data = stats['cache']

cache_size = cache_data['size']
max_size = cache_data['max_size']

print(f"Cache Size: {cache_size}")
print(f"Max Cache Size: {max_size}")
print(f"Cache TTL: {cache_data['ttl']}")

if cache_size > max_size * 0.8:
    print("⚠️ Cache size is approaching maximum. Consider clearing if needed.")
elif cache_size > max_size * 0.5:
    print("Cache size is moderate.")
else:
    print("Cache size is small.")

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

const stats = await sandbox.cache.stats();
const cacheData = stats.cache;

const cacheSize = cacheData.size;
const maxSize = cacheData.max_size;

console.log(`Cache Size: ${cacheSize}`);
console.log(`Max Cache Size: ${maxSize}`);
console.log(`Cache TTL: ${cacheData.ttl}`);

if (cacheSize > maxSize * 0.8) {
  console.log('⚠️ Cache size is approaching maximum. Consider clearing if needed.');
} else if (cacheSize > maxSize * 0.5) {
  console.log('Cache size is moderate.');
} else {
  console.log('Cache size is small.');
}

await sandbox.kill();
```
### Tracking Performance Over Time

Track cache performance over time:
```python
from hopx_ai import Sandbox
import time

sandbox = Sandbox.create(template="code-interpreter")

# Run some code to generate cache entries
for i in range(5):
    sandbox.run_code(f"print('Execution {i}')")
    time.sleep(1)

# Check cache statistics
stats = sandbox.cache.stats()
cache_data = stats['cache']

print("Cache Performance:")
print(f"  Total Cache Hits: {cache_data['total_hits']}")
print(f"  Current Cache Size: {cache_data['size']}")
print(f"  Max Cache Size: {cache_data['max_size']}")
print(f"  Cache TTL: {cache_data['ttl']}")
print(f"  Timestamp: {stats['timestamp']}")

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

// Run some code to generate cache entries
for (let i = 0; i < 5; i++) {
  await sandbox.runCode(`print('Execution ${i}')`);
  await new Promise(resolve => setTimeout(resolve, 1000));
}

// Check cache statistics
const stats = await sandbox.cache.stats();
const cacheData = stats.cache;

console.log('Cache Performance:');
console.log(`  Total Cache Hits: ${cacheData.total_hits}`);
console.log(`  Current Cache Size: ${cacheData.size}`);
console.log(`  Max Cache Size: ${cacheData.max_size}`);
console.log(`  Cache TTL: ${cacheData.ttl}`);
console.log(`  Timestamp: ${stats.timestamp}`);

await sandbox.kill();
```
## Cache Statistics Response
The cache statistics response includes:
```json
{
  "cache": {
    "max_size": 1000,
    "size": 0,
    "total_hits": 0,
    "ttl": "5m0s"
  },
  "timestamp": "2025-01-27T19:43:55Z"
}
```
### Field Descriptions

- `cache.max_size`: Maximum cache size limit
- `cache.size`: Current cache size
- `cache.total_hits`: Total number of cache hits (cached results reused)
- `cache.ttl`: Time-to-live for cache entries
- `timestamp`: Timestamp when the statistics were retrieved
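The `ttl` field is a duration string (e.g. `5m0s`). To work with it numerically, you can parse it yourself — the `parse_ttl` helper below is a sketch, not part of the SDK, and assumes the `h`/`m`/`s` format shown in the example response:

```python
import re

def parse_ttl(ttl: str) -> int:
    """Convert a duration string like '5m0s' or '1h30m' to total seconds."""
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", ttl):
        total += int(value) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

# Example response shape from the docs above
stats = {
    "cache": {"max_size": 1000, "size": 250, "total_hits": 42, "ttl": "5m0s"},
    "timestamp": "2025-01-27T19:43:55Z",
}
cache = stats["cache"]

# Derived metric: how full the cache is relative to its limit
fill_ratio = cache["size"] / cache["max_size"]
print(f"Fill: {fill_ratio:.0%}, TTL: {parse_ttl(cache['ttl'])}s")
```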
## Use Cases

### Optimizing for Cache
Test cache effectiveness with repeated executions:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

# Initial execution (cache miss)
code = "result = sum(range(1000000)); print(f'Sum: {result}')"
sandbox.run_code(code)

# Get initial stats
initial_stats = sandbox.cache.stats()
initial_hits = initial_stats['cache']['total_hits']
print(f"After first execution - Total Hits: {initial_hits}")

# Repeat same code (should hit cache)
sandbox.run_code(code)

# Check stats after repeat
final_stats = sandbox.cache.stats()
final_hits = final_stats['cache']['total_hits']
print(f"After repeat execution - Total Hits: {final_hits}")

if final_hits > initial_hits:
    print("✅ Cache hit confirmed - repeated execution used cached result")
else:
    print("⚠️ No cache hit - execution may have been different")

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

// Initial execution (cache miss)
const code = "result = sum(range(1000000)); print(f'Sum: {result}')";
await sandbox.runCode(code);

// Get initial stats
const initialStats = await sandbox.cache.stats();
const initialHits = initialStats.cache.total_hits;
console.log(`After first execution - Total Hits: ${initialHits}`);

// Repeat same code (should hit cache)
await sandbox.runCode(code);

// Check stats after repeat
const finalStats = await sandbox.cache.stats();
const finalHits = finalStats.cache.total_hits;
console.log(`After repeat execution - Total Hits: ${finalHits}`);

if (finalHits > initialHits) {
  console.log('✅ Cache hit confirmed - repeated execution used cached result');
} else {
  console.log('⚠️ No cache hit - execution may have been different');
}

await sandbox.kill();
```
### Monitoring Cache Growth
Track how cache grows over time:
```python
from hopx_ai import Sandbox
import time

sandbox = Sandbox.create(template="code-interpreter")

# Monitor cache growth
for i in range(10):
    # Execute different code each time
    sandbox.run_code(f"print('Execution {i}: {i * 10}')")

    stats = sandbox.cache.stats()
    cache_data = stats['cache']
    print(f"After execution {i+1}: Size={cache_data['size']}, Hits={cache_data['total_hits']}")

    time.sleep(0.5)

sandbox.kill()
```
```javascript
import { Sandbox } from '@hopx-ai/sdk';

const sandbox = await Sandbox.create({
  template: 'code-interpreter'
});

// Monitor cache growth
for (let i = 0; i < 10; i++) {
  // Execute different code each time
  await sandbox.runCode(`print('Execution ${i}: ${i * 10}')`);

  const stats = await sandbox.cache.stats();
  const cacheData = stats.cache;
  console.log(`After execution ${i + 1}: Size=${cacheData.size}, Hits=${cacheData.total_hits}`);

  await new Promise(resolve => setTimeout(resolve, 500));
}

await sandbox.kill();
```
## Best Practices

- **Monitor total hits** to understand cache effectiveness. A high number of total hits indicates good cache utilization.
- **Cache works best with deterministic code.** Code that produces the same output for the same input benefits most from caching.
- **Cache size can grow over time.** Monitor cache size and clear it periodically if it becomes too large.
- **Environment variables affect caching.** Code run with different environment variables will have separate cache entries.
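These practices can be rolled into a small health check. The `cache_health` function below is a hypothetical helper (not part of the SDK) that reuses the thresholds from the examples in this guide — 80% fill for capacity warnings and 100 total hits for effectiveness:

```python
def cache_health(cache: dict) -> str:
    """Classify cache state from a stats()['cache'] payload.

    Thresholds mirror the examples in this guide:
    fill > 80% -> near-capacity, total_hits > 100 -> effective.
    """
    fill = cache["size"] / cache["max_size"] if cache["max_size"] else 0.0
    if fill > 0.8:
        return "near-capacity"
    if cache["total_hits"] > 100:
        return "effective"
    return "warming-up"

print(cache_health({"size": 900, "max_size": 1000, "total_hits": 500}))  # near-capacity
```

You could call this on the `cache` dict returned by `sandbox.cache.stats()` and clear or alert based on the result.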
## API Reference

### Python SDK

- `sandbox.cache.stats(*, timeout=None)` - Get cache statistics

### JavaScript SDK

- `sandbox.cache.stats()` - Get cache statistics

### API Endpoint

- `GET /cache/stats` - Get execution cache statistics
## Next Steps