Monitor the execution result cache to understand cache performance, optimize your code patterns, and track cache usage over time.
## Overview
HopX caches execution results to improve performance for repeated code executions. Cache statistics help you:
- Monitor cache hit rates
- Track cache size and entry counts
- Optimize code patterns for better caching
- Understand cache effectiveness
The cache stores execution results based on code content and environment. Identical code executions with the same environment variables will hit the cache.
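Conceptually, the cache key behaves like a hash of the code content plus the environment. The sketch below is plain Python, not the HopX implementation, and only illustrates why identical code with identical environment variables maps to the same cache entry while a changed environment does not:

```python
import hashlib
import json

def cache_key(code: str, env: dict) -> str:
    """Illustrative only: derive a stable key from code content and environment."""
    payload = code + json.dumps(env, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Same code + same env -> same key (a cache hit in concept)
print(cache_key("print('hi')", {"MODE": "prod"}) == cache_key("print('hi')", {"MODE": "prod"}))

# Same code + different env -> different key (separate cache entry)
print(cache_key("print('hi')", {"MODE": "prod"}) == cache_key("print('hi')", {"MODE": "dev"}))
```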
## Getting Cache Statistics
Retrieve current cache statistics:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

# Get cache statistics
stats = sandbox.cache.stats()

# Cache data is nested under the 'cache' key
cache_data = stats['cache']

print(f"Total Hits: {cache_data['total_hits']}")
print(f"Cache Size: {cache_data['size']}")
print(f"Max Size: {cache_data['max_size']}")
print(f"TTL: {cache_data['ttl']}")
print(f"Timestamp: {stats['timestamp']}")

sandbox.kill()
```
## Understanding Cache Metrics

### Cache Size and Hits
Monitor cache usage and performance:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

stats = sandbox.cache.stats()
cache_data = stats['cache']

print(f"Total Cache Hits: {cache_data['total_hits']}")
print(f"Current Cache Size: {cache_data['size']}")
print(f"Max Cache Size: {cache_data['max_size']}")
print(f"Cache TTL: {cache_data['ttl']}")

if cache_data['total_hits'] > 100:
    print("✅ Cache is being utilized effectively")
else:
    print("💡 Consider optimizing code patterns for better caching")

sandbox.kill()
```
### Cache Size
Monitor cache size to ensure it’s not consuming excessive resources:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

stats = sandbox.cache.stats()
cache_data = stats['cache']

cache_size = cache_data['size']
max_size = cache_data['max_size']

print(f"Cache Size: {cache_size}")
print(f"Max Cache Size: {max_size}")
print(f"Cache TTL: {cache_data['ttl']}")

if cache_size > max_size * 0.8:
    print("⚠️ Cache size is approaching maximum. Consider clearing if needed.")
elif cache_size > max_size * 0.5:
    print("Cache size is moderate.")
else:
    print("Cache size is small.")

sandbox.kill()
```
### Tracking Performance Over Time

Track cache performance over time:
```python
from hopx_ai import Sandbox
import time

sandbox = Sandbox.create(template="code-interpreter")

# Run some code to generate cache entries
for i in range(5):
    sandbox.run_code(f"print('Execution {i}')")
    time.sleep(1)

# Check cache statistics
stats = sandbox.cache.stats()
cache_data = stats['cache']

print("Cache Performance:")
print(f"  Total Cache Hits: {cache_data['total_hits']}")
print(f"  Current Cache Size: {cache_data['size']}")
print(f"  Max Cache Size: {cache_data['max_size']}")
print(f"  Cache TTL: {cache_data['ttl']}")
print(f"  Timestamp: {stats['timestamp']}")

sandbox.kill()
```
## Cache Statistics Response
The cache statistics response includes:
```json
{
  "cache": {
    "max_size": 1000,
    "size": 0,
    "total_hits": 0,
    "ttl": "5m0s"
  },
  "timestamp": "2025-01-27T19:43:55Z"
}
```
### Field Descriptions
- `cache.max_size`: Maximum cache size limit
- `cache.size`: Current cache size
- `cache.total_hits`: Total number of cache hits (cached results reused)
- `cache.ttl`: Time-to-live for cache entries
- `timestamp`: Timestamp when the statistics were retrieved
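The `ttl` field is formatted as a Go-style duration string (e.g. `"5m0s"`). If you need it as a number of seconds, a small parser like the sketch below handles the common `h`/`m`/`s` components (this assumes the response always uses that format; adjust if yours differs):

```python
import re

def ttl_to_seconds(ttl: str) -> int:
    """Convert a Go-style duration string such as '5m0s' or '1h30m' to seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)([hms])", ttl))

print(ttl_to_seconds("5m0s"))  # 300
```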
## Use Cases

### Optimizing for Cache
Test cache effectiveness with repeated executions:
```python
from hopx_ai import Sandbox

sandbox = Sandbox.create(template="code-interpreter")

# Initial execution (cache miss)
code = "result = sum(range(1000000)); print(f'Sum: {result}')"
sandbox.run_code(code)

# Get initial stats
initial_stats = sandbox.cache.stats()
initial_hits = initial_stats['cache']['total_hits']
print(f"After first execution - Total Hits: {initial_hits}")

# Repeat the same code (should hit the cache)
sandbox.run_code(code)

# Check stats after the repeat
final_stats = sandbox.cache.stats()
final_hits = final_stats['cache']['total_hits']
print(f"After repeat execution - Total Hits: {final_hits}")

if final_hits > initial_hits:
    print("✅ Cache hit confirmed - repeated execution used cached result")
else:
    print("⚠️ No cache hit - execution may have been different")

sandbox.kill()
```
### Monitoring Cache Growth
Track how cache grows over time:
```python
from hopx_ai import Sandbox
import time

sandbox = Sandbox.create(template="code-interpreter")

# Monitor cache growth
for i in range(10):
    # Execute different code each time
    sandbox.run_code(f"print('Execution {i}: {i * 10}')")

    stats = sandbox.cache.stats()
    cache_data = stats['cache']
    print(f"After execution {i+1}: Size={cache_data['size']}, Hits={cache_data['total_hits']}")

    time.sleep(0.5)

sandbox.kill()
```
## Best Practices
- **Monitor total hits** to understand cache effectiveness. A high number of total hits indicates good cache utilization.
- **Cache works best with deterministic code.** Code that produces the same output for the same input will benefit most from caching.
- **Cache size can grow over time.** Monitor cache size and clear it periodically if it becomes too large.
- **Environment variables affect caching.** Code with different environment variables will have separate cache entries.
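The practices above can be folded into a small helper. This is a sketch built only on the fields shown in the stats response; the thresholds are illustrative choices, not part of the HopX API:

```python
def cache_health(cache_data: dict) -> str:
    """Summarize cache health from the 'cache' section of sandbox.cache.stats().

    Thresholds (80% utilization, zero hits) are illustrative, not prescribed by HopX.
    """
    max_size = cache_data["max_size"]
    utilization = cache_data["size"] / max_size if max_size else 0.0

    if utilization > 0.8:
        return "near-full: consider clearing the cache"
    if cache_data["total_hits"] == 0 and cache_data["size"] > 0:
        return "no hits yet: code patterns may not be repeating"
    return "healthy"

# Example with a hand-written stats payload
print(cache_health({"max_size": 1000, "size": 900, "total_hits": 50, "ttl": "5m0s"}))
```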
## API Reference

### Python SDK

`sandbox.cache.stats(*, timeout=None)` - Get cache statistics

### JavaScript SDK

`sandbox.cache.stats()` - Get cache statistics

### API Endpoint

- `GET /cache/stats` - Get execution cache statistics
## Next Steps