llama-stack/llama_stack/templates/vllm-gpu
Ashwin Bharambe 272d3359ee
fix: remove code interpreter implementation (#2087)
# What does this PR do?

The built-in code interpreter implementation is not robust and relies on a weak sandbox (a `bubblewrap` container). With better MCP-based code interpreter servers becoming available, we should use those instead of baking an implementation into the Stack and expanding the Stack's vulnerability surface.

This PR only performs the removal. Examples showing how to integrate with MCP servers will follow in subsequent PRs.
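For context, MCP tool servers speak JSON-RPC 2.0, and a code interpreter exposed over MCP would be invoked via a `tools/call` request along these lines. This is a minimal sketch of the wire format only; the `execute_code` tool name and its argument shape are hypothetical and not defined by this PR:

```python
import json


def mcp_tool_call_request(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as used by the
    Model Context Protocol (MCP). A real code-interpreter MCP server
    defines its own tool names and argument schemas."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })


# Hypothetical call to an MCP server exposing a sandboxed interpreter tool.
req = mcp_tool_call_request(1, "execute_code", {"code": "print(2 + 2)"})
print(req)
```

The point of the move is that sandboxing (containers, seccomp, resource limits) becomes the MCP server's responsibility rather than the Stack's.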

## Test Plan

Existing tests.
2025-05-01 14:35:08 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | Update more distribution docs to be simpler and partially codegen'ed | 2024-11-20 22:03:44 -08:00 |
| `build.yaml` | fix: remove code interpreter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| `run.yaml` | fix: remove code interpreter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| `vllm.py` | fix: remove code interpreter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |