Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-08 19:10:56 +00:00)
This keeps the prompt encoding layer in our control (see the `chat_completion_request_to_prompt()` method).
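To illustrate the idea of keeping prompt encoding inside the stack, here is a minimal sketch of such a conversion step. The `Message` and `ChatCompletionRequest` types and the Llama-2-style chat template below are assumptions for illustration only, not the actual llama-stack types or its real template:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical message types, standing in for the real llama-stack models.
@dataclass
class Message:
    role: str      # "system", "user", or "assistant"
    content: str

@dataclass
class ChatCompletionRequest:
    messages: List[Message] = field(default_factory=list)

def chat_completion_request_to_prompt(request: ChatCompletionRequest) -> str:
    """Encode structured chat messages into one flat prompt string.

    Doing this in the stack, rather than delegating to the model server,
    keeps the prompt format under the stack's control in a single place.
    A simple Llama-2-style template is used here purely as an example.
    """
    parts = []
    for msg in request.messages:
        if msg.role == "system":
            parts.append(f"<<SYS>>\n{msg.content}\n<</SYS>>")
        elif msg.role == "user":
            parts.append(f"[INST] {msg.content} [/INST]")
        else:  # assistant turn: emitted verbatim
            parts.append(msg.content)
    return "\n".join(parts)

req = ChatCompletionRequest(messages=[
    Message(role="system", content="You are helpful."),
    Message(role="user", content="Hi"),
])
print(chat_completion_request_to_prompt(req))
```

The benefit of centralizing this step is that every inference provider receives an identically encoded prompt, so template changes happen in one function rather than per backend.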
Directory contents:

- `..`
- `agents/`
- `inference/`
- `memory/`
- `safety/`
- `telemetry/`
- `__init__.py`