Latest commit: This keeps the prompt encoding layer in our control (see `chat_completion_request_to_prompt()` method)

Contents:

- inference
- kvstore
- memory
- telemetry
- __init__.py
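
The commit message above refers to rendering chat messages into a raw prompt string inside the stack's own inference utilities, rather than deferring to whatever chat template a remote serving backend applies. Below is a minimal sketch of that idea, not the actual llama-stack implementation: the `Message` and `ChatCompletionRequest` dataclasses are simplified stand-ins, and the Llama-3-style header tokens are used purely for illustration.

```python
# Sketch only: illustrates client-side prompt encoding, assuming a
# Llama-3-style chat template. Names are simplified stand-ins, not
# the real llama-stack types.
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    role: str      # "system" | "user" | "assistant"
    content: str


@dataclass
class ChatCompletionRequest:
    model: str
    messages: List[Message]


def chat_completion_request_to_prompt(request: ChatCompletionRequest) -> str:
    """Render chat messages into a single prompt string.

    Doing this here, instead of in the serving backend, keeps the
    token-level encoding (special tokens, turn delimiters) under the
    stack's control.
    """
    parts = ["<|begin_of_text|>"]
    for msg in request.messages:
        parts.append(
            f"<|start_header_id|>{msg.role}<|end_header_id|>\n\n"
            f"{msg.content}<|eot_id|>"
        )
    # Open the assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)


if __name__ == "__main__":
    req = ChatCompletionRequest(
        model="llama-3",
        messages=[
            Message(role="system", content="You are a helpful assistant."),
            Message(role="user", content="Hello!"),
        ],
    )
    print(chat_completion_request_to_prompt(req))
```

The design upside of this pattern is consistency: every backend that accepts a raw completion prompt sees exactly the same byte sequence for a given chat request, instead of each backend applying its own (possibly divergent) chat template.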