There should be a choke-point for llama3.api imports -- this is the prompt adapter. Creating a ChatFormat() object on demand is inexpensive. The underlying Tokenizer is a singleton anyway.
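A minimal sketch of the pattern the message describes: other modules call a helper in the prompt adapter instead of importing `llama3.api` types directly, so there is exactly one place where those imports live. This assumes `Tokenizer.get_instance()` returns a process-wide cached tokenizer (as the message implies) and that `ChatFormat` accepts the tokenizer in its constructor; the helper name is illustrative, not the repository's actual API.

```python
# prompt_adapter.py -- the single choke-point for llama3.api imports.
# Callers elsewhere in the codebase import helpers from this module
# rather than importing llama3.api themselves.
from llama_models.llama3.api.chat_format import ChatFormat
from llama_models.llama3.api.tokenizer import Tokenizer


def get_default_formatter() -> ChatFormat:
    # Hypothetical helper: constructing ChatFormat on demand is cheap
    # because the expensive state lives in the tokenizer, and
    # Tokenizer.get_instance() hands back a cached singleton rather
    # than reloading the vocabulary on every call.
    return ChatFormat(Tokenizer.get_instance())
```

With this in place, a caller needs no `llama3.api` import of its own; it just requests a formatter from the adapter when it actually needs one.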
Directory contents:

- bedrock
- common
- datasetio
- inference
- kvstore
- memory
- scoring
- telemetry
- __init__.py