All llama3.api imports should go through a single choke-point: the prompt adapter. Creating a ChatFormat object on demand is inexpensive, since the underlying Tokenizer is a singleton anyway.
| File |
|---|
| __init__.py |
| embedding_mixin.py |
| model_registry.py |
| openai_compat.py |
| prompt_adapter.py |
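
A minimal sketch of the choke-point pattern in prompt_adapter.py, assuming the `llama_models.llama3.api` import paths and a `Tokenizer.get_instance()` singleton accessor; the `get_default_formatter` helper is a hypothetical name for illustration, not taken from the repo:

```python
# Sketch: this module is the only place that imports from llama3.api,
# so the rest of the code base never touches those types directly.
#
# Assumption: llama_models.llama3.api provides ChatFormat and a Tokenizer
# with a process-wide singleton accessor (Tokenizer.get_instance()).
from llama_models.llama3.api.chat_format import ChatFormat
from llama_models.llama3.api.tokenizer import Tokenizer


def get_default_formatter() -> ChatFormat:
    # Hypothetical helper name. Building a ChatFormat here is cheap:
    # the expensive state (the tokenizer model) lives in the Tokenizer
    # singleton, so each call only wraps the shared instance in a
    # lightweight object.
    return ChatFormat(Tokenizer.get_instance())
```

Because every caller reaches ChatFormat through this one module, a change in the llama3 API surface touches a single file instead of every provider.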