Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-04 02:03:44 +00:00
# What does this PR do?

Adds a new method, `build_model_alias_with_just_llama_model`, which is needed for cases like Ollama's quantized models that do not have a corresponding Hugging Face repo or an entry in the SKU list.

## Test Plan

`pytest -v -s -m "ollama" llama_stack/providers/tests/inference/test_text_inference.py`

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
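To illustrate the idea, here is a minimal, hypothetical sketch contrasting an alias built by resolving a Hugging Face repo through a SKU list with one built directly from a provider model id plus a Llama model name. The names `ModelAlias`, `_SKU_LIST`, and the exact signatures below are illustrative assumptions, not the repository's actual definitions.

```python
# Hedged sketch (not the actual llama-stack code): shows why a "just the llama
# model" constructor helps for quantized Ollama models with no HF repo / SKU entry.
from dataclasses import dataclass

# Hypothetical stand-in for the SKU-list lookup used by the existing helper.
_SKU_LIST = {
    "meta-llama/Llama-3.1-8B-Instruct": "Llama3.1-8B-Instruct",
}


@dataclass
class ModelAlias:
    provider_model_id: str  # id the provider (e.g. Ollama) uses
    llama_model: str        # canonical Llama model descriptor
    aliases: list[str]      # extra names accepted at the API surface


def build_model_alias(provider_model_id: str, repo: str) -> ModelAlias:
    """Resolve through the HF repo -> SKU mapping; fails for models not in the list."""
    llama_model = _SKU_LIST.get(repo)
    if llama_model is None:
        raise ValueError(f"{repo} has no entry in the SKU list")
    return ModelAlias(provider_model_id, llama_model, aliases=[repo])


def build_model_alias_with_just_llama_model(
    provider_model_id: str, llama_model: str
) -> ModelAlias:
    """Skip the repo/SKU lookup entirely; the caller names the Llama model directly."""
    return ModelAlias(provider_model_id, llama_model, aliases=[])


# Example: a quantized Ollama model with no HF repo or SKU entry.
alias = build_model_alias_with_just_llama_model(
    "llama3.1:8b-instruct-q4_0", "Llama3.1-8B-Instruct"
)
print(alias)
```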
| Name |
|---|
| bedrock |
| databricks |
| fireworks |
| ollama |
| sample |
| tgi |
| together |
| vllm |
| __init__.py |