# What does this PR do?

The NVIDIA Inference provider was using the `ModelRegistryHelper` to map input model ids to provider model ids. This updates it to use the `model_store` instead, so model resolution goes through the models registered with the stack rather than a static mapping.

## Test Plan

`LLAMA_STACK_CONFIG=http://localhost:8321 uv run pytest -v tests/integration/inference/{test_embedding.py,test_text_inference.py,test_openai_completion.py} --embedding-model nvidia/llama-3.2-nv-embedqa-1b-v2 --text-model=meta-llama/Llama-3.1-70B-Instruct`
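A minimal sketch of the change described above, with `Protocol` stubs standing in for the real llama-stack types. The `model_store` attribute and `provider_resource_id` field follow the pattern other llama-stack inference providers use, but the exact names and signatures here are assumptions, not the actual diff:

```python
from typing import Protocol


class Model(Protocol):
    identifier: str
    provider_resource_id: str


class ModelStore(Protocol):
    async def get_model(self, identifier: str) -> Model: ...


class NVIDIAInferenceAdapter:
    # Assumed to be injected by the llama-stack runtime, as with other
    # inference providers.
    model_store: ModelStore

    async def _get_provider_model_id(self, model_id: str) -> str:
        # Before this PR: a static ModelRegistryHelper mapping was consulted.
        # After: look the model up in the model store, so dynamically
        # registered models resolve to their provider model ids as well.
        model = await self.model_store.get_model(model_id)
        return model.provider_resource_id
```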
Files changed:

- `__init__.py`
- `config.py`
- `models.py`
- `NVIDIA.md`
- `nvidia.py`
- `openai_utils.py`
- `utils.py`