# What does this PR do?

The NVIDIA Inference provider was using the `ModelRegistryHelper` to map input model IDs to provider model IDs. This updates it to use the `model_store` instead, so lookups reflect models registered at runtime rather than a static alias table.

## Test Plan

```
LLAMA_STACK_CONFIG=http://localhost:8321 uv run pytest -v tests/integration/inference/{test_embedding.py,test_text_inference.py,test_openai_completion.py} --embedding-model nvidia/llama-3.2-nv-embedqa-1b-v2 --text-model=meta-llama/Llama-3.1-70B-Instruct
```
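For reviewers unfamiliar with the two mechanisms, here is a minimal sketch of the shape of the change. The helper name `_get_provider_model_id` is hypothetical, and the `model_store.get_model` / `provider_resource_id` usage is assumed from common llama-stack provider patterns, not copied from this diff:

```python
# Hedged sketch -- names below are illustrative, not verbatim from the PR.

async def _get_provider_model_id(self, model_id: str) -> str:
    # Old approach (assumed): static alias lookup via ModelRegistryHelper,
    # which only knows about models baked into the provider at build time.
    # return self.get_provider_model_id(model_id)

    # New approach (assumed): resolve through the model store, which
    # tracks models registered against the running stack.
    model = await self.model_store.get_model(model_id)
    return model.provider_resource_id
```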