llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Jiayi Ni deffaa9e4e
fix: fix the error type in embedding test case (#3197)
# What does this PR do?
The embedding integration test cases currently fail because of a mismatch
in the expected error type. This PR fixes the embedding integration tests
by correcting the error type.

## Test Plan

```
pytest -s -v tests/integration/inference/test_embedding.py --stack-config="inference=nvidia" --embedding-model="nvidia/llama-3.2-nv-embedqa-1b-v2" --env NVIDIA_API_KEY={nvidia_api_key} --env NVIDIA_BASE_URL="https://integrate.api.nvidia.com"
```
2025-08-21 16:19:51 -07:00
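The failure mode described above is the common one where a test pins an exception type that differs from what the adapter actually raises. Below is a hedged, self-contained sketch of that pattern using `pytest.raises`; the client class and error name are illustrative stand-ins, not the actual llama-stack test code or NVIDIA adapter API.

```python
# Minimal sketch, assuming a client that raises a specific error type for
# invalid embedding parameters. Names here are hypothetical, not llama-stack's.
import pytest


class BadRequestError(Exception):
    """Stand-in for the client-side error the adapter is expected to raise."""


class FakeEmbeddingClient:
    # Mimics an adapter that rejects an invalid parameter before calling the API.
    def embeddings(self, model, contents, output_dimension=None):
        if output_dimension is not None and output_dimension <= 0:
            raise BadRequestError(f"invalid output_dimension: {output_dimension}")
        return [[0.0] * (output_dimension or 8) for _ in contents]


def test_embedding_rejects_invalid_dimension():
    client = FakeEmbeddingClient()
    # If this test pinned the wrong exception type (say, ValueError), it would
    # fail the way the PR describes; asserting the type the adapter actually
    # raises is the fix.
    with pytest.raises(BadRequestError):
        client.embeddings(
            model="nvidia/llama-3.2-nv-embedqa-1b-v2",
            contents=["hello"],
            output_dimension=0,
        )
```

The same assertion style applies whether the error originates in the remote adapter or in client-side validation; the test simply has to name the exception class that is actually raised.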
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| `config.py` | fix: allow default empty vars for conditionals (#2570) | 2025-07-01 14:42:05 +02:00 |
| `models.py` | ci: test safety with starter (#2628) | 2025-07-09 16:53:50 +02:00 |
| `NVIDIA.md` | docs: update the docs for NVIDIA Inference provider (#3227) | 2025-08-21 15:59:39 -07:00 |
| `nvidia.py` | fix: fix the error type in embedding test case (#3197) | 2025-08-21 16:19:51 -07:00 |
| `openai_utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `utils.py` | chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061) | 2025-08-20 07:15:35 -04:00 |