llama-stack-mirror/llama_stack/providers/remote/inference/nvidia
Latest commit: 13aa367c8a by Matthew Farrellee, 2025-06-30 18:08:44 -07:00
fix: default api_key from env must be a SecretStr (#2565)
# What does this PR do?

Fixes the type of `api_key` when it is read from the environment: the default must be a `SecretStr`, not a plain `str`.

## Test Plan

Run the nvidia template without an `api_key` in run.yaml and perform inference.

Before this change, inference fails with:

```
  File ".../llama-stack/llama_stack/providers/remote/inference/nvidia/nvidia.py", line 118, in _get_client_for_base_url
    api_key=(self._config.api_key.get_secret_value() if self._config.api_key else "NO KEY"),
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get_secret_value'
```
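
For reference, a minimal sketch of the config-side change, assuming a pydantic `SecretStr` field whose default is read from an `NVIDIA_API_KEY` environment variable; the field names, defaults, and description are illustrative, not the exact upstream diff:

```python
import os

from pydantic import BaseModel, Field, SecretStr


class NVIDIAConfig(BaseModel):
    # Illustrative field only; the real config carries more options.
    # Wrapping the environment value in SecretStr keeps .get_secret_value()
    # available downstream; a bare str default causes the AttributeError above.
    api_key: SecretStr | None = Field(
        default_factory=lambda: (
            SecretStr(os.environ["NVIDIA_API_KEY"]) if os.environ.get("NVIDIA_API_KEY") else None
        ),
        description="NVIDIA API key; only needed when using the hosted service.",
    )


# Downstream usage mirroring the client construction shown in the traceback:
config = NVIDIAConfig()
api_key = config.api_key.get_secret_value() if config.api_key else "NO KEY"
```
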
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | add NVIDIA NIM inference adapter (#355) | 2024-11-23 15:59:00 -08:00 |
| config.py | fix: default api_key from env must be a SecretStr (#2565) | 2025-06-30 18:08:44 -07:00 |
| models.py | chore: add meta/llama-3.3-70b-instruct as supported nvidia inference provider model (#1985) | 2025-04-17 06:50:40 -07:00 |
| NVIDIA.md | docs: Add NVIDIA platform distro docs (#1971) | 2025-04-17 05:54:30 -07:00 |
| nvidia.py | chore: remove nested imports (#2515) | 2025-06-26 08:01:05 +05:30 |
| openai_utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| utils.py | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |