Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-14 18:32:37 +00:00)
feat: Add allow_listing_models
- Add an `allow_listing_models` configuration flag to the vLLM provider to control model-listing behavior.
- Implement an `allow_listing_models()` method across all providers, with default implementations in the base classes.
- Prevent HTTP requests to the `/v1/models` endpoint when `allow_listing_models=false`, improving security and performance (see the sketch below this list).
- Fix unit tests to include the `allow_listing_models` method in test classes and mock objects.
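As a rough illustration of the guard described above, a provider's model-listing path might look like the following. This is a sketch only, not llama-stack's actual adapter code: the class names, config dataclass, and use of `httpx` are assumptions; only the config field names come from the documented table.

```python
from dataclasses import dataclass

import httpx  # assumed HTTP client; the real adapter may use a different one


@dataclass
class VLLMProviderConfig:
    """Hypothetical config mirroring the documented fields."""
    url: str
    api_token: str = "fake"
    allow_listing_models: bool = True  # the flag added by this commit


class VLLMInferenceAdapter:
    """Illustrative sketch only; not the project's real adapter."""

    def __init__(self, config: VLLMProviderConfig) -> None:
        self.config = config

    def allow_listing_models(self) -> bool:
        # Exposed as a method so callers can query the flag uniformly
        # across providers.
        return self.config.allow_listing_models

    async def list_models(self) -> list[str]:
        # When listing is disabled, return early so no request is ever
        # sent to the server's /v1/models endpoint.
        if not self.allow_listing_models():
            return []
        async with httpx.AsyncClient() as client:
            resp = await client.get(
                f"{self.config.url}/v1/models",
                headers={"Authorization": f"Bearer {self.config.api_token}"},
            )
            resp.raise_for_status()
            return [m["id"] for m in resp.json().get("data", [])]
```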
This commit is contained in:
parent 188a56af5c
commit e9214f9004

15 changed files with 143 additions and 25 deletions
@@ -20,6 +20,7 @@ Remote vLLM inference provider for connecting to vLLM servers.

| `api_token` | `str \| None` | No | fake | The API token |
| `tls_verify` | `bool \| str` | No | True | Whether to verify TLS certificates. Can be a boolean or a path to a CA certificate file. |
| `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically |
| `allow_listing_models` | `<class 'bool'>` | No | True | Whether to allow listing models from the vLLM server |

## Sample Configuration
@@ -28,4 +29,5 @@ url: ${env.VLLM_URL:=}

```yaml
max_tokens: ${env.VLLM_MAX_TOKENS:=4096}
api_token: ${env.VLLM_API_TOKEN:=fake}
tls_verify: ${env.VLLM_TLS_VERIFY:=true}
allow_listing_models: ${env.VLLM_ALLOW_LISTING_MODELS:=true}
```
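The commit message also mentions updating unit tests and mock objects to carry the new method. A minimal test along those lines could look like the sketch below; it reuses the hypothetical adapter and config from the earlier sketch and is not the project's actual test code.

```python
import asyncio
from unittest.mock import patch


def test_list_models_skips_request_when_listing_disabled():
    # Hypothetical names from the sketch above, not llama-stack's real ones.
    config = VLLMProviderConfig(url="http://localhost:8000",
                                allow_listing_models=False)
    adapter = VLLMInferenceAdapter(config)

    # If the guard works, no HTTP client is ever constructed.
    with patch("httpx.AsyncClient") as mock_client:
        models = asyncio.run(adapter.list_models())

    assert models == []
    mock_client.assert_not_called()
```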