Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-10-06 04:34:57 +00:00
chore: use empty SecretStr values as default
This is preferable to `SecretStr | None` because it centralizes the null handling.

Signed-off-by: Sébastien Han <seb@redhat.com>
parent: c4cb6aa8d9
commit: 4af141292f

51 changed files with 103 additions and 93 deletions
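The pattern behind the change, as a minimal sketch (the class and field names here are illustrative, not the actual provider configs): typing the field as a plain `SecretStr` with an empty default means callers no longer branch on `None`; "not configured" becomes a single truthiness check on the secret value.

```python
from pydantic import BaseModel, Field, SecretStr

# Before: every caller had to handle both None and a real secret.
class ProviderConfigBefore(BaseModel):
    api_key: SecretStr | None = Field(default=None, description="API key")

# After: the field is always a SecretStr; "unset" is just the empty string,
# so the null handling lives in one place.
class ProviderConfigAfter(BaseModel):
    api_key: SecretStr = Field(default=SecretStr(""), description="API key")

cfg = ProviderConfigAfter()
if not cfg.api_key.get_secret_value():
    # single, centralized "no key configured" path (fall back, raise, etc.)
    ...
```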
@@ -17,7 +17,7 @@ AWS S3-based file storage provider for scalable cloud file management with metad
 | `bucket_name` | `<class 'str'>` | No | | S3 bucket name to store files |
 | `region` | `<class 'str'>` | No | us-east-1 | AWS region where the bucket is located |
 | `aws_access_key_id` | `str \| None` | No | | AWS access key ID (optional if using IAM roles) |
-| `aws_secret_access_key` | `pydantic.types.SecretStr \| None` | No | | AWS secret access key (optional if using IAM roles) |
+| `aws_secret_access_key` | `<class 'pydantic.types.SecretStr'>` | No | | AWS secret access key (optional if using IAM roles) |
 | `endpoint_url` | `str \| None` | No | | Custom S3 endpoint URL (for MinIO, LocalStack, etc.) |
 | `auto_create_bucket` | `<class 'bool'>` | No | False | Automatically create the S3 bucket if it doesn't exist |
 | `metadata_store` | `utils.sqlstore.sqlstore.SqliteSqlStoreConfig \| utils.sqlstore.sqlstore.PostgresSqlStoreConfig` | No | sqlite | SQL store configuration for file metadata |
@@ -14,7 +14,7 @@ Anthropic inference provider for accessing Claude models and Anthropic's AI serv
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | API key for Anthropic models |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | API key for Anthropic models |
 ## Sample Configuration
@@ -15,8 +15,8 @@ AWS Bedrock inference provider for accessing various AI models through AWS's man
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
-| `aws_secret_access_key` | `pydantic.types.SecretStr \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
-| `aws_session_token` | `pydantic.types.SecretStr \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
+| `aws_secret_access_key` | `<class 'pydantic.types.SecretStr'>` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
+| `aws_session_token` | `<class 'pydantic.types.SecretStr'>` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
 | `region_name` | `str \| None` | No | | The default AWS Region to use, for example, us-west-1 or us-west-2.Default use environment variable: AWS_DEFAULT_REGION |
 | `profile_name` | `str \| None` | No | | The profile name that contains credentials to use.Default use environment variable: AWS_PROFILE |
 | `total_max_attempts` | `int \| None` | No | | An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. Default use environment variable: AWS_MAX_ATTEMPTS |
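The Bedrock tables above note that each credential falls back to an environment variable (AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, and so on). A sketch of how such a fallback can be written once the field is a plain `SecretStr` rather than `SecretStr | None` (the `resolve_secret` helper below is hypothetical, not the provider's actual code):

```python
import os

from pydantic import SecretStr


def resolve_secret(configured: SecretStr, env_var: str) -> SecretStr:
    """Hypothetical helper: with an empty-string default, "not configured"
    is one truthiness check instead of a None check at every call site."""
    if configured.get_secret_value():
        return configured
    return SecretStr(os.environ.get(env_var, ""))


# e.g. resolve_secret(config.aws_secret_access_key, "AWS_SECRET_ACCESS_KEY")
```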
@@ -16,7 +16,7 @@ Fireworks AI inference provider for Llama models and other AI models on the Fire
 |-------|------|----------|---------|-------------|
 | `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
 | `url` | `<class 'str'>` | No | https://api.fireworks.ai/inference/v1 | The URL for the Fireworks server |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Fireworks.ai API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Fireworks.ai API Key |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Google Gemini inference provider for accessing Gemini models and Google's AI ser
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | API key for Gemini models |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | API key for Gemini models |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Groq inference provider for ultra-fast inference using Groq's LPU technology.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Groq API key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Groq API key |
 | `url` | `<class 'str'>` | No | https://api.groq.com | The URL for the Groq AI server |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Llama OpenAI-compatible provider for using Llama models with OpenAI API format.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Llama API key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Llama API key |
 | `openai_compat_api_base` | `<class 'str'>` | No | https://api.llama.com/compat/v1/ | The URL for the Llama API server |
 ## Sample Configuration
@@ -15,7 +15,7 @@ NVIDIA inference provider for accessing NVIDIA NIM models and AI services.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `<class 'str'>` | No | https://integrate.api.nvidia.com | A base url for accessing the NVIDIA NIM |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The NVIDIA API key, only needed of using the hosted service |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The NVIDIA API key, only needed of using the hosted service |
 | `timeout` | `<class 'int'>` | No | 60 | Timeout for the HTTP requests |
 | `append_api_version` | `<class 'bool'>` | No | True | When set to false, the API version will not be appended to the base_url. By default, it is true. |
@@ -14,7 +14,7 @@ OpenAI inference provider for accessing GPT models and other OpenAI services.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | API key for OpenAI models |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | API key for OpenAI models |
 | `base_url` | `<class 'str'>` | No | https://api.openai.com/v1 | Base URL for OpenAI API |
 ## Sample Configuration
@@ -15,7 +15,7 @@ Passthrough inference provider for connecting to any external inference service
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `<class 'str'>` | No | | The URL for the passthrough endpoint |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | API Key for the passthrouth endpoint |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | API Key for the passthrouth endpoint |
 ## Sample Configuration
@@ -15,7 +15,7 @@ RunPod inference provider for running models on RunPod's cloud GPU platform.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `str \| None` | No | | The URL for the Runpod model serving endpoint |
-| `api_token` | `pydantic.types.SecretStr \| None` | No | | The API token |
+| `api_token` | `<class 'pydantic.types.SecretStr'>` | No | | The API token |
 ## Sample Configuration
@@ -15,7 +15,7 @@ SambaNova inference provider for running models on SambaNova's dataflow architec
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `<class 'str'>` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The SambaNova cloud API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The SambaNova cloud API Key |
 ## Sample Configuration
@@ -16,7 +16,7 @@ Together AI inference provider for open-source models and collaborative AI devel
 |-------|------|----------|---------|-------------|
 | `allowed_models` | `list[str \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed. |
 | `url` | `<class 'str'>` | No | https://api.together.xyz/v1 | The URL for the Together AI server |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Together AI API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Together AI API Key |
 ## Sample Configuration
@@ -16,7 +16,7 @@ Remote vLLM inference provider for connecting to vLLM servers.
 |-------|------|----------|---------|-------------|
 | `url` | `str \| None` | No | | The URL for the vLLM model serving endpoint |
 | `max_tokens` | `<class 'int'>` | No | 4096 | Maximum number of tokens to generate. |
-| `api_token` | `pydantic.types.SecretStr \| None` | No | ********** | The API token |
+| `api_token` | `<class 'pydantic.types.SecretStr'>` | No | | The API token |
 | `tls_verify` | `bool \| str` | No | True | Whether to verify TLS certificates. Can be a boolean or a path to a CA certificate file. |
 | `refresh_models` | `<class 'bool'>` | No | False | Whether to refresh models periodically |
@@ -15,7 +15,7 @@ IBM WatsonX inference provider for accessing AI models on IBM's WatsonX platform
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `<class 'str'>` | No | https://us-south.ml.cloud.ibm.com | A base url for accessing the watsonx.ai |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The watsonx API key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The watsonx API key |
 | `project_id` | `str \| None` | No | | The Project ID key |
 | `timeout` | `<class 'int'>` | No | 60 | Timeout for the HTTP requests |
@@ -15,8 +15,8 @@ AWS Bedrock safety provider for content moderation using AWS's safety services.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `aws_access_key_id` | `str \| None` | No | | The AWS access key to use. Default use environment variable: AWS_ACCESS_KEY_ID |
-| `aws_secret_access_key` | `pydantic.types.SecretStr \| None` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
-| `aws_session_token` | `pydantic.types.SecretStr \| None` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
+| `aws_secret_access_key` | `<class 'pydantic.types.SecretStr'>` | No | | The AWS secret access key to use. Default use environment variable: AWS_SECRET_ACCESS_KEY |
+| `aws_session_token` | `<class 'pydantic.types.SecretStr'>` | No | | The AWS session token to use. Default use environment variable: AWS_SESSION_TOKEN |
 | `region_name` | `str \| None` | No | | The default AWS Region to use, for example, us-west-1 or us-west-2.Default use environment variable: AWS_DEFAULT_REGION |
 | `profile_name` | `str \| None` | No | | The profile name that contains credentials to use.Default use environment variable: AWS_PROFILE |
 | `total_max_attempts` | `int \| None` | No | | An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. Default use environment variable: AWS_MAX_ATTEMPTS |
@@ -15,7 +15,7 @@ SambaNova's safety provider for content moderation and safety filtering.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
 | `url` | `<class 'str'>` | No | https://api.sambanova.ai/v1 | The URL for the SambaNova AI server |
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The SambaNova cloud API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The SambaNova cloud API Key |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Braintrust scoring provider for evaluation and scoring using the Braintrust plat
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `openai_api_key` | `pydantic.types.SecretStr \| None` | No | | The OpenAI API Key |
+| `openai_api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The OpenAI API Key |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Bing Search tool for web search capabilities using Microsoft's search engine.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Bing API key |
 | `top_k` | `<class 'int'>` | No | 3 | |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Brave Search tool for web search capabilities with privacy-focused results.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Brave Search API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Brave Search API Key |
 | `max_results` | `<class 'int'>` | No | 3 | The maximum number of results to return |
 ## Sample Configuration
@@ -14,7 +14,7 @@ Tavily Search tool for AI-optimized web search with structured results.
 | Field | Type | Required | Default | Description |
 |-------|------|----------|---------|-------------|
-| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Tavily Search API Key |
+| `api_key` | `<class 'pydantic.types.SecretStr'>` | No | | The Tavily Search API Key |
 | `max_results` | `<class 'int'>` | No | 3 | The maximum number of results to return |
 ## Sample Configuration
@@ -217,7 +217,7 @@ See [PGVector's documentation](https://github.com/pgvector/pgvector) for more de
 | `port` | `int \| None` | No | 5432 | |
 | `db` | `str \| None` | No | postgres | |
 | `user` | `str \| None` | No | postgres | |
-| `password` | `pydantic.types.SecretStr \| None` | No | mysecretpassword | |
+| `password` | `<class 'pydantic.types.SecretStr'>` | No | ********** | |
 | `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig, annotation=NoneType, required=False, default='sqlite', discriminator='type'` | No | | Config for KV store backend (SQLite only for now) |
 ## Sample Configuration
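A note on the masked `**********` defaults that appear in a couple of these tables (for example PGVector's `password` above): Pydantic masks a non-empty `SecretStr` in its `str()`/`repr()`, so a generated docs table presumably shows the mask rather than the literal default, while an empty `SecretStr` prints as an empty string, which matches the new empty defaults. A small demonstration:

```python
from pydantic import SecretStr

password = SecretStr("mysecretpassword")
print(password)                     # ********** (masked in str/repr)
print(password.get_secret_value())  # mysecretpassword (explicit access only)

print(SecretStr(""))                # empty secret: prints an empty string
```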