Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-26 22:19:49 +00:00
This flips #2823 and #2805: the Stack now periodically queries providers for their models, rather than providers going behind its back and calling "register" on the registry themselves. It also adds model-listing support for all other providers via `ModelRegistryHelper`. Once this lands, models no longer need to be manually listed or registered in `run.yaml`, which removes both noise and annoyance (for example, setting `INFERENCE_MODEL` environment variables) from the new-user experience. In addition, it adds a configuration variable, `allowed_models`, which can be used to optionally restrict the set of models a provider exposes.
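The listing-and-filtering behavior described above can be sketched as a small helper. This is a hypothetical illustration, not the actual llama-stack code: the function name and signature are assumptions, but it captures the stated semantics of `allowed_models` (when it is `None`, every model the provider reports is exposed; otherwise only the listed models are).

```python
from typing import Optional


def filter_models(
    provider_models: list[str],
    allowed_models: Optional[list[str]],
) -> list[str]:
    """Return the subset of a provider's models that should be registered.

    Mirrors the documented `allowed_models` semantics: if the option is
    None, all models are allowed; otherwise only models on the list pass.
    """
    if allowed_models is None:
        return provider_models
    allowed = set(allowed_models)
    return [m for m in provider_models if m in allowed]
```

In the new flow, the Stack would periodically call each provider's model-listing endpoint and apply a filter like this before updating the registry, instead of each provider pushing registrations itself.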
remote::fireworks
Description
Fireworks AI inference provider for Llama models and other AI models on the Fireworks platform.
Configuration
Field | Type | Required | Default | Description
---|---|---|---|---
`allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If None, all models are allowed.
`url` | `<class 'str'>` | No | https://api.fireworks.ai/inference/v1 | The URL for the Fireworks server
`api_key` | `pydantic.types.SecretStr \| None` | No | | The Fireworks.ai API Key
Sample Configuration
```yaml
url: https://api.fireworks.ai/inference/v1
api_key: ${env.FIREWORKS_API_KEY}
```
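With the `allowed_models` option described above, a provider entry in `run.yaml` might look like the following. This is a sketch, not verbatim from the docs: the surrounding `providers`/`inference` nesting and the example model ID are assumptions for illustration.

```yaml
providers:
  inference:
  - provider_id: fireworks
    provider_type: remote::fireworks
    config:
      url: https://api.fireworks.ai/inference/v1
      api_key: ${env.FIREWORKS_API_KEY}
      # Optional: restrict which of the provider's models are registered.
      # Omit (or set to null) to expose everything the provider lists.
      allowed_models:
      - accounts/fireworks/models/llama-v3p1-8b-instruct  # example model ID
```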