# remote::together

## Description

Together AI inference provider for open-source models and collaborative AI development.

## Configuration

| Field | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `allowed_models` | `list[str] \| None` | No | | List of models that should be registered with the model registry. If `None`, all models are allowed. |
| `url` | `str` | No | `https://api.together.xyz/v1` | The URL for the Together AI server. |
| `api_key` | `pydantic.types.SecretStr \| None` | No | | The Together AI API key. |
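For instance, `allowed_models` can be combined with the other fields to restrict which Together models the Stack exposes. The sketch below assumes the field takes a list of Together model IDs; the specific IDs are illustrative, not a vetted list:

```yaml
# Sketch: limit the model registry to a couple of Together-hosted models.
# The model IDs below are illustrative assumptions; use the IDs Together actually serves.
url: https://api.together.xyz/v1
api_key: ${env.TOGETHER_API_KEY}
allowed_models:
- meta-llama/Llama-3.3-70B-Instruct-Turbo
- meta-llama/Llama-3.1-8B-Instruct-Turbo
```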

## Sample Configuration

```yaml
url: https://api.together.xyz/v1
api_key: ${env.TOGETHER_API_KEY}
```
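To see the snippet in context, here is a sketch of how it might sit inside a distribution's `run.yaml`; the surrounding keys (`providers`, `provider_id`, `provider_type`) follow the usual Llama Stack run-config layout and are assumptions here, not part of this file:

```yaml
# Sketch of a run.yaml inference provider entry wrapping the sample config above.
providers:
  inference:
  - provider_id: together
    provider_type: remote::together
    config:
      url: https://api.together.xyz/v1
      api_key: ${env.TOGETHER_API_KEY}
```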