# What does this PR do?
The client output format has changed, so the output shown in the docs is now incorrect:
458e20702b/src/llama_stack_client/lib/cli/models/models.py (L52)
Per the previous discussion in
https://github.com/meta-llama/llama-stack/pull/1348#pullrequestreview-2654971315
there is no need to maintain this output, so remove it.
## Test Plan
Signed-off-by: reidliu <reid201711@gmail.com>
Co-authored-by: reidliu <reid201711@gmail.com>
# Remote-Hosted Distributions
Remote-hosted distributions are publicly available endpoints serving the Llama Stack API that you can connect to directly.
| Distribution | Endpoint | Inference | Agents | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|
| Together | https://llama-stack.together.ai | remote::together | meta-reference | remote::weaviate | meta-reference | meta-reference |
| Fireworks | https://llamastack-preview.fireworks.ai | remote::fireworks | meta-reference | remote::weaviate | meta-reference | meta-reference |
## Connecting to Remote-Hosted Distributions
You can use the `llama-stack-client` CLI to interact with these endpoints. For example, to list the available models served by the Fireworks endpoint:
```bash
$ pip install llama-stack-client
$ llama-stack-client configure --endpoint https://llamastack-preview.fireworks.ai
$ llama-stack-client models list
```
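If you prefer to query the endpoint programmatically, the Python SDK exposes the same operation. The snippet below is a minimal sketch, assuming the `llama-stack-client` package is installed and the Fireworks endpoint above is reachable; the exact fields on the returned model objects may vary by client version.

```python
# Minimal sketch: list models from a remote-hosted distribution
# using the llama-stack-client Python SDK (pip install llama-stack-client).
from llama_stack_client import LlamaStackClient

# Point the client at the remote-hosted endpoint instead of a local server.
client = LlamaStackClient(base_url="https://llamastack-preview.fireworks.ai")

# List the models served by this distribution and print their identifiers
# (field name assumed here; inspect the returned objects for your client version).
for model in client.models.list():
    print(model.identifier)
```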
Check out the llama-stack-client-python repo for more details on how to use the `llama-stack-client` CLI. Check out llama-stack-apps for example applications built on top of Llama Stack.