From 40cd48fa0917860ba0a2500efaf862cdc7861d73 Mon Sep 17 00:00:00 2001
From: Reid <61492567+reidliu41@users.noreply.github.com>
Date: Sat, 8 Mar 2025 02:39:33 +0800
Subject: [PATCH] chore: remove the incorrect output (#1472)

# What does this PR do?

The `llama-stack-client models list` output format changed in the client (see
https://github.com/meta-llama/llama-stack-client-python/blob/458e20702b5aa8f435ac5ce114fee9252b751d25/src/llama_stack_client/lib/cli/models/models.py#L52),
so the sample output shown in this doc is now incorrect. Per the earlier discussion in
https://github.com/meta-llama/llama-stack/pull/1348#pullrequestreview-2654971315,
there is no need to keep maintaining this output in the docs, so remove it.

## Test Plan

Docs-only change: the stale sample output is removed from
`docs/source/distributions/remote_hosted_distro/index.md`; no code paths are affected.

Signed-off-by: reidliu
Co-authored-by: reidliu
---
 .../remote_hosted_distro/index.md | 22 ----------------------
 1 file changed, 22 deletions(-)

diff --git a/docs/source/distributions/remote_hosted_distro/index.md b/docs/source/distributions/remote_hosted_distro/index.md
index 2fbe381af..ef5a83d8a 100644
--- a/docs/source/distributions/remote_hosted_distro/index.md
+++ b/docs/source/distributions/remote_hosted_distro/index.md
@@ -17,26 +17,4 @@
 $ llama-stack-client configure --endpoint https://llamastack-preview.fireworks.ai
 $ llama-stack-client models list
 ```
-You will see outputs:
-```
-$ llama-stack-client models list
-+------------------------------+------------------------------+---------------+------------+
-| identifier                   | llama_model                  | provider_id   | metadata   |
-+==============================+==============================+===============+============+
-| Llama3.1-8B-Instruct         | Llama3.1-8B-Instruct         | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.1-70B-Instruct        | Llama3.1-70B-Instruct        | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.1-405B-Instruct       | Llama3.1-405B-Instruct       | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.2-1B-Instruct         | Llama3.2-1B-Instruct         | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.2-3B-Instruct         | Llama3.2-3B-Instruct         | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.2-11B-Vision-Instruct | Llama3.2-11B-Vision-Instruct | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-| Llama3.2-90B-Vision-Instruct | Llama3.2-90B-Vision-Instruct | fireworks0    | {}         |
-+------------------------------+------------------------------+---------------+------------+
-```
-
 Checkout the [llama-stack-client-python](https://github.com/meta-llama/llama-stack-client-python/blob/main/docs/cli_reference.md) repo for more details on how to use the `llama-stack-client` CLI. Checkout [llama-stack-app](https://github.com/meta-llama/llama-stack-apps/tree/main) for examples applications built on top of Llama Stack.