minor typo and HuggingFace -> Hugging Face (#113)

Mark Sze 2024-09-27 02:48:23 +10:00 committed by GitHub
parent 3ae1597b9b
commit 3c99f08267
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
4 changed files with 7 additions and 7 deletions


@@ -3,7 +3,7 @@
The `llama` CLI tool helps you setup and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.
### Subcommands
-1. `download`: `llama` cli tools supports downloading the model from Meta or HuggingFace.
+1. `download`: `llama` cli tools supports downloading the model from Meta or Hugging Face.
2. `model`: Lists available models and their properties.
3. `stack`: Allows you to build and run a Llama Stack server. You can read more about this [here](/docs/cli_reference.md#step-3-building-configuring-and-running-llama-stack-servers).
@@ -38,7 +38,7 @@ You should see a table like this:
<pre style="font-family: monospace;">
+----------------------------------+------------------------------------------+----------------+
-| Model Descriptor | HuggingFace Repo | Context Length |
+| Model Descriptor | Hugging Face Repo | Context Length |
+----------------------------------+------------------------------------------+----------------+
| Llama3.1-8B | meta-llama/Llama-3.1-8B | 128K |
+----------------------------------+------------------------------------------+----------------+
@@ -112,7 +112,7 @@ llama download --source meta --model-id Prompt-Guard-86M --meta-url META_URL
llama download --source meta --model-id Llama-Guard-3-8B --meta-url META_URL
```
-#### Downloading from [Huggingface](https://huggingface.co/meta-llama)
+#### Downloading from [Hugging Face](https://huggingface.co/meta-llama)
Essentially, the same commands above work, just replace `--source meta` with `--source huggingface`.
@@ -180,7 +180,7 @@ llama model describe -m Llama3.2-3B-Instruct
+-----------------------------+----------------------------------+
| Model | Llama3.2-3B-Instruct |
+-----------------------------+----------------------------------+
-| HuggingFace ID | meta-llama/Llama-3.2-3B-Instruct |
+| Hugging Face ID | meta-llama/Llama-3.2-3B-Instruct |
+-----------------------------+----------------------------------+
| Description | Llama 3.2 3b instruct model |
+-----------------------------+----------------------------------+
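
The `--source huggingface` path above needs a Hugging Face token for the gated meta-llama repos. As a rough illustration only (not part of this commit), the same repo can also be pulled programmatically, assuming `huggingface_hub` is installed and `HF_TOKEN` is set; `llama download` remains the documented path:

```
# Illustrative sketch, not from this commit. Pulls the repo listed in the table
# above via huggingface_hub; assumes the package is installed and HF_TOKEN holds
# a token with access to the gated meta-llama repos.
import os

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="meta-llama/Llama-3.1-8B",
    token=os.environ.get("HF_TOKEN"),
)
print(f"Model files downloaded to {local_dir}")
```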


@@ -51,7 +51,7 @@ class ModelDescribe(Subcommand):
colored("Model", "white", attrs=["bold"]),
colored(model.descriptor(), "white", attrs=["bold"]),
),
("HuggingFace ID", model.huggingface_repo or "<Not Available>"),
("Hugging Face ID", model.huggingface_repo or "<Not Available>"),
("Description", model.description),
("Context Length", f"{model.max_seq_length // 1024}K tokens"),
("Weights format", model.quantization_format.value),


@@ -36,7 +36,7 @@ class ModelList(Subcommand):
def _run_model_list_cmd(self, args: argparse.Namespace) -> None:
headers = [
"Model Descriptor",
"HuggingFace Repo",
"Hugging Face Repo",
"Context Length",
]
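
The hunk above only renames a header for `llama model list`. For context, a rough sketch (not part of this commit) of how rows matching those headers could be assembled, reusing the attribute names visible in the `ModelDescribe` hunk; `models` stands in for whatever iterable the real command walks, which is not shown here:

```
# Illustrative sketch, not from this commit. Builds rows matching the headers
# above; `models` is a stand-in for the real command's model registry, and the
# attribute names mirror those shown in the ModelDescribe hunk.
def build_rows(models):
    return [
        (
            m.descriptor(),
            m.huggingface_repo or "<Not Available>",
            f"{m.max_seq_length // 1024}K",
        )
        for m in models
    ]
```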


@@ -20,7 +20,7 @@ from llama_stack.inference.meta_reference.inference import get_provider_impl
MODEL = "Llama3.1-8B-Instruct"
HELPER_MSG = """
This test needs llama-3.1-8b-instruct models.
-Please donwload using the llama cli
+Please download using the llama cli
llama download --source huggingface --model-id llama3_1_8b_instruct --hf-token <HF_TOKEN>
"""