chore: Make README code blocks more easily copy pastable (#1420)

# What does this PR do?
When going through the READMEs, I found that I had to keep editing the code
blocks because the commands were prefixed with `$ `. A common pattern is to triple
click (highlight all) a block and then copy paste it. Removing the `$ ` prefix
makes this easier for folks following the READMEs.
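
For example (using a snippet from the diff below), a block that previously read:

```bash
$ cd llama-stack
$ uv sync --extra dev
```

can now be triple-clicked and pasted directly:

```bash
cd llama-stack
uv sync --extra dev
```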

[//]: # (If resolving an issue, uncomment and update the line below)
[//]: # (Closes #[issue-number])

## Test Plan
N/A

[//]: # (## Documentation)
Authored by Ellis Tarn on 2025-03-05 09:11:01 -08:00; committed by GitHub
parent 3fabe076cd
commit 24a27baf7c
3 changed files with 26 additions and 26 deletions


@@ -64,10 +64,10 @@ You can install `uv` by following this [guide](https://docs.astral.sh/uv/getting
 You can install the dependencies by running:
 
 ```bash
-$ cd llama-stack
-$ uv sync --extra dev
-$ uv pip install -e .
-$ source .venv/bin/activate
+cd llama-stack
+uv sync --extra dev
+uv pip install -e .
+source .venv/bin/activate
 ```
 
 Note that you can create a dotenv file `.env` that includes necessary environment variables:
@@ -80,7 +80,7 @@ LLAMA_STACK_CONFIG=
 And then use this dotenv file when running client SDK tests via the following:
 
 ```bash
-$ uv run --env-file .env -- pytest -v tests/api/inference/test_text_inference.py
+uv run --env-file .env -- pytest -v tests/api/inference/test_text_inference.py
 ```
 
 ## Pre-commit Hooks
@@ -88,7 +88,7 @@ $ uv run --env-file .env -- pytest -v tests/api/inference/test_text_inference.py
 We use [pre-commit](https://pre-commit.com/) to run linting and formatting checks on your code. You can install the pre-commit hooks by running:
 
 ```bash
-$ uv run pre-commit install
+uv run pre-commit install
 ```
 
 After that, pre-commit hooks will run automatically before each commit.
@@ -96,7 +96,7 @@ After that, pre-commit hooks will run automatically before each commit.
 Alternatively, if you don't want to install the pre-commit hooks, you can run the checks manually by running:
 
 ```bash
-$ uv run pre-commit run --all-files
+uv run pre-commit run --all-files
 ```
 
 > [!CAUTION]
@@ -107,8 +107,8 @@ $ uv run pre-commit run --all-files
 To add a new dependency to the project, you can use the `uv` command. For example, to add `foo` to the project, you can run:
 
 ```bash
-$ uv add foo
-$ uv sync
+uv add foo
+uv sync
 ```
 
 ## Coding Style
@@ -127,11 +127,11 @@ Building a stack image (conda / docker) will use the production version of the `
 Example:
 
 ```bash
-$ cd work/
-$ git clone https://github.com/meta-llama/llama-stack.git
-$ git clone https://github.com/meta-llama/llama-stack-client-python.git
-$ cd llama-stack
-$ LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --template <...>
+cd work/
+git clone https://github.com/meta-llama/llama-stack.git
+git clone https://github.com/meta-llama/llama-stack-client-python.git
+cd llama-stack
+LLAMA_STACK_DIR=$(pwd) LLAMA_STACK_CLIENT_DIR=../llama-stack-client-python llama stack build --template <...>
 ```
@@ -144,14 +144,14 @@ If you have made changes to a provider's configuration in any form (introducing
 If you are making changes to the documentation at [https://llama-stack.readthedocs.io/en/latest/](https://llama-stack.readthedocs.io/en/latest/), you can use the following command to build the documentation and preview your changes. You will need [Sphinx](https://www.sphinx-doc.org/en/master/) and the readthedocs theme.
 
 ```bash
-$ cd llama-stack/docs
-$ uv sync --extra docs
+cd llama-stack/docs
+uv sync --extra docs
 
 # This rebuilds the documentation pages.
-$ uv run make html
+uv run make html
 
 # This will start a local server (usually at http://127.0.0.1:8000) that automatically rebuilds and refreshes when you make changes to the documentation.
-$ uv run sphinx-autobuild source build/html --write-all
+uv run sphinx-autobuild source build/html --write-all
 ```
 
 ### Update API Documentation
@@ -159,8 +159,8 @@ $ uv run sphinx-autobuild source build/html --write-all
 If you modify or add new API endpoints, update the API documentation accordingly. You can do this by running the following command:
 
 ```bash
-$ uv sync --extra dev
-$ uv run ./docs/openapi_generator/run_openapi_generator.sh
+uv sync --extra dev
+uv run ./docs/openapi_generator/run_openapi_generator.sh
 ```
 
 The generated API documentation will be available in `docs/_static/`. Make sure to review the changes before committing.


@@ -17,7 +17,7 @@ llama stack run together
 2. (Optional) Register datasets and eval tasks as resources. If you want to run pre-configured evaluation flows (e.g. Evaluations (Generation + Scoring) Page).
 
 ```bash
-$ llama-stack-client datasets register \
+llama-stack-client datasets register \
 --dataset-id "mmlu" \
 --provider-id "huggingface" \
 --url "https://huggingface.co/datasets/llamastack/evals" \
@@ -26,7 +26,7 @@ $ llama-stack-client datasets register \
 ```
 
 ```bash
-$ llama-stack-client benchmarks register \
+llama-stack-client benchmarks register \
 --eval-task-id meta-reference-mmlu \
 --provider-id meta-reference \
 --dataset-id mmlu \


@@ -20,10 +20,10 @@ dependencies. Below is the full configuration:
 ```bash
-$ cd llama-stack
-$ uv sync --extra dev --extra test
-$ uv pip install -e .
-$ source .venv/bin/activate
+cd llama-stack
+uv sync --extra dev --extra test
+uv pip install -e .
+source .venv/bin/activate
 ```
 
 ## Common options