I got this error message and noticed a typo in it: it directed the user
to run `llama stack build first`, which is not a valid command.
Signed-off-by: Russell Bryant <rbryant@redhat.com>
I got this error message and tried to run the command it presented, but
it didn't work. The model needs to be given with `--model-id` instead of
as a positional argument.
Signed-off-by: Russell Bryant <rbryant@redhat.com>
The first time I ran `llama stack build`, I quickly hit enter at the
first prompt asking for a name, assuming it would use the default
given in the help text. This caused a failure later on that wasn't
very obvious. I was using the `docker` format, and the blank name produced
an invalid image tag that caused the image build to fail.
This change adds validation for the `name` parameter to ensure it's
not empty before proceeding.
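The validation itself is small; a minimal sketch of the idea in Python (a
hypothetical standalone helper, not necessarily the exact code in this
change):

```
def validate_name(name: str) -> str:
    # A blank name produces an invalid image tag when building with the
    # docker format, so reject it up front instead of failing later.
    name = name.strip()
    if not name:
        raise ValueError("Name cannot be empty, please enter a name")
    return name
```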
Signed-off-by: Russell Bryant <rbryant@redhat.com>
Test Plan:
First, start a TGI container with the `meta-llama/Llama-Guard-3-8B` model
serving on port 5099. See https://github.com/meta-llama/llama-stack/pull/53 and its
description for details on how to do that.
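Before moving on, it can be worth confirming that the container is actually
serving. A minimal sketch, assuming TGI's standard `/info` endpoint and the
port used above:

```
import requests

# Ask the TGI server which model it is serving (assumes TGI's /info endpoint).
info = requests.get("http://localhost:5099/info", timeout=5).json()
print("TGI is serving:", info.get("model_id"))
```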
Then run llama-stack with the following run config:
```
image_name: safety
docker_image: null
conda_env: safety
apis_to_serve:
- models
- inference
- shields
- safety
api_providers:
  inference:
    providers:
    - remote::tgi
  safety:
    providers:
    - meta-reference
  telemetry:
    provider_id: meta-reference
    config: {}
routing_table:
  inference:
  - provider_id: remote::tgi
    config:
      url: http://localhost:5099
      api_token: null
      hf_endpoint_name: null
    routing_key: Llama-Guard-3-8B
  safety:
  - provider_id: meta-reference
    config:
      llama_guard_shield:
        model: Llama-Guard-3-8B
        excluded_categories: []
        disable_input_check: false
        disable_output_check: false
      prompt_guard_shield: null
    routing_key: llama_guard
```
Now simply run `python -m llama_stack.apis.safety.client localhost
<port>` and check that the llama_guard shield calls run correctly. (The
injection_shield calls fail as expected since we have not set up a
router for them.)
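For reference, the check boils down to sending chat messages to the safety
API's shield endpoint and inspecting the response. A rough sketch of an
equivalent raw request (the endpoint path and payload shape here are
assumptions; the client module above is the authoritative version):

```
import requests

# Hypothetical direct request approximating what the safety client does;
# replace <port> with the port the stack is serving on, as in the command above.
resp = requests.post(
    "http://localhost:<port>/safety/run_shield",
    json={
        "shield_type": "llama_guard",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
    },
)
print(resp.status_code, resp.json())
```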