Updated Test Chat Completion section for step 4 of method 3

This commit is contained in:
Omar Abdelwahab 2025-10-08 13:54:59 -07:00
parent 2f3dfee535
commit 4afa23dbff


@@ -322,19 +322,30 @@ llama-stack-client providers list
 ```
 #### Test Chat Completion
+Verify with the client (recommended):
 ```bash
-# Basic HTTP test
+# Verify providers are configured correctly (recommended first step)
+uv run --with llama-stack-client llama-stack-client providers list
+
+# Test chat completion using the client
+uv run --with llama-stack-client llama-stack-client inference chat-completion \
+  --model llama3.1:8b \
+  --message "Hello!"
+
+# Alternative if you have llama-stack-client installed
+llama-stack-client providers list
+llama-stack-client inference chat-completion \
+  --model llama3.1:8b \
+  --message "Hello!"
+
+# Or using basic HTTP test
 curl -X POST http://localhost:8321/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{
     "model": "llama3.1:8b",
     "messages": [{"role": "user", "content": "Hello!"}]
   }'
-# Or using the client (more robust)
-uv run --with llama-stack-client llama-stack-client inference chat-completion \
-  --model llama3.1:8b \
-  --message "Hello!"
 ```
 ## Configuration Management
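For reference, the HTTP test in this section can also be scripted instead of pasted into a shell. The sketch below uses only the Python standard library and the endpoint, headers, and model name shown in the curl command above; the `build_chat_request` and `chat_completion` helper names are illustrative, not part of any Llama Stack API, and the actual POST assumes a server is already running on `localhost:8321`.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8321"  # Llama Stack server address from the curl test

def build_chat_request(model: str, message: str) -> bytes:
    """Encode the same JSON body the curl command sends to /v1/chat/completions."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": message}],
    }
    return json.dumps(payload).encode("utf-8")

def chat_completion(model: str, message: str) -> dict:
    """POST a chat-completion request and decode the JSON reply.

    Requires the Llama Stack server to be up; not invoked below.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=build_chat_request(model, message),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Show the request body that would be sent (matches the curl -d payload)
print(build_chat_request("llama3.1:8b", "Hello!").decode())
```

Checking the printed body against the `-d` payload in the curl command is a quick way to confirm the two tests exercise the same endpoint the same way.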