# What does this PR do?
- As the title says; the client SDK tests are updated since the APIs have been updated.
## Test Plan
```
LLAMA_STACK_BASE_URL="http://localhost:5000" pytest -v tests/client-sdk/
```
# What does this PR do?
- Fixes the client SDK tests.
## Test Plan
```
LLAMA_STACK_BASE_URL="http://localhost:5000" pytest -v tests/client-sdk/inference/test_inference.py
```
<img width="1359" alt="image"
src="https://github.com/user-attachments/assets/a720e0ca-c441-465e-bc6b-9b98091afa23"
/>
Summary:
Part of https://github.com/meta-llama/llama-stack/issues/651
We are adding more tests to the client SDK for some basic coverage.
These tests are inspired by the inference provider tests.
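For context, each of these is a thin pytest function against the client SDK. A minimal sketch of the shape (the fixture and exact parameter names are illustrative, not the actual test code):
```python
import os

import pytest
from llama_stack_client import LlamaStackClient


@pytest.fixture
def client():
    # The tests run against a live stack; where it lives comes from the env.
    return LlamaStackClient(
        base_url=os.environ.get("LLAMA_STACK_BASE_URL", "http://localhost:5000")
    )


def test_text_chat_completion_non_streaming(client):
    response = client.inference.chat_completion(
        model_id="meta-llama/Llama-3.1-8B-Instruct",
        messages=[
            {"role": "user", "content": "What are the names of planets in our solar system?"}
        ],
        stream=False,
    )
    assert "Earth" in response.completion_message.content
```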
Test Plan:
Run the tests via the following command:
```
LLAMA_STACK_CONFIG=llama_stack/templates/fireworks/run.yaml pytest tests/client-sdk/inference -v
```
Example output
```
tests/client-sdk/inference/test_inference.py::test_completion_non_streaming PASSED [ 7%]
tests/client-sdk/inference/test_inference.py::test_completion_streaming PASSED [ 14%]
tests/client-sdk/inference/test_inference.py::test_completion_log_probs_non_streaming SKIPPED (Needs to be fixed) [ 21%]
tests/client-sdk/inference/test_inference.py::test_completion_log_probs_streaming SKIPPED (Needs to be fixed) [ 28%]
tests/client-sdk/inference/test_inference.py::test_completion_structured_output PASSED [ 35%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_non_streaming[What are the names of planets in our solar system?-Earth] PASSED [ 42%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_non_streaming[What are the names of the planets that have rings around them?-Saturn] PASSED [ 50%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_streaming[What's the name of the Sun in latin?-Sol] PASSED [ 57%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_streaming[What is the name of the US captial?-Washington] PASSED [ 64%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_with_tool_calling_and_non_streaming PASSED [ 71%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_with_tool_calling_and_streaming PASSED [ 78%]
tests/client-sdk/inference/test_inference.py::test_text_chat_completion_structured_output PASSED [ 85%]
tests/client-sdk/inference/test_inference.py::test_image_chat_completion_non_streaming PASSED [ 92%]
```
# What does this PR do?
This is a long-pending change and particularly important to get done
now.
Specifically:
- We cannot "localize" (i.e., download) URLs from media attachments anywhere
near our modeling code; that must be done within llama-stack.
- `PIL.Image` is infesting all our APIs via `ImageMedia ->
InterleavedTextMedia`, and that cannot be right. Anything in the API surface
must be "naturally serializable". We need a standard `{ type: "image",
image_url: "<...>" }` shape, which is more extensible (see the sketch after
this list).
- `UserMessage`, `SystemMessage`, etc. are moved completely to
llama-stack from the llama-models repository.
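To make the serialization point concrete, a user message carrying an image then becomes plain, JSON-serializable data. A rough illustration based on the shape quoted above (exact field names may differ in the final API):
```python
# Illustrative only: a "naturally serializable" message mixing image and text
# content, with no PIL.Image objects anywhere in the API surface.
message = {
    "role": "user",
    "content": [
        {"type": "image", "image_url": "https://example.com/some-image.png"},
        {"type": "text", "text": "What is shown in this image?"},
    ],
}
```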
See https://github.com/meta-llama/llama-models/pull/244 for the
corresponding PR in llama-models.
## Test Plan
```bash
cd llama_stack/providers/tests
pytest -s -v -k "fireworks or ollama or together" inference/test_vision_inference.py
pytest -s -v -k "(fireworks or ollama or together) and llama_3b" inference/test_text_inference.py
pytest -s -v -k chroma memory/test_memory.py \
--env EMBEDDING_DIMENSION=384 --env CHROMA_DB_PATH=/tmp/foobar
pytest -s -v -k fireworks agents/test_agents.py \
--safety-shield=meta-llama/Llama-Guard-3-8B \
--inference-model=meta-llama/Llama-3.1-8B-Instruct
```
Updated the client SDK (see PR ...), installed it in the same environment,
and then ran the SDK tests:
```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=together pytest -s -v agents/test_agents.py
LLAMA_STACK_CONFIG=ollama pytest -s -v memory/test_memory.py
# this one needed a bit of hacking in the run.yaml to ensure I could register the vision model correctly
INFERENCE_MODEL=llama3.2-vision:latest LLAMA_STACK_CONFIG=ollama pytest -s -v inference/test_inference.py
```
# What does this PR do?
**Why**
- Clean up examples that we will not maintain; reduce the surface area to
the minimal showcases
**What**
- Delete `client.py` in `/apis/*`
- Move all scripts to unit tests (see the sketch after the side notes)
- Future SDK syncs will then just require running pytest
**Side notes**
- `bwrap` is not available on macOS, so `code_interpreter` will not work there
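As an illustration of the scripts-to-tests move, a demo that used to live in a `client.py` script becomes an assertable pytest case along these lines (a hypothetical example, not the actual migrated code):
```python
# Before: apis/<api>/client.py printed results from an ad-hoc main().
# After: the same call is a test that pytest can run and assert on.
import os

from llama_stack_client import LlamaStackClient


def test_list_models():
    client = LlamaStackClient(base_url=os.environ["LLAMA_STACK_BASE_URL"])
    models = client.models.list()
    # Every distribution should expose at least one registered model.
    assert len(models) > 0
```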
## Test Plan
```
LLAMA_STACK_BASE_URL=http://localhost:5000 pytest -v ./tests/client-sdk
```
<img width="725" alt="image"
src="https://github.com/user-attachments/assets/36bfe537-628d-43c3-8479-dcfcfe2e4035"
/>