llama-stack-mirror/tests/client-sdk/inference/test_inference.py
Ashwin Bharambe 8de8eb03c8
Update the "InterleavedTextMedia" type (#635)
## What does this PR do?

This is a long-pending change and particularly important to get done
now.

Specifically:
- we cannot "localize" (aka download) any URLs from media attachments
anywhere near our modeling code. it must be done within llama-stack.
- `PIL.Image` is infesting all our APIs via `ImageMedia ->
InterleavedTextMedia`, and that cannot be right. Anything in the
API surface must be "naturally serializable". We need a standard `{
type: "image", image_url: "<...>" }` shape, which is more extensible
(a sketch follows this list).
- `UserMessage`, `SystemMessage`, etc. are moved completely to
llama-stack from the llama-models repository.
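
For illustration, here is a minimal sketch of the new serializable content shape as exercised by the SDK test in this file. The field names (`data`/`uri` rather than `image_url`) mirror the test below and are an assumption, not the final schema:

```python
# Sketch of a user message using the new serializable content items.
# The image is referenced by URL; llama-stack (not the modeling code)
# is responsible for downloading it. Field names mirror the SDK test
# in this file and may differ from the final schema.
message = {
    "role": "user",
    "content": [
        {
            "type": "image",
            "data": {"uri": "https://example.com/dog.jpg"},  # hypothetical URL
        },
        {
            "type": "text",
            "text": "Describe what is in this image.",
        },
    ],
}
```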

See https://github.com/meta-llama/llama-models/pull/244 for the
corresponding PR in llama-models.

## Test Plan

```bash
cd llama_stack/providers/tests

pytest -s -v -k "fireworks or ollama or together" inference/test_vision_inference.py
pytest -s -v -k "(fireworks or ollama or together) and llama_3b" inference/test_text_inference.py
pytest -s -v -k chroma memory/test_memory.py \
  --env EMBEDDING_DIMENSION=384 --env CHROMA_DB_PATH=/tmp/foobar

pytest -s -v -k fireworks agents/test_agents.py  \
   --safety-shield=meta-llama/Llama-Guard-3-8B \
   --inference-model=meta-llama/Llama-3.1-8B-Instruct
```

Updated the client SDK (see PR ...), installed the SDK in the same
environment, and then ran the SDK tests:

```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=together pytest -s -v agents/test_agents.py
LLAMA_STACK_CONFIG=ollama pytest -s -v memory/test_memory.py

# this one needed a bit of hacking in the run.yaml to register the vision model correctly (see the sketch after this block)
INFERENCE_MODEL=llama3.2-vision:latest LLAMA_STACK_CONFIG=ollama pytest -s -v inference/test_inference.py
```
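
For reference, the run.yaml hack mentioned above amounted to mapping the Ollama tag onto a registered model identifier. A rough sketch of what such a stanza can look like; the key names here are assumptions, not copied from the actual config:

```yaml
# Hypothetical run.yaml snippet: register the Ollama vision model
# under a canonical identifier. Key names are assumptions.
models:
  - model_id: meta-llama/Llama-3.2-11B-Vision-Instruct
    provider_id: ollama
    provider_model_id: llama3.2-vision:latest
```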
2024-12-17 11:18:31 -08:00


# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.
import pytest
from llama_stack_client.lib.inference.event_logger import EventLogger


def test_text_chat_completion(llama_stack_client):
    # non-streaming
    available_models = [
        model.identifier
        for model in llama_stack_client.models.list()
        if model.identifier.startswith("meta-llama")
    ]
    assert len(available_models) > 0
    model_id = available_models[0]
    response = llama_stack_client.inference.chat_completion(
        model_id=model_id,
        messages=[
            {
                "role": "user",
                "content": "Hello, world!",
            }
        ],
        stream=False,
    )
    assert len(response.completion_message.content) > 0

    # streaming
    response = llama_stack_client.inference.chat_completion(
        model_id=model_id,
        messages=[{"role": "user", "content": "Hello, world!"}],
        stream=True,
    )
    logs = [str(log.content) for log in EventLogger().log(response) if log is not None]
    assert len(logs) > 0
    assert "Assistant> " in logs[0]


def test_image_chat_completion(llama_stack_client):
    available_models = [
        model.identifier
        for model in llama_stack_client.models.list()
        if "vision" in model.identifier.lower()
    ]
    if len(available_models) == 0:
        pytest.skip("No vision models available")
    model_id = available_models[0]
    # non-streaming
    message = {
        "role": "user",
        "content": [
            {
                "type": "image",
                "data": {
                    "uri": "https://www.healthypawspetinsurance.com/Images/V3/DogAndPuppyInsurance/Dog_CTA_Desktop_HeroImage.jpg"
                },
            },
            {
                "type": "text",
                "text": "Describe what is in this image.",
            },
        ],
    }
    response = llama_stack_client.inference.chat_completion(
        model_id=model_id,
        messages=[message],
        stream=False,
    )
    assert len(response.completion_message.content) > 0
    assert (
        "dog" in response.completion_message.content.lower()
        or "puppy" in response.completion_message.content.lower()
    )