# What does this PR do?

TL;DR: Changes needed to get 100% passing results on the OpenAI API verification tests when run against Llama Stack with the `together`, `fireworks`, and `openai` providers. The `groq` provider also improves, now at 88% passing.

This cleans up the OpenAI API support for image message types (specifically `image_url` types) and the handling of the `response_format` chat completion parameter (see the example request at the end of this description). Both required a few more Pydantic model definitions in our Inference API, to move from the not-quite-right stubs I had in place to something fleshed out to match the actual OpenAI API specs.

While testing this, I also found and fixed a bug in the litellm implementations of `openai_completion` and `openai_chat_completion`, so the providers based on those should actually work now.

The method `prepare_openai_completion_params` in `llama_stack/providers/utils/inference/openai_compat.py` was improved to recursively clean up input parameters, including handling of lists, dicts, and dumping of Pydantic models to plain dicts (a short illustrative sketch follows the verification test instructions below). These changes were required to reach 100% passing on the OpenAI API verification tests against the `openai` provider.

With the above, the together.ai provider was already passing as well through Llama Stack as it does when hit directly. But since Llama Stack sits in the middle, I took the opportunity to clean up the together.ai provider so that it now also passes our OpenAI API spec tests at 100%. That means together.ai passes our verification tests better when an OpenAI client talks to it through Llama Stack than when it hits together.ai directly.

Another round of work on translating incoming OpenAI chat completion requests to Llama Stack chat completion requests gets the fireworks provider passing at 100% as well. Fireworks.ai's server-side tool calling support with OpenAI chat completions and Llama 4 models isn't great yet, but by pointing OpenAI clients at Llama Stack's API we can clean things up and get everything working as expected for Llama 4 models.

## Test Plan

### OpenAI API Verification Tests

I ran the OpenAI API verification tests as below, and 100% of the tests passed.

Start a Llama Stack server that runs the `openai` provider with the `gpt-4o` and `gpt-4o-mini` models deployed. There's no template that sets this up out of the box, so I added `tests/verifications/openai-api-verification-run.yaml` to do it.

First, ensure you have the necessary API key environment variables set:

```
export TOGETHER_API_KEY="..."
export FIREWORKS_API_KEY="..."
export OPENAI_API_KEY="..."
```

Then, run a Llama Stack server that serves all of these providers:

```
llama stack run \
  --image-type venv \
  tests/verifications/openai-api-verification-run.yaml
```

Finally, generate a new verification report against all of these providers, both with and without the Llama Stack server in the middle:

```
python tests/verifications/generate_report.py \
  --run-tests \
  --provider \
    together \
    fireworks \
    groq \
    openai \
    together-llama-stack \
    fireworks-llama-stack \
    groq-llama-stack \
    openai-llama-stack
```

You'll see that most of the configurations with Llama Stack in the middle now pass at 100%, even though some of them do not pass at 100% when hitting the backend provider's API directly with an OpenAI client.
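For reference, here is a minimal sketch of the recursive parameter cleanup that `prepare_openai_completion_params` now performs, as described above. The `_clean` helper name and the exact body are illustrative assumptions, not the actual implementation:

```python
from pydantic import BaseModel


def _clean(value):
    # Dump Pydantic models to plain dicts, then clean the result recursively.
    if isinstance(value, BaseModel):
        return _clean(value.model_dump(exclude_none=True))
    # Drop None entries from dicts and recurse into the remaining values.
    if isinstance(value, dict):
        return {k: _clean(v) for k, v in value.items() if v is not None}
    # Recurse into list elements (e.g., lists of chat messages).
    if isinstance(value, list):
        return [_clean(v) for v in value]
    return value


def prepare_openai_completion_params(**params):
    # Drop top-level None parameters and clean everything else recursively,
    # yielding kwargs safe to pass to an OpenAI-compatible client.
    return {k: _clean(v) for k, v in params.items() if v is not None}
```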
### OpenAI Completion Integration Tests with vLLM

I also ran the smaller `test_openai_completion.py` test suite (not yet merged with the verification tests) against several of the providers, since I had to adjust the method signature of `openai_chat_completion` a bit and thus had to touch many of these providers to match. Here are the tests I ran, all passing:

```
VLLM_URL="http://localhost:8000/v1" INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" llama stack build --template remote-vllm --image-type venv --run
```

In another terminal:

```
LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "meta-llama/Llama-3.2-3B-Instruct"
```

### OpenAI Completion Integration Tests with ollama

```
INFERENCE_MODEL="llama3.2:3b-instruct-q8_0" llama stack build --template ollama --image-type venv --run
```

In another terminal:

```
LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="llama3.2:3b-instruct-q8_0" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "llama3.2:3b-instruct-q8_0"
```

### OpenAI Completion Integration Tests with together.ai

```
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct-Turbo" llama stack build --template together --image-type venv --run
```

In another terminal:

```
LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct-Turbo" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "meta-llama/Llama-3.2-3B-Instruct-Turbo"
```

### OpenAI Completion Integration Tests with fireworks.ai

```
INFERENCE_MODEL="meta-llama/Llama-3.1-8B-Instruct" llama stack build --template fireworks --image-type venv --run
```

In another terminal:

```
LLAMA_STACK_CONFIG=http://localhost:8321 INFERENCE_MODEL="meta-llama/Llama-3.1-8B-Instruct" python -m pytest -v tests/integration/inference/test_openai_completion.py --text-model "meta-llama/Llama-3.1-8B-Instruct"
```

---------

Signed-off-by: Ben Browning <bbrownin@redhat.com>
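For reference, here's a minimal example of the kind of request these changes make work end to end when an OpenAI client talks to Llama Stack. The server URL path, model, and image URL are illustrative assumptions:

```python
from openai import OpenAI

# Point a stock OpenAI client at a local Llama Stack server
# (the exact base_url path here is an assumption for illustration).
client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="empty")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image as a JSON object with a single 'description' field."},
                # `image_url` content parts are one of the message types this PR cleans up
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }
    ],
    # `response_format` handling is the other parameter this PR fleshes out
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```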
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

import os
from pathlib import Path

import pytest
import yaml
from openai import OpenAI

# --- Helper Function to Load Config ---


def _load_all_verification_configs():
    """Load and aggregate verification configs from the conf/ directory."""
    # Note: Path is relative to *this* file (fixtures.py)
    conf_dir = Path(__file__).parent.parent.parent / "conf"
    if not conf_dir.is_dir():
        # Use pytest.fail if called during test collection, otherwise raise error
        # For simplicity here, we'll raise an error, assuming direct calls
        # are less likely or can handle it.
        raise FileNotFoundError(f"Verification config directory not found at {conf_dir}")

    all_provider_configs = {}
    yaml_files = list(conf_dir.glob("*.yaml"))
    if not yaml_files:
        raise FileNotFoundError(f"No YAML configuration files found in {conf_dir}")

    for config_path in yaml_files:
        provider_name = config_path.stem
        try:
            with open(config_path, "r") as f:
                provider_config = yaml.safe_load(f)
                if provider_config:
                    all_provider_configs[provider_name] = provider_config
                else:
                    # Log warning if possible, or just skip empty files silently
                    print(f"Warning: Config file {config_path} is empty or invalid.")
        except Exception as e:
            raise IOError(f"Error loading config file {config_path}: {e}") from e

    return {"providers": all_provider_configs}


# --- End Helper Function ---

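# A hypothetical example of the per-provider YAML shape this loader expects
# (keys inferred from the fixtures below; values are illustrative only):
#
#   base_url: https://api.together.xyz/v1
#   api_key_var: TOGETHER_API_KEY
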
@pytest.fixture(scope="session")
def verification_config():
    """Pytest fixture to provide the loaded verification config."""
    try:
        return _load_all_verification_configs()
    except (FileNotFoundError, IOError) as e:
        pytest.fail(str(e))  # Fail test collection if config loading fails

@pytest.fixture
def provider(request, verification_config):
    provider = request.config.getoption("--provider")
    base_url = request.config.getoption("--base-url")

    if provider and base_url and verification_config["providers"][provider]["base_url"] != base_url:
        raise ValueError(f"Provider {provider} is not supported for base URL {base_url}")

    if not provider:
        if not base_url:
            raise ValueError("Provider and base URL are not provided")
        # Resolve the provider by matching the given base URL against each
        # provider's configured base_url.
        for candidate, metadata in verification_config["providers"].items():
            if metadata["base_url"] == base_url:
                provider = candidate
                break

    return provider

@pytest.fixture
def base_url(request, provider, verification_config):
    return request.config.getoption("--base-url") or verification_config["providers"][provider]["base_url"]

@pytest.fixture
def api_key(request, provider, verification_config):
    provider_conf = verification_config.get("providers", {}).get(provider, {})
    api_key_env_var = provider_conf.get("api_key_var")

    key_from_option = request.config.getoption("--api-key")
    key_from_env = os.getenv(api_key_env_var) if api_key_env_var else None

    final_key = key_from_option or key_from_env
    return final_key

@pytest.fixture
def model_mapping(provider, providers_model_mapping):
    return providers_model_mapping[provider]

@pytest.fixture
def openai_client(base_url, api_key):
    # Simplify running against a local Llama Stack
    if "localhost" in base_url and not api_key:
        api_key = "empty"
    return OpenAI(
        base_url=base_url,
        api_key=api_key,
    )
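

# A minimal sketch of how these fixtures compose in a test (the model name is
# an assumption; real tests resolve models from the per-provider config):
#
#     def test_chat_smoke(openai_client):
#         response = openai_client.chat.completions.create(
#             model="gpt-4o-mini",
#             messages=[{"role": "user", "content": "Hello"}],
#         )
#         assert response.choices[0].message.content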