# What does this PR do?

This provides an initial [OpenAI Responses API](https://platform.openai.com/docs/api-reference/responses) implementation. The API is not yet complete, and this is more a proof-of-concept to show how we can store responses in our key-value stores and use them to support Responses API concepts like `previous_response_id`.

## Test Plan

I've added a new `tests/integration/openai_responses/test_openai_responses.py` as part of test-driven development for this new API.

I'm only testing this locally with the remote-vllm provider for now, but it should work with any of our inference providers, since the only API it requires from the inference provider is the `openai_chat_completion` endpoint.

```
VLLM_URL="http://localhost:8000/v1" \
INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct" \
llama stack build --template remote-vllm --image-type venv --run
```

```
LLAMA_STACK_CONFIG="http://localhost:8321" \
python -m pytest -v \
  tests/integration/openai_responses/test_openai_responses.py \
  --text-model "meta-llama/Llama-3.2-3B-Instruct"
```

---------

Signed-off-by: Ben Browning <bbrownin@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
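As a rough illustration of the `previous_response_id` concept this PR introduces, here is a minimal sketch of how a client might chain two turns through the new endpoint. It assumes the standard `openai` Python client pointed at a locally running Llama Stack server; the base URL and model name are taken from the test plan above, and this client-side usage is shown only for orientation, not as part of the PR itself.

```python
from openai import OpenAI

# Assumption: a Llama Stack server is running locally and exposes an
# OpenAI-compatible Responses endpoint under /v1/openai/v1.
client = OpenAI(
    base_url="http://localhost:8321/v1/openai/v1",
    api_key="none",  # placeholder; a local server typically ignores the key
)

# First turn: create a response and remember its id.
first = client.responses.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    input="Name a planet in our solar system.",
)

# Second turn: pass previous_response_id so the server can load the stored
# response from its key-value store and continue the conversation.
second = client.responses.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    input="What is its largest moon?",
    previous_response_id=first.id,
)

print(second.output_text)
```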
```yaml
base_url: http://localhost:8321/v1/openai/v1
api_key_var: FIREWORKS_API_KEY
models:
- fireworks/llama-v3p3-70b-instruct
- fireworks/llama4-scout-instruct-basic
- fireworks/llama4-maverick-instruct-basic
model_display_names:
  fireworks/llama-v3p3-70b-instruct: Llama-3.3-70B-Instruct
  fireworks/llama4-scout-instruct-basic: Llama-4-Scout-Instruct
  fireworks/llama4-maverick-instruct-basic: Llama-4-Maverick-Instruct
test_exclusions:
  fireworks/llama-v3p3-70b-instruct:
  - test_chat_non_streaming_image
  - test_chat_streaming_image
  - test_chat_multi_turn_multiple_images
  - test_response_non_streaming_image
  - test_response_non_streaming_multi_turn_image
```
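The YAML above reads as a per-provider verification config: the OpenAI-compatible base URL, the environment variable holding the API key, the models to exercise, human-readable display names, and per-model test exclusions. As a hedged sketch of how such a file could be consumed, the snippet below loads it and decides whether a given test should be skipped for a model; the file path and helper names are hypothetical and not taken from this PR.

```python
import os

import yaml


def load_provider_conf(path: str) -> dict:
    """Load a provider verification config like the YAML shown above."""
    with open(path) as f:
        return yaml.safe_load(f)


def should_skip(conf: dict, model: str, test_name: str) -> bool:
    """Return True when test_name is listed under test_exclusions for model."""
    exclusions = conf.get("test_exclusions", {}).get(model, [])
    return test_name in exclusions


# Hypothetical usage: the path and test name are illustrative only.
conf = load_provider_conf("fireworks.yaml")
api_key = os.environ.get(conf["api_key_var"], "")
for model in conf["models"]:
    display = conf.get("model_display_names", {}).get(model, model)
    skipped = should_skip(conf, model, "test_chat_streaming_image")
    print(f"{display}: skip image streaming test = {skipped}")
```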