Mirror of https://github.com/meta-llama/llama-stack.git
- test requires vLLM as the provider; it is currently skipped in GitHub Actions
- to run it locally:

```shell
export VLLM_URL="http://localhost:8000"
pytest tests/integration/inference/test_openai_completion.py::test_openai_chat_completion_extra_body -v --stack-config="inference=remote::vllm"
```

Signed-off-by: Wen Zhou <wenzhou@redhat.com>
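For context, `test_openai_chat_completion_extra_body` exercises passing provider-specific fields to the backend through the OpenAI client's `extra_body` parameter. Below is a minimal sketch, assuming a local vLLM server at the URL above; the model name and the `extra_body` key (a vLLM guided-decoding option) are illustrative assumptions, not taken from the test itself.

```python
from openai import OpenAI

# Assumed local vLLM OpenAI-compatible endpoint; vLLM ignores the API key
# by default, but the client requires some value.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # illustrative model name
    messages=[{"role": "user", "content": "Is the sky blue? Answer yes or no."}],
    # Fields in extra_body are merged into the request JSON as-is, so
    # backend-specific options (here, vLLM's guided_choice) pass through
    # even though the OpenAI client does not define them.
    extra_body={"guided_choice": ["yes", "no"]},
)
print(response.choices[0].message.content)
```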
| Name |
|---|
| inline |
| registry |
| remote |
| utils |
| __init__.py |
| datatypes.py |