Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-27 14:38:49 +00:00)
Latest commit:

- Add a setup-vllm GitHub action to start a vLLM container
- Extend the integration test matrix to support both the ollama and vllm providers
- Make test setup conditional on the provider type
- Add provider-specific environment variables and configuration
- Set up the vLLM tests to run weekly or on manual trigger (only ollama runs on PRs)

TODO: investigate the failing tests for the vllm provider (safety and post_training).
A proper fix is still needed for #2713 (a temporary fix is in the first commit of this PR).

Closes: #1648

Signed-off-by: Derek Higgins <derekh@redhat.com>
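A minimal sketch of what such a setup-vllm composite action could look like; the action path, image tag, model, and port below are illustrative assumptions, not values taken from the PR:

```yaml
# .github/actions/setup-vllm/action.yml -- hypothetical path and contents,
# sketched from the commit message rather than copied from the PR.
name: Setup vLLM
description: Start a vLLM container serving an OpenAI-compatible endpoint
runs:
  using: composite
  steps:
    - name: Start vLLM container
      shell: bash
      run: |
        # Image tag, model, and port are illustrative placeholders.
        docker run -d --name vllm -p 8000:8000 \
          vllm/vllm-openai:latest \
          --model Qwen/Qwen2.5-0.5B-Instruct
    - name: Wait for the server to become ready
      shell: bash
      run: |
        # Poll vLLM's health endpoint until the container answers.
        timeout 300 bash -c 'until curl -fsS http://localhost:8000/health; do sleep 5; done'
```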
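And a hedged excerpt of how the provider matrix and triggers might be wired so that vllm only runs on the weekly schedule or a manual dispatch; the job name, environment variables, cron expression, and test path are assumptions for illustration:

```yaml
# Excerpt of an integration-test workflow -- a sketch of the pattern described
# above, not the repository's actual workflow file.
on:
  pull_request:                 # PRs exercise only ollama
  schedule:
    - cron: "0 0 * * 0"         # weekly run also covers vllm (day/time assumed)
  workflow_dispatch:            # manual trigger for the full matrix

jobs:
  integration:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Only ollama on pull requests; both providers otherwise.
        provider: ${{ github.event_name == 'pull_request' && fromJSON('["ollama"]') || fromJSON('["ollama", "vllm"]') }}
    steps:
      - uses: actions/checkout@v4
      - name: Setup ollama
        if: matrix.provider == 'ollama'
        uses: ./.github/actions/setup-ollama
      - name: Setup vllm
        if: matrix.provider == 'vllm'
        uses: ./.github/actions/setup-vllm
      - name: Run integration tests
        env:
          # Provider-specific variables; names and values are assumptions.
          OLLAMA_URL: http://localhost:11434
          VLLM_URL: http://localhost:8000/v1
        run: pytest -v tests/integration/inference   # path inferred from the listing below
```

Branching the matrix on `github.event_name` keeps PR feedback fast while the weekly schedule and `workflow_dispatch` still exercise the heavier vLLM path.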
Directory listing:

- __init__.py
- dog.png
- test_batch_inference.py
- test_embedding.py
- test_openai_completion.py
- test_openai_embeddings.py
- test_text_inference.py
- test_vision_inference.py
- vision_test_1.jpg
- vision_test_2.jpg
- vision_test_3.jpg