llama-stack-mirror/tests/integration/inference
Latest commit: f49cb0b717 by Ashwin Bharambe (2025-11-07 09:54:09 -08:00)
chore: Stack server no longer depends on llama-stack-client (#4094)
This dependency has been bothering folks for a long time (cc @leseb). We only
really needed it for the "library client", which is primarily used by our
tests and is not part of the Stack server. Anyone who needs the library client
can simply install `llama-stack-client` in their environment to make it work.

Updated the notebook references to also install `llama-stack-client` when
setting things up.
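For illustration only (this sketch is not part of the commit): a minimal notebook-style setup after this change, assuming the published `llama-stack-client` Python SDK and a Stack server already running locally; the install line and the URL/port are assumptions, adjust them for your distribution.

```python
# Setup sketch after this change: the server package no longer pulls in the
# client transitively, so install both explicitly (assumed install line):
#
#   pip install llama-stack llama-stack-client
from llama_stack_client import LlamaStackClient

# Connect to a locally running Stack server (the port is an assumption; use
# whatever `llama stack run` reports for your setup).
client = LlamaStackClient(base_url="http://localhost:8321")

# Quick sanity check: list the models the server exposes.
for model in client.models.list():
    print(model)
```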
| Name | Last commit | Date |
| --- | --- | --- |
| recordings | ci: vllm ci job update (#4088) | 2025-11-06 16:59:55 +01:00 |
| __init__.py | fix: remove ruff N999 (#1388) | 2025-03-07 11:14:04 -08:00 |
| dog.png | refactor: tests/unittests -> tests/unit; tests/api -> tests/integration | 2025-03-04 09:57:00 -08:00 |
| test_openai_completion.py | fix: update tests for OpenAI-style models endpoint (#4053) | 2025-11-03 17:30:08 -08:00 |
| test_openai_embeddings.py | fix: update tests for OpenAI-style models endpoint (#4053) | 2025-11-03 17:30:08 -08:00 |
| test_openai_vision_inference.py | feat(internal): add image_url download feature to OpenAIMixin (#3516) | 2025-09-26 17:32:16 -04:00 |
| test_provider_data_routing.py | chore: Stack server no longer depends on llama-stack-client (#4094) | 2025-11-07 09:54:09 -08:00 |
| test_rerank.py | feat: Add rerank API for NVIDIA Inference Provider (#3329) | 2025-10-30 21:42:09 -07:00 |
| test_tools_with_schemas.py | chore: Stack server no longer depends on llama-stack-client (#4094) | 2025-11-07 09:54:09 -08:00 |
| test_vision_inference.py | chore(apis): unpublish deprecated /v1/inference apis (#3297) | 2025-09-27 11:20:06 -07:00 |
| vision_test_1.jpg | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| vision_test_2.jpg | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |
| vision_test_3.jpg | feat: introduce llama4 support (#1877) | 2025-04-05 11:53:35 -07:00 |