llama-stack-mirror/llama_stack/providers
Akram Ben Aissi 67728bfccf Update vLLM health check to use /health endpoint
- Replace the models.list() call with an HTTP GET to the /health endpoint (sketched below)
- Remove API token validation, since /health is unauthenticated
- Use urllib.parse.urljoin for cleaner URL construction
- Update tests to mock httpx.AsyncClient instead of the OpenAI client (see the test sketch after the sign-off)
- The health check now works regardless of API token configuration
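
A minimal sketch of the described health check, assuming the provider is configured with a base URL such as http://localhost:8000/v1; the name check_vllm_health is illustrative, not the adapter's actual method:

```python
# Minimal sketch, not the adapter's actual implementation.
from urllib.parse import urljoin

import httpx


async def check_vllm_health(base_url: str) -> bool:
    # /health is unauthenticated, so no API token is attached.
    # Because "/health" is an absolute path, urljoin drops any /v1
    # suffix and targets the server root, where vLLM serves the endpoint.
    health_url = urljoin(base_url, "/health")
    async with httpx.AsyncClient() as client:
        response = await client.get(health_url)
        return response.status_code == 200
```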

Signed-off-by: Akram Ben Aissi <akram.benaissi@gmail.com>
2025-09-15 17:57:17 +02:00
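
A hedged sketch of the corresponding unit test, mocking httpx.AsyncClient as the bullets above describe; it assumes the check_vllm_health sketch above is in scope and that the pytest-asyncio plugin is installed:

```python
# Test sketch; names are illustrative, not the repo's actual tests.
from unittest.mock import AsyncMock, MagicMock, patch

import pytest


@pytest.mark.asyncio
async def test_health_check_uses_health_endpoint():
    mock_response = MagicMock(status_code=200)

    mock_client = AsyncMock()
    mock_client.get.return_value = mock_response
    # httpx.AsyncClient is used as an async context manager, so
    # __aenter__ must yield the mocked client itself.
    mock_client.__aenter__.return_value = mock_client

    with patch("httpx.AsyncClient", return_value=mock_client):
        # check_vllm_health is the hypothetical helper sketched above.
        assert await check_vllm_health("http://localhost:8000/v1") is True

    # urljoin resolves the absolute /health path against the server root.
    mock_client.get.assert_called_once_with("http://localhost:8000/health")
```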
inline chore: update documentation, add exception handling for Vector Stores in the RAG Tool, add more tests for migration, and migrate context_retriever off of inference_api for RAG (#3367) 2025-09-11 14:20:11 +02:00
registry Add dynamic authentication token forwarding support for vLLM provider 2025-09-15 13:01:12 +01:00
remote Update vLLM health check to use /health endpoint 2025-09-15 17:57:17 +02:00
utils feat: migrate to FIPS-validated cryptographic algorithms (#3423) 2025-09-12 11:18:19 +02:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: create unregister shield API endpoint in Llama Stack (#2853) 2025-08-05 07:33:46 -07:00
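
The registry row above mentions dynamic authentication token forwarding for the vLLM provider. A hedged client-side sketch of how such a per-request token might be passed: Llama Stack accepts provider data as JSON in the X-LlamaStack-Provider-Data header, but the vllm_api_token key and the endpoint path here are assumptions drawn from the commit summary, not confirmed field names.

```python
# Hedged sketch: forwarding a per-request vLLM token via Llama Stack's
# provider-data header. The vllm_api_token key and the endpoint path
# are assumptions, not confirmed API surface.
import json

import httpx


def chat_with_forwarded_token(stack_url: str, vllm_token: str) -> httpx.Response:
    headers = {
        "X-LlamaStack-Provider-Data": json.dumps({"vllm_api_token": vllm_token}),
    }
    payload = {
        "model_id": "my-vllm-model",  # hypothetical model identifier
        "messages": [{"role": "user", "content": "Hello"}],
    }
    return httpx.post(
        f"{stack_url}/v1/inference/chat-completion", headers=headers, json=payload
    )
```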
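
The utils row references a migration to FIPS-validated cryptographic algorithms (#3423). A hedged illustration of the general pattern such a migration implies, replacing a non-approved digest like MD5 with SHA-256 for deterministic IDs; the function name and derivation are illustrative, not the PR's actual code.

```python
# Illustrative only: deterministic chunk IDs via SHA-256 (FIPS 140
# approved) rather than MD5 (not approved). Names are hypothetical.
import hashlib
import uuid


def generate_chunk_id(document_id: str, chunk_text: str) -> str:
    digest = hashlib.sha256(f"{document_id}:{chunk_text}".encode()).digest()
    # Derive a stable UUID from the first 16 bytes of the digest.
    return str(uuid.UUID(bytes=digest[:16]))
```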