llama-stack-mirror/llama_stack/providers
Sébastien Han b9961c8735
fix: add token to the openai request
OpenAIMixin expects to use an API key and creates its own AsyncOpenAI
client. So our code now authenticates with the Google service, retrieves
a token, and passes it to the OpenAI client.
Falls back to an empty string if credentials can't be obtained (letting
LiteLLM handle ADC directly).

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-09-10 15:22:10 +02:00
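The commit's approach can be sketched as follows: fetch a short-lived access token from Google Application Default Credentials (ADC) and hand it to the OpenAI-compatible client as the API key, falling back to an empty string when no credentials are available. The function name below is illustrative, not the actual llama-stack implementation; the `google.auth` calls are the standard google-auth library API.

```python
def get_google_access_token() -> str:
    """Return a fresh ADC access token, or "" if credentials can't be
    obtained (letting LiteLLM handle ADC directly, as the commit describes).

    Hypothetical helper name; the real code lives in the remote vertexai
    provider.
    """
    try:
        import google.auth
        import google.auth.transport.requests

        # Resolve Application Default Credentials with the Vertex AI scope.
        credentials, _project = google.auth.default(
            scopes=["https://www.googleapis.com/auth/cloud-platform"]
        )
        # Refresh to populate credentials.token with a short-lived token.
        credentials.refresh(google.auth.transport.requests.Request())
        return credentials.token or ""
    except Exception:
        # No credentials available (or google-auth not installed):
        # fall back to "" so the downstream client can handle ADC itself.
        return ""
```

The resulting token would then be passed where OpenAIMixin builds its client, e.g. `AsyncOpenAI(api_key=get_google_access_token(), base_url=...)`, since the openai-python client requires an `api_key` value even when the real authentication is a bearer token.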
inline feat: Add vector_db_id to chunk metadata (#3304) 2025-09-10 11:19:21 +02:00
registry chore: update the vertexai inference impl to use openai-python for openai-compat functions 2025-09-10 15:22:10 +02:00
remote fix: add token to the openai request 2025-09-10 15:22:10 +02:00
utils fix: use lambda pattern for bedrock config env vars (#3307) 2025-09-05 10:45:11 +02:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py feat: create unregister shield API endpoint in Llama Stack (#2853) 2025-08-05 07:33:46 -07:00