feat: switch to async completion in LiteLLM OpenAI mixin (#3029)

Eran Cohen 2025-08-03 22:08:56 +03:00, committed by GitHub
parent dbfc15123e
commit e5b542dd8e
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
2 changed files with 4 additions and 5 deletions


@@ -158,9 +158,8 @@ class LiteLLMOpenAIMixin(
         params["model"] = self.get_litellm_model_name(params["model"])
         logger.debug(f"params to litellm (openai compat): {params}")
-        # unfortunately, we need to use synchronous litellm.completion here because litellm
-        # caches various httpx.client objects in a non-eventloop aware manner
-        response = litellm.completion(**params)
+        # see https://docs.litellm.ai/docs/completion/stream#async-completion
+        response = await litellm.acompletion(**params)
         if stream:
             return self._stream_chat_completion(response)
         else:
@@ -170,7 +169,7 @@ class LiteLLMOpenAIMixin(
         self, response: litellm.ModelResponse
     ) -> AsyncIterator[ChatCompletionResponseStreamChunk]:
         async def _stream_generator():
-            for chunk in response:
+            async for chunk in response:
                 yield chunk
         async for chunk in convert_openai_chat_completion_stream(
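The two hunks above make the same point: once the provider awaits `litellm.acompletion(...)` with streaming enabled, the returned object is an async iterator and must be consumed with `async for`, not a plain `for` loop. A minimal self-contained sketch of that pattern (the stub `fake_acompletion` is hypothetical and stands in for the real litellm call, which needs network access):

```python
import asyncio

# Hypothetical stand-in for litellm.acompletion(stream=True): an async
# function that resolves to an async iterator of chunks.
async def fake_acompletion(**params):
    async def stream():
        for chunk in ["Hel", "lo", "!"]:
            yield chunk
    return stream()

async def collect() -> str:
    response = await fake_acompletion(model="some-model", stream=True)
    chunks = []
    # A plain `for chunk in response:` would raise TypeError here,
    # because `response` is an async generator, not a regular iterable.
    async for chunk in response:
        chunks.append(chunk)
    return "".join(chunks)

print(asyncio.run(collect()))  # prints "Hello!"
```

This is also why the old code had to stay synchronous: `litellm.completion` returns a plain iterator, so switching the call to `acompletion` forces the `for` → `async for` change in `_stream_generator`.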


@@ -78,7 +78,7 @@
     },
     {
       "role": "user",
-      "content": "What's the weather like in San Francisco?"
+      "content": "What's the weather like in San Francisco, CA?"
     }
   ],
   "tools": [