Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-04 18:13:44 +00:00
# What does this PR do?

OpenAI processes file attachments asynchronously, so files should not be marked "completed" immediately after attachment. Instead:

1. Return the status from OpenAI's API response when attaching files.
2. Override `openai_retrieve_vector_store_file()` to check the actual status from OpenAI when the cached status is "in_progress", and update the cached status.
3. Update the file counts in the vector store metadata when the status changes.

This allows clients to poll the file status and get accurate processing updates, instead of an incorrect "completed" status before OpenAI has finished.

## Test Plan
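The attach/poll flow above can be sketched as follows. This is a minimal, self-contained illustration, not llama-stack's actual implementation: the class and method names (`VectorStoreAdapter`, `attach_file`, `retrieve_file`) and the fake client are hypothetical stand-ins, and the real code talks to OpenAI's vector store files API rather than an in-memory stub.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class VectorStoreFile:
    """Cached record of an attached file (illustrative, not llama-stack's type)."""
    id: str
    status: str  # "in_progress", "completed", or "failed"


class FakeOpenAIClient:
    """Simulates OpenAI finishing file processing after a few status checks."""

    def __init__(self, polls_until_done: int = 2):
        self._remaining = polls_until_done

    async def retrieve_file_status(self, vector_store_id: str, file_id: str) -> str:
        if self._remaining > 0:
            self._remaining -= 1
            return "in_progress"
        return "completed"


class VectorStoreAdapter:
    """Caches file status; re-checks upstream only while it is "in_progress"."""

    def __init__(self, client: FakeOpenAIClient):
        self.client = client
        self.cache: dict[str, VectorStoreFile] = {}

    async def attach_file(self, vector_store_id: str, file_id: str) -> VectorStoreFile:
        # Step 1: store whatever status the upstream API reports on attach,
        # rather than unconditionally marking the file "completed".
        status = await self.client.retrieve_file_status(vector_store_id, file_id)
        record = VectorStoreFile(id=file_id, status=status)
        self.cache[file_id] = record
        return record

    async def retrieve_file(self, vector_store_id: str, file_id: str) -> VectorStoreFile:
        # Step 2: while the cached status is "in_progress", ask upstream again
        # and update the cache, so polling clients see real progress.
        record = self.cache[file_id]
        if record.status == "in_progress":
            record.status = await self.client.retrieve_file_status(
                vector_store_id, file_id
            )
        return record


async def main() -> list[str]:
    adapter = VectorStoreAdapter(FakeOpenAIClient(polls_until_done=2))
    await adapter.attach_file("vs_1", "file_1")
    # A client polling the retrieve endpoint sees the status converge.
    return [(await adapter.retrieve_file("vs_1", "file_1")).status for _ in range(3)]


if __name__ == "__main__":
    print(asyncio.run(main()))  # → ['in_progress', 'completed', 'completed']
```

Once the status leaves "in_progress", no further upstream calls are made, so completed files are served from the cache.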
Related test files:

- recordings/
- __init__.py
- test_openai_vector_stores.py
- test_vector_io.py