llama-stack-mirror/tests/integration/vector_io
Eric Huang 2367a4ff80 v0
# What does this PR do?
OpenAI processes file attachments asynchronously. Don't mark files as
"completed" immediately after attachment. Instead:

1. Return the status from OpenAI's API response when attaching files
2. Override openai_retrieve_vector_store_file() to check the actual status from
   OpenAI when the cached status is "in_progress" and update the cached status
   (see the sketch after this list)
3. Update file counts in vector store metadata when status changes
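
Below is a minimal sketch of how steps 1–3 might fit together. Only
`openai_retrieve_vector_store_file()` is named in this description; the class,
the cached-record shape, the helper methods (`_load_cached_file`,
`_save_cached_file`, `_update_file_counts`), and the `AsyncOpenAI` call are
assumptions for illustration, not the actual implementation.

```python
from dataclasses import dataclass

from openai import AsyncOpenAI


@dataclass
class CachedVectorStoreFile:
    """Hypothetical record persisted when a file is attached (step 1)."""

    id: str
    vector_store_id: str
    status: str  # "in_progress" | "completed" | "failed" | "cancelled"


class OpenAIVectorStoreFilesMixin:
    """Sketch only: apart from the overridden method name, these names are assumptions."""

    client: AsyncOpenAI

    async def openai_retrieve_vector_store_file(
        self, vector_store_id: str, file_id: str
    ) -> CachedVectorStoreFile:
        cached = await self._load_cached_file(vector_store_id, file_id)

        if cached.status == "in_progress":
            # Step 2: ask OpenAI for the real processing status instead of
            # trusting the value cached at attach time.
            remote = await self.client.vector_stores.files.retrieve(
                file_id, vector_store_id=vector_store_id
            )
            if remote.status != cached.status:
                cached.status = remote.status
                await self._save_cached_file(cached)
                # Step 3: keep the vector store's file_counts metadata in
                # sync with the status transition.
                await self._update_file_counts(
                    vector_store_id, old="in_progress", new=remote.status
                )

        return cached

    # Placeholder persistence helpers; a real provider would back these
    # with its own storage.
    async def _load_cached_file(self, vector_store_id: str, file_id: str) -> CachedVectorStoreFile:
        ...

    async def _save_cached_file(self, info: CachedVectorStoreFile) -> None:
        ...

    async def _update_file_counts(self, vector_store_id: str, old: str, new: str) -> None:
        ...
```

In this sketch the refresh is lazy: it happens when a client reads the file
rather than via a background task, so a single retrieval call is enough to
reconcile the cached status with OpenAI.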

This allows clients to poll the file status and get accurate processing updates
instead of an incorrect "completed" status reported before OpenAI has finished
processing.
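
For illustration, a client-side polling loop might look like the following.
The `base_url`, API key, file id, and the use of a recent openai Python SDK
(where vector stores are exposed at `client.vector_stores`) are assumptions,
not something this change prescribes.

```python
import time

from openai import OpenAI

# Assumed: an OpenAI-compatible endpoint (e.g. a Llama Stack server) and
# placeholder credentials.
client = OpenAI(base_url="http://localhost:8321/v1", api_key="not-needed")

vector_store = client.vector_stores.create(name="docs")
vs_file = client.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id="file-abc123",  # hypothetical, previously uploaded file
)

# Poll until the server reports a terminal status instead of assuming "completed".
while vs_file.status == "in_progress":
    time.sleep(1)
    vs_file = client.vector_stores.files.retrieve(
        vs_file.id, vector_store_id=vector_store.id
    )

print(vs_file.status)  # "completed", "failed", or "cancelled"
```
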
2025-11-03 21:25:18 -08:00
| Name | Last commit | Date |
|------|-------------|------|
| recordings | chore!: BREAKING CHANGE removing VectorDB APIs (#3774) | 2025-10-11 14:07:08 -07:00 |
| __init__.py | fix: remove ruff N999 (#1388) | 2025-03-07 11:14:04 -08:00 |
| test_openai_vector_stores.py | v0 | 2025-11-03 21:25:18 -08:00 |
| test_vector_io.py | fix!: remove chunk_id property from Chunk class (#3954) | 2025-10-29 18:59:59 -07:00 |