llama-stack-mirror/llama_stack/providers/utils/memory
Latest commit: 0f7d487dca by Ben Browning, 2025-06-27 13:31:40 -04:00

Still retrieve the file_response in openai_vector_store_mixin

This is needed to get the filename of our file, even though we don't
need its actual contents here anymore.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
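
To make the commit's rationale concrete, here is a minimal sketch of the pattern it describes: the file's metadata record (file_response) is still fetched because it carries the filename, while the file's contents are no longer read at this point. The `files_api` attribute, the `openai_retrieve_file` call, and the `filename` field are assumptions for illustration only and may not match the mixin's actual code.

```python
from typing import Any


class OpenAIVectorStoreMixinSketch:
    """Illustrative slice of an OpenAI-style vector store mixin (not the real class)."""

    files_api: Any  # assumed Files API client injected by the provider

    async def _filename_for(self, file_id: str) -> str:
        # Fetch only the file's metadata record. Its contents are not needed
        # here anymore, but the metadata still carries the filename, which is
        # why file_response is still retrieved.
        file_response = await self.files_api.openai_retrieve_file(file_id)
        return file_response.filename
```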
File                          Latest commit message                                            Date
__init__.py                   API Updates (#73)                                                2024-09-17 19:51:35 -07:00
file_utils.py                 Update the "InterleavedTextMedia" type (#635)                    2024-12-17 11:18:31 -08:00
openai_vector_store_mixin.py  Still retrieve the file_response in openai_vector_store_mixin   2025-06-27 13:31:40 -04:00
vector_store.py               feat: Add ChunkMetadata to Chunk (#2497)                         2025-06-25 15:55:23 -04:00