llama-stack-mirror/llama_stack

Latest commit: 0f7d487dca by Ben Browning, 2025-06-27 13:31:40 -04:00

    Still retrieve the file_response in openai_vector_store_mixin

    This is needed to get the filename of our file, even though we don't
    need its actual contents here anymore.

    Signed-off-by: Ben Browning <bbrownin@redhat.com>
Name             Last commit                                                                             Date
apis             fix: finish conversion to StrEnum (#2514)                                               2025-06-26 08:01:26 +05:30
cli              fix: stack build (#2485)                                                                2025-06-20 15:15:43 -07:00
distribution     fix: dataset metadata without provider_id (#2527)                                       2025-06-27 08:51:29 -04:00
models           fix: finish conversion to StrEnum (#2514)                                               2025-06-26 08:01:26 +05:30
providers        Still retrieve the file_response in openai_vector_store_mixin                           2025-06-27 13:31:40 -04:00
strong_typing    chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00
templates        feat: Add synthetic-data-kit for file_search doc conversion                             2025-06-27 13:31:38 -04:00
ui               fix(ui): ensure initial data fetch only happens once (#2486)                            2025-06-24 12:22:55 +02:00
__init__.py      export LibraryClient                                                                    2024-12-13 12:08:00 -08:00
env.py           refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)   2025-03-04 14:53:47 -08:00
log.py           chore: remove nested imports (#2515)                                                    2025-06-26 08:01:05 +05:30
schema_utils.py  chore: enable pyupgrade fixes (#1806)                                                   2025-05-01 14:23:50 -07:00