llama-stack-mirror/tests/unit/providers
Francisco Javier Arceo 2d149e3d2d
feat: Enhance Vector Stores config with full configurations (#4397)
# What does this PR do?

Enhances the Vector Stores config with a full set of appropriate
configurations:
- Add `FileIngestionParams`, `ChunkRetrievalParams`, and `FileBatchParams` subconfigs (see the sketch below)
- Update RAG memory, the OpenAI vector store mixin, and the vector store utils to use the configuration
- Fix import organization across vector store components
- Add comprehensive vector stores configuration documentation
- Update the docs navigation to include the vector store configuration guide
- Delete `memory/constants.py` and move the constant values directly into the Pydantic models
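
As a rough illustration, here is a minimal sketch of how the new subconfigs could hang off a single vector stores config, with the former constants inlined as Pydantic defaults. The field names, default values, and the `VectorStoresConfig` wrapper below are illustrative assumptions, not the actual definitions from this PR.

```python
from pydantic import BaseModel, Field


class FileIngestionParams(BaseModel):
    """Hypothetical knobs for splitting files into chunks at ingestion time."""

    chunk_size_in_tokens: int = Field(default=512, description="Target size of each chunk.")
    chunk_overlap_in_tokens: int = Field(default=64, description="Token overlap between adjacent chunks.")


class ChunkRetrievalParams(BaseModel):
    """Hypothetical knobs for querying chunks back out of the store."""

    max_chunks: int = Field(default=5, description="Maximum number of chunks returned per query.")
    max_tokens_in_context: int = Field(default=4096, description="Token budget for the assembled context.")


class FileBatchParams(BaseModel):
    """Hypothetical knobs for processing file batches."""

    max_concurrent_files: int = Field(default=5, description="Files processed in parallel per batch.")


class VectorStoresConfig(BaseModel):
    """Illustrative top-level config grouping the subconfigs with their defaults."""

    file_ingestion: FileIngestionParams = Field(default_factory=FileIngestionParams)
    chunk_retrieval: ChunkRetrievalParams = Field(default_factory=ChunkRetrievalParams)
    file_batch: FileBatchParams = Field(default_factory=FileBatchParams)
```

One benefit of folding constants into defaults like these is that a run config can override a single field while every other value keeps its documented default.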

## Test Plan
Tests updated + CI

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-12-17 16:56:46 -05:00
| Name | Last commit | Date |
| --- | --- | --- |
| agents/meta_reference | feat: add support for tool_choice to responses api (#4106) | 2025-12-15 11:22:06 -08:00 |
| batches | feat: Implement FastAPI router system (#4191) | 2025-12-03 12:25:54 +01:00 |
| files | refactor(storage): make { kvstore, sqlstore } as llama stack "internal" APIs (#4181) | 2025-11-18 13:15:16 -08:00 |
| inference | feat!: change bedrock bearer token env variable to match AWS docs & boto3 convention (#4152) | 2025-11-21 09:48:05 -05:00 |
| inline | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| nvidia | feat!: standardize base_url for inference (#4177) | 2025-11-19 08:44:28 -08:00 |
| utils | fix(inference): AttributeError in streaming response cleanup (#4236) | 2025-12-14 07:51:09 -05:00 |
| vector_io | feat: Enhance Vector Stores config with full configurations (#4397) | 2025-12-17 16:56:46 -05:00 |
| test_bedrock.py | fix: rename llama_stack_api dir (#4155) | 2025-11-13 15:04:36 -08:00 |
| test_configs.py | feat!: standardize base_url for inference (#4177) | 2025-11-19 08:44:28 -08:00 |