llama-stack-mirror/llama_stack/distribution/routers
Hardik Shah 822307e6d5
fix: Do not throw when listing vector stores (#2460)
When listing `vector_stores`, if one cannot be retrieved, log an
error and return all the ones that are valid.
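The behavior described above (skip and log failures instead of aborting the whole listing) can be sketched as follows. This is a minimal illustration only; the names `list_vector_stores`, `store_ids`, and `retrieve_fn` are hypothetical and do not match the actual router code.

```python
import logging

logger = logging.getLogger(__name__)


def list_vector_stores(store_ids, retrieve_fn):
    """Return every store that can be retrieved, skipping failures.

    Hypothetical sketch: `store_ids` is an iterable of IDs and
    `retrieve_fn` fetches a single store, raising on failure.
    """
    stores = []
    for store_id in store_ids:
        try:
            stores.append(retrieve_fn(store_id))
        except Exception as exc:
            # Log and continue rather than failing the entire listing.
            logger.error("Could not retrieve vector store %s: %s", store_id, exc)
    return stores
```

With this pattern, one broken or stale store no longer makes the whole `list` call throw; callers still see every store that resolved successfully.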

### Test Plan 
```
pytest -sv --stack-config=http://localhost:8321 tests/integration/vector_io/test_openai_vector_stores.py  --embedding-model all-MiniLM-L6-v2
```
Also tested with `--stack-config fireworks`.
2025-06-17 11:19:43 -07:00
| File | Last commit | Date |
|---|---|---|
| __init__.py | feat: fine grained access control policy (#2264) | 2025-06-03 14:51:12 -07:00 |
| datasets.py | chore: split routers into individual files (datasets) (#2249) | 2025-05-24 22:11:43 -07:00 |
| eval_scoring.py | chore: split routers into individual files (inference, tool, vector_io, eval_scoring) (#2258) | 2025-05-24 22:59:07 -07:00 |
| inference.py | feat: Add suffix to openai_completions (#2449) | 2025-06-13 16:06:06 -07:00 |
| safety.py | chore: split routers into individual files (safety) | 2025-05-24 22:00:32 -07:00 |
| tool_runtime.py | fix(tools): do not index tools, only index toolgroups (#2261) | 2025-05-25 13:27:52 -07:00 |
| vector_io.py | fix: Do not throw when listing vector stores (#2460) | 2025-06-17 11:19:43 -07:00 |