llama-stack-mirror/llama_stack/providers/remote
Varsha 3c2aee610d
refactor: Remove double filtering based on score threshold (#3019)
# What does this PR do?
Remove score_threshold based check from `OpenAIVectorStoreMixin`

Closes: https://github.com/meta-llama/llama-stack/issues/3018
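
The redundancy being removed can be sketched as follows. This is a minimal illustration, not the actual Llama Stack code: the `Chunk` type, function names, and `score_threshold` plumbing here are assumptions made for the example. The point is that when the underlying vector store provider already drops results below the threshold, a second threshold pass in the mixin is a no-op.

```python
# Illustrative sketch only -- names (Chunk, provider_search, the mixin
# helpers) are hypothetical, not the real OpenAIVectorStoreMixin API.
from dataclasses import dataclass


@dataclass
class Chunk:
    content: str
    score: float


def provider_search(chunks: list[Chunk], score_threshold: float) -> list[Chunk]:
    # The backend provider already filters out low-scoring results.
    return [c for c in chunks if c.score >= score_threshold]


def mixin_search_before(chunks: list[Chunk], score_threshold: float) -> list[Chunk]:
    # Before this PR: the mixin re-applied the same threshold,
    # filtering a list that was already filtered.
    hits = provider_search(chunks, score_threshold)
    return [c for c in hits if c.score >= score_threshold]


def mixin_search_after(chunks: list[Chunk], score_threshold: float) -> list[Chunk]:
    # After this PR: trust the provider's filtering; no second pass.
    return provider_search(chunks, score_threshold)
```

Because the second filter can never remove anything the first one kept, both versions return identical results for any threshold, which is why the check is safe to delete.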


## Test Plan
2025-08-02 15:57:03 -07:00
| Name | Last commit | Date |
|---|---|---|
| agents | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| datasetio | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| eval | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| inference | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| post_training | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| safety | refactor: remove Conda support from Llama Stack (#2969) | 2025-08-02 15:52:59 -07:00 |
| tool_runtime | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| vector_io | refactor: Remove double filtering based on score threshold (#3019) | 2025-08-02 15:57:03 -07:00 |
| __init__.py | impls -> inline, adapters -> remote (#381) | 2024-11-06 14:54:05 -08:00 |