llama-stack/llama_stack
Sébastien Han 403292fcf6
test: replace memory with vector_io fixture (#984)
# What does this PR do?

Replaced references to `memory` with `vector_io` in
`DEFAULT_PROVIDER_COMBINATIONS` and updated the corresponding fixture
imports so vector I/O is configured correctly during tests. This
aligns with the new testing structure.

Follow-up to https://github.com/meta-llama/llama-stack/pull/830, which
removed the memory fixture.
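As an illustrative sketch of the rename: the provider-combination table keys each API to a provider fixture, so the old `memory` key becomes `vector_io`. The names below (`faiss`, `fixture_ids`, the dict layout) are hypothetical stand-ins, not the actual llama-stack code.

```python
# Hypothetical provider-combination table; the "memory" key is replaced
# by "vector_io". Provider names here are illustrative only.
DEFAULT_PROVIDER_COMBINATIONS = [
    {
        "inference": "meta_reference",
        "safety": "llama_guard",
        "vector_io": "faiss",  # previously: "memory": "faiss"
        "agents": "meta_reference",
    },
]


def fixture_ids(combinations):
    """Build readable test IDs from each combination's provider names."""
    return ["-".join(c[k] for k in sorted(c)) for c in combinations]


print(fixture_ids(DEFAULT_PROVIDER_COMBINATIONS))
# → ['meta_reference-meta_reference-llama_guard-faiss']
```

In a pytest setup these entries would typically be wrapped in `pytest.param(...)` and fed to a parametrized fixture, so renaming the key also requires importing the matching `vector_io` fixture module.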

Signed-off-by: Sébastien Han <seb@redhat.com>



## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

2025-02-06 10:12:59 -08:00
| Path | Last commit | Date |
|------|-------------|------|
| apis | apis sys_prompt support in Agent (#938) | 2025-02-05 21:11:32 -08:00 |
| cli | chore: remove unused argument (#987) | 2025-02-06 10:05:35 -08:00 |
| distribution | if client.initialize fails, the example should exit (#954) | 2025-02-04 13:54:21 -08:00 |
| providers | test: replace memory with vector_io fixture (#984) | 2025-02-06 10:12:59 -08:00 |
| scripts | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| templates | Fix precommit check after moving to ruff (#927) | 2025-02-02 06:46:45 -08:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |