Summary:
Implements Memory provider fakes, as discussed in this draft: https://github.com/meta-llama/llama-stack/pull/490#issuecomment-2492877393.
High-level changes:
* The fake provider is selected via the "fake" pytest mark
* The test config sets up a fake fixture for the duration of the test run
* The test resolver checks the fixtures and, upon finding a fake provider, injects an `InlineProviderSpec` for it
* The fake provider is resolved through a path/naming convention (sketched after this list)
* The fake provider implementation is contained within the tests/ directory and implements stubs and method fakes with just enough functionality to simulate the real provider
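To make the path/naming convention concrete, here is a hypothetical sketch of how such resolution could work; `fake_module_for` and `fake_class_names` are illustrative helpers, not the actual resolver code:
```python
# Hypothetical helpers illustrating the path/naming convention;
# the real logic in the test resolver may differ in detail.

def fake_module_for(api: str) -> str:
    # e.g. "memory" -> "llama_stack.providers.tests.memory.fakes"
    return f"llama_stack.providers.tests.{api}.fakes"

def fake_class_names(provider_id: str) -> tuple[str, str]:
    # e.g. "memory_fake" -> ("MemoryFakeImpl", "MemoryFakeConfig")
    base = "".join(part.capitalize() for part in provider_id.split("_"))
    return f"{base}Impl", f"{base}Config"
```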
Instructions for creating a fake:
* Create a "fakes" module inside the provider's test directory
* Inside the module, implement `get_provider_impl`, which returns the fake implementation object
* Name the fake implementation class after the fake provider id (e.g. memory_fake -> MemoryFakeImpl)
* Apply the same rule to the config class (e.g. memory_fake -> MemoryFakeConfig)
* Add a fake fixture (in fixtures.py) and set up method stubs there (see the sketch after this list)
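For concreteness, here is a minimal sketch of what the fakes module could contain under these conventions. The method names loosely mirror the Memory API, but the exact signatures (and the `deps` argument to `get_provider_impl`) are assumptions, not the real interface:
```python
# tests/memory/fakes (sketch) -- names follow the conventions above;
# method signatures are illustrative, not the exact Memory API.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class MemoryFakeConfig:
    # Fakes need little real configuration; a fixed score lets tests
    # assert on deterministic output.
    sample_score: float = 0.5


class MemoryFakeImpl:
    """In-memory stand-in that simulates a real Memory provider."""

    def __init__(self, config: MemoryFakeConfig) -> None:
        self.config = config
        self.banks: Dict[str, Any] = {}

    async def initialize(self) -> None:
        # Nothing to connect to; a real provider would open clients here.
        pass

    async def register_memory_bank(self, bank: Any) -> None:
        # Keep registered banks in a plain dict instead of a vector store.
        self.banks[bank.identifier] = bank

    async def list_memory_banks(self) -> List[Any]:
        return list(self.banks.values())

    async def query_documents(self, bank_id: str, query: str) -> Dict[str, Any]:
        # Return a canned result with a fixed score; no embeddings involved.
        return {"chunks": [], "scores": [self.config.sample_score]}


async def get_provider_impl(config: MemoryFakeConfig, deps: Dict[str, Any]) -> MemoryFakeImpl:
    # Looked up by the test resolver via the naming convention and awaited
    # to obtain the provider instance.
    impl = MemoryFakeImpl(config)
    await impl.initialize()
    return impl
```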
Test Plan:
Run the memory tests:
```
pytest llama_stack/providers/tests/memory/test_memory.py -m "fake" -v -s --tb=short
====================================================================================================== test session starts ======================================================================================================
platform darwin -- Python 3.11.10, pytest-8.3.3, pluggy-1.5.0 -- /opt/homebrew/Caskroom/miniconda/base/envs/llama-stack/bin/python
cachedir: .pytest_cache
rootdir: /Users/vivic/Code/llama-stack
configfile: pyproject.toml
plugins: asyncio-0.24.0, anyio-4.6.2.post1
asyncio: mode=Mode.STRICT, default_loop_scope=None
collected 18 items / 15 deselected / 3 selected
llama_stack/providers/tests/memory/test_memory.py::TestMemory::test_banks_list[fake] PASSED
llama_stack/providers/tests/memory/test_memory.py::TestMemory::test_banks_register[fake] PASSED
llama_stack/providers/tests/memory/test_memory.py::TestMemory::test_query_documents[fake] The scores are: [0.5]
PASSED
========================================================================================= 3 passed, 15 deselected, 10 warnings in 0.46s =========================================================================================
```
# What does this PR do?
This PR kills the notion of "pure passthrough" remote providers. You
cannot specify a single provider as remote; you must specify a whole
distribution (stack) as remote.
This PR also significantly fixes and upgrades the testing infrastructure,
so you can now test against a remotely hosted stack server simply by running
```bash
pytest -s -v -m remote test_agents.py \
--inference-model=Llama3.1-8B-Instruct --safety-shield=Llama-Guard-3-1B \
--env REMOTE_STACK_URL=http://localhost:5001
```
Also fixed `test_agents_persistence.py` (which was broken) and killed
some deprecated testing functions.
## Test Plan
All the tests.
# What does this PR do?
This PR brings back the facility of not forcing registration of
resources onto the user. Forced registration is not just annoying but
sometimes actually infeasible. For example, you may have a Stack which
boots up with private providers for inference for models A and B. The
user has no way of knowing which model is being served by these
providers (and thus cannot register it).
How will this avoid users needing to do registration? In a follow-up
diff, I will update the sample run.yaml files so they explicitly list
the models served by each distribution. So when users do
`llama stack build --template <...>` and run it, their distributions
come up with the set of models they expect.
For self-hosted distributions, it also gives us a place to explicitly
list the models that need to be served to make up the "complete" stack
(including safety, for example).
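As a rough illustration only (the actual run.yaml schema may differ), such an explicit listing might look something like this, using the ollama models from the Test Plan below:
```yaml
# Hypothetical excerpt of an explicit models listing in a sample run.yaml;
# the field names here are assumptions, not the definitive schema.
models:
  - model_id: Llama3.2-3B-Instruct
    provider_id: ollama
  - model_id: Llama-Guard-3-1B
    provider_id: ollama
```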
## Test Plan
Started ollama locally with two lightweight models: Llama3.2-3B-Instruct
and Llama-Guard-3-1B.
Updated all the tests, including agents. Here are the tests I have run so far:
```bash
pytest -s -v -m "fireworks and llama_3b" test_text_inference.py::TestInference \
--env FIREWORKS_API_KEY=...
pytest -s -v -m "ollama and llama_3b" test_text_inference.py::TestInference
pytest -s -v -m ollama test_safety.py
pytest -s -v -m faiss test_memory.py
pytest -s -v -m ollama test_agents.py \
--inference-model=Llama3.2-3B-Instruct --safety-model=Llama-Guard-3-1B
```
Found a few pre-existing bugs here and there, which these test runs helped fix.
* Significantly simpler and more malleable test setup
* convert memory tests
* refactor fixtures and add support for composable fixtures
* Fix memory to use the newer fixture organization
* Get agents tests working
* Safety tests work
* yet another refactor to make this more general; it now also accepts --inference-model and --safety-model options
* get multiple providers working for meta-reference (for inference + safety)
* Add README.md
---------
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>