llama-stack/llama_stack/providers/tests
Mengtao Yuan 1086b500f9
Support Tavily as built-in search tool. (#485)
# What does this PR do?

Add Tavily as a built-in search tool, in addition to Brave and Bing.

## Test Plan

It's tested using ollama remote, showing parity with the Brave search
tool.
- Install and run ollama with `ollama run llama3.1:8b-instruct-fp16`
- Build the ollama distribution: `llama stack build --template ollama --image-type conda`
- Run the ollama distribution: `llama stack run /$USER/.llama/distributions/llamastack-ollama/ollama-run.yaml --port 5001`
- Client test command: `python -m agents.test_agents.TestAgents.test_create_agent_turn_with_tavily_search`, with the following environment variables:

```
MASTER_ADDR=0.0.0.0;MASTER_PORT=5001;RANK=0;REMOTE_STACK_HOST=0.0.0.0;REMOTE_STACK_PORT=5001;TAVILY_SEARCH_API_KEY=tvly-<YOUR-KEY>;WORLD_SIZE=1
```

Test passes on the specific case (ollama remote).

Server output: 
```
Listening on ['::', '0.0.0.0']:5001
INFO:     Started server process [7220]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://['::', '0.0.0.0']:5001 (Press CTRL+C to quit)
INFO:     127.0.0.1:65209 - "POST /agents/create HTTP/1.1" 200 OK
INFO:     127.0.0.1:65210 - "POST /agents/session/create HTTP/1.1" 200 OK
INFO:     127.0.0.1:65211 - "POST /agents/turn/create HTTP/1.1" 200 OK
role='user' content='What are the latest developments in quantum computing?' context=None
role='assistant' content='' stop_reason=<StopReason.end_of_turn: 'end_of_turn'> tool_calls=[ToolCall(call_id='fc92ccb8-1039-4ce8-ba5e-8f2b0147661c', tool_name=<BuiltinTool.brave_search: 'brave_search'>, arguments={'query': 'latest developments in quantum computing'})]
role='ipython' call_id='fc92ccb8-1039-4ce8-ba5e-8f2b0147661c' tool_name=<BuiltinTool.brave_search: 'brave_search'> content='{"query": "latest developments in quantum computing", "top_k": [{"title": "IBM Unveils 400 Qubit-Plus Quantum Processor and Next-Generation IBM ...", "url": "https://newsroom.ibm.com/2022-11-09-IBM-Unveils-400-Qubit-Plus-Quantum-Processor-and-Next-Generation-IBM-Quantum-System-Two", "content": "This system is targeted to be online by the end of 2023 and will be a building b...<more>...onnect large-scale ...", "url": "https://news.mit.edu/2023/quantum-interconnects-photon-emission-0105", "content": "Quantum computers hold the promise of performing certain tasks that are intractable even on the world\'s most powerful supercomputers. In the future, scientists anticipate using quantum computing to emulate materials systems, simulate quantum chemistry, and optimize hard tasks, with impacts potentially spanning finance to pharmaceuticals.", "score": 0.71721, "raw_content": null}]}'
Assistant: The latest developments in quantum computing include:

* IBM unveiling its 400 qubit-plus quantum processor and next-generation IBM Quantum System Two, which will be a building block of quantum-centric supercomputing.
* The development of utility-scale quantum computing, which can serve as a scientific tool to explore utility-scale classes of problems in chemistry, physics, and materials beyond brute force classical simulation of quantum mechanics.
* The introduction of advanced hardware across IBM's global fleet of 100+ qubit systems, as well as easy-to-use software that users and computational scientists can now obtain reliable results from quantum systems as they map increasingly larger and more complex problems to quantum circuits.
* Research on quantum repeaters, which use defects in diamond to interconnect quantum systems and could provide the foundation for scalable quantum networking.
* The development of a new source of quantum light, which could be used to improve the efficiency of quantum computers.
* The creation of a new mathematical "blueprint" that is accelerating fusion device development using Dyson maps.
* Research on canceling noise to improve quantum devices, with MIT researchers developing a protocol to extend the life of quantum coherence.
```

Verified against the tool response: the final model response is updated
with the search results.

## Sources

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [x] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

Co-authored-by: Martin Yuan <myuan@meta.com>
2024-11-19 20:59:02 -08:00
| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| `agents` | Support Tavily as built-in search tool. (#485) | 2024-11-19 20:59:02 -08:00 |
| `datasetio` | change schema -> dataset_schema for register_dataset api (#443) | 2024-11-13 11:17:46 -05:00 |
| `eval` | update tests --inference-model to hf id | 2024-11-18 17:36:58 -08:00 |
| `inference` | Update condition in tests to handle llama-3.1 vs llama3.1 (HF names) | 2024-11-19 13:25:36 -08:00 |
| `memory` | unregister for memory banks and remove update API (#458) | 2024-11-14 17:12:11 -08:00 |
| `safety` | minor enhancement for test fixtures | 2024-11-18 22:21:17 -08:00 |
| `scoring` | update tests --inference-model to hf id | 2024-11-18 17:36:58 -08:00 |
| `__init__.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `conftest.py` | Kill "remote" providers and fix testing with a remote stack properly (#435) | 2024-11-12 21:51:29 -08:00 |
| `env.py` | Significantly simpler and malleable test setup (#360) | 2024-11-04 17:36:43 -08:00 |
| `README.md` | update tests --inference-model to hf id | 2024-11-18 17:36:58 -08:00 |
| `resolver.py` | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |

# Testing Llama Stack Providers

The Llama Stack is designed as a collection of Lego blocks -- various APIs -- which are composable and can be used to quickly and reliably build an app. We need a testing setup which is relatively flexible to enable easy combinations of these providers.

We use pytest and all of its dynamism to enable the features needed. Specifically:

- We use `pytest_addoption` to add CLI options allowing you to override providers, models, etc.
- We use `pytest_generate_tests` to dynamically parametrize our tests. This allows us to support a default set of (providers, models, etc.) combinations but retain the flexibility to override them via the CLI if needed.
- We use `pytest_configure` to make sure we dynamically add appropriate marks based on the fixtures we make (see the sketch after this list).
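
As an illustration, a minimal `conftest.py` wiring these three hooks together might look like the sketch below. The option names, defaults, and mark list are assumptions for illustration; the actual hooks live in the `conftest.py` and `fixtures.py` files under this directory.

```
# Minimal sketch of the three pytest hooks described above.
# Option names and defaults are illustrative, not the exact source.
import pytest

def pytest_addoption(parser):
    # CLI overrides for provider fixtures and models
    parser.addoption("--providers", default="", help="api1=fixture1,api2=fixture2")
    parser.addoption("--inference-model", default=None, help="override the inference model")

def pytest_generate_tests(metafunc):
    # Dynamically parametrize any test that asks for an inference model
    if "inference_model" in metafunc.fixturenames:
        model = metafunc.config.getoption("--inference-model") or "meta-llama/Llama-3.1-8B-Instruct"
        metafunc.parametrize("inference_model", [model], indirect=True)

def pytest_configure(config):
    # Register provider/model marks so `-m "fireworks and llama_8b"` filters work
    for mark in ("meta_reference", "together", "fireworks", "ollama", "llama_8b", "llama_3b"):
        config.addinivalue_line("markers", f"{mark}: dynamically added provider/model mark")
```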

## Common options

All tests support a `--providers` option which can be a string of the form `api1=provider_fixture1,api2=provider_fixture2`. So, when testing safety (which needs both the inference and safety APIs) you can use `--providers inference=together,safety=meta_reference` to use these fixtures in concert.
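
For instance, a safety run combining those two fixtures might look like this (the exact test file path is an assumption):

```
pytest -s -v llama_stack/providers/tests/safety/test_safety.py \
  --providers inference=together,safety=meta_reference \
  --env TOGETHER_API_KEY=<...>
```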

Depending on the API, there are custom options enabled. For example, inference tests allow for an `--inference-model` override, etc.

By default, we disable warnings and enable short tracebacks. You can override them using pytest's flags as appropriate.
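
For example, standard pytest flags restore the full output for a single run:

```
# Re-enable warnings and long tracebacks for one invocation
pytest -s -v -W default --tb=long llama_stack/providers/tests/inference/test_text_inference.py
```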

Some providers need special API keys or other configuration options to work. You can check out the individual fixtures (located in `tests/<api>/fixtures.py`) to see which keys they expect. These can be specified using the `--env` CLI option, exported in your shell environment, or put in a `.env` file in the directory from which you run the test. For example, to use the Together fixture you can pass `--env TOGETHER_API_KEY=<...>`.
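
Concretely, these three ways of providing the key are equivalent (illustrative only):

```
# 1. Pass it on the command line
pytest -s -m together llama_stack/providers/tests/inference/test_text_inference.py --env TOGETHER_API_KEY=<...>

# 2. Export it in your shell before running the tests
export TOGETHER_API_KEY=<...>

# 3. Put it in a .env file in the directory you run pytest from
echo "TOGETHER_API_KEY=<...>" >> .env
```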

## Inference

We have the following orthogonal parametrizations (pytest "marks") for inference tests:

- providers: (`meta_reference`, `together`, `fireworks`, `ollama`)
- models: (`llama_8b`, `llama_3b`)

If you want to run a test with the `llama_8b` model on Fireworks, you can use:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks and llama_8b" \
  --env FIREWORKS_API_KEY=<...>
```

You can compose more complex expressions, e.g. to run both `llama_8b` and `llama_3b` on Fireworks, but only `llama_3b` with Ollama:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m "fireworks or (ollama and llama_3b)" \
  --env FIREWORKS_API_KEY=<...>
```

Finally, you can override the model completely by doing:

```
pytest -s -v llama_stack/providers/tests/inference/test_text_inference.py \
  -m fireworks \
  --inference-model "meta-llama/Llama3.1-70B-Instruct" \
  --env FIREWORKS_API_KEY=<...>
```

## Agents

The Agents API composes three other APIs underneath:

- Inference
- Safety
- Memory

Given that each of these has several fixtures, the set of combinations is large. We provide a default set of combinations (see `tests/agents/conftest.py`) with easy-to-use "marks" (see the sketch after the list below):

- `meta_reference` -- uses all the meta_reference fixtures for the dependent APIs
- `together` -- uses Together for inference, and meta_reference for the rest
- `ollama` -- uses Ollama for inference, and meta_reference for the rest
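
A hedged sketch of how such a default combination could be declared in `tests/agents/conftest.py` (structure and names are illustrative, not the exact source):

```
import pytest

# Each entry bundles one fixture per dependent API and carries a mark,
# so `-m together` selects the whole combination at once.
DEFAULT_PROVIDER_COMBINATIONS = [
    pytest.param(
        {"inference": "together", "safety": "meta_reference", "memory": "meta_reference"},
        id="together",
        marks=pytest.mark.together,
    ),
    pytest.param(
        {"inference": "ollama", "safety": "meta_reference", "memory": "meta_reference"},
        id="ollama",
        marks=pytest.mark.ollama,
    ),
]
```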

An example test with Together:

```
pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
  --env TOGETHER_API_KEY=<...>
```

If you want to override the inference model or safety model used, you can use the `--inference-model` or `--safety-shield` CLI options as appropriate.
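
For instance, a run that overrides both might look like this (the shield identifier below is just a placeholder):

```
pytest -s -m together llama_stack/providers/tests/agents/test_agents.py \
  --inference-model "meta-llama/Llama3.1-70B-Instruct" \
  --safety-shield "<shield-identifier>" \
  --env TOGETHER_API_KEY=<...>
```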

If you want to test a remotely hosted stack, you can use `-m remote` as follows:

```
pytest -s -m remote llama_stack/providers/tests/agents/test_agents.py \
  --env REMOTE_STACK_URL=<...>
```