Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-04 18:13:44 +00:00)
Some fixes to MCP tests, and a number of fixes for Vector providers. Also enabled a set of Vector IO tests to run with `LlamaStackLibraryClient`.

## Test Plan

Run the Responses tests with the llama stack library client:

```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
  --text-model openai/gpt-4o \
  --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
  -k "client_with_models"
```

Do the same with `-k openai_client`. The rest should be taken care of by CI.
Directories:

- multimodal
- prompt_templates
- quantization

Files:

- __init__.py
- args.py
- chat_format.py
- dog.jpg
- generation.py
- interface.py
- model.py
- pasta.jpeg
- template_data.py
- tokenizer.model
- tokenizer.py
- tool_utils.py