mirror of
https://github.com/meta-llama/llama-stack.git
synced 2025-08-15 14:08:00 +00:00
Some fixes to MCP tests, plus a batch of fixes for the Vector providers. This also enables a number of Vector IO tests to run with `LlamaStackLibraryClient`.

## Test Plan

Run the Responses tests with the llama stack library client:

```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
  --text-model openai/gpt-4o \
  --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
  -k "client_with_models"
```

Do the same with `-k openai_client`. The rest should be taken care of by CI.
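For the `openai_client` variant mentioned in the test plan, the invocation is presumably the same command with only the `-k` filter swapped (a sketch based on the command above, not verified against CI):

```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
  --text-model openai/gpt-4o \
  --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
  -k "openai_client"
```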
Directory listing:

- llama3
- llama3_1
- llama3_2
- llama3_3
- llama4
- resources
- __init__.py
- checkpoint.py
- datatypes.py
- hadamard_utils.py
- prompt_format.py
- quantize_impls.py
- sku_list.py
- sku_types.py
- tokenizer_utils.py