llama-stack-mirror/llama_stack/models/llama/llama3
Ashwin Bharambe · 3d90117891 · 2025-08-12 16:15:53 -07:00
chore(tests): fix responses and vector_io tests (#3119)
Some fixes to the MCP tests, and a bunch of fixes for the Vector providers.

I also enabled a bunch of Vector IO tests to run with `LlamaStackLibraryClient`.
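
For context, the library client runs the full stack in-process rather than over HTTP, so the same integration tests can target either transport. A minimal sketch of that usage, assuming the `LlamaStackAsLibraryClient` entry point (its import path has moved between llama-stack releases, so treat this as illustrative rather than the test suite's actual fixture code):

```python
# Minimal sketch: driving the stack in-process via the library client
# instead of a running HTTP server.
# ASSUMPTION: the class is exposed as LlamaStackAsLibraryClient; the import
# path has shifted across releases, so adjust for your installed version.
from llama_stack.core.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("starter")  # template name, as in --stack-config
client.initialize()  # constructs providers in-process; no server round-trips

# The library client mirrors the HTTP client's surface, so tests
# parametrized over both client types can exercise Vector IO through
# the same calls.
```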

## Test Plan

Run the Responses tests with the Llama Stack library client:
```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
  --text-model openai/gpt-4o \
  --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
  -k "client_with_models"
```

Do the same with `-k openai_client`.
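
For reference, that is the same invocation with only the `-k` filter swapped:

```
pytest -s -v tests/integration/non_ci/responses/ --stack-config=server:starter \
  --text-model openai/gpt-4o \
  --embedding-model=sentence-transformers/all-MiniLM-L6-v2 \
  -k openai_client
```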

The rest should be taken care of by CI.

| Name | Last commit | Date |
|------|-------------|------|
| `multimodal` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `prompt_templates` | chore: more mypy fixes (#2029) | 2025-05-06 09:52:31 -07:00 |
| `quantization` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `__init__.py` | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| `args.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `chat_format.py` | chore(tests): fix responses and vector_io tests (#3119) | 2025-08-12 16:15:53 -07:00 |
| `dog.jpg` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `generation.py` | chore: make cprint write to stderr (#2250) | 2025-05-24 23:39:57 -07:00 |
| `interface.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `model.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| `pasta.jpeg` | chore: move all Llama Stack types from llama-models to llama-stack (#1098) | 2025-02-14 09:10:59 -08:00 |
| `template_data.py` | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| `tokenizer.model` | chore: remove dependency on llama_models completely (#1344) | 2025-03-01 12:48:08 -08:00 |
| `tokenizer.py` | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| `tool_utils.py` | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |