llama-stack-mirror/llama_stack
Dinesh Yeduguru 787e2034b7
model registration in ollama and vllm check against the available models in the provider (#446)
tests:
pytest -v -s -m "ollama" llama_stack/providers/tests/inference/test_text_inference.py

pytest -v -s -m vllm_remote llama_stack/providers/tests/inference/test_text_inference.py --env VLLM_URL="http://localhost:9798/v1"

---------
2024-11-13 13:04:06 -08:00
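The commit above makes model registration validate the requested model against the models the provider actually serves. A minimal sketch of that idea, assuming illustrative names throughout (this is not the real llama_stack API; `register_model` and its signature are hypothetical):

```python
# Hypothetical sketch: reject registration of a model the provider
# does not serve, as described in commit #446. Names are illustrative.

def register_model(model_id: str, available_models: list[str]) -> str:
    """Register model_id only if the provider reports it as available."""
    if model_id not in available_models:
        raise ValueError(
            f"Model '{model_id}' is not served by this provider. "
            f"Available models: {', '.join(sorted(available_models))}"
        )
    return model_id

# Usage: an Ollama-style provider might report a model list like this.
served = ["llama3.1:8b", "llama3.1:70b"]
register_model("llama3.1:8b", served)      # accepted
```

In the real providers, the available-model list would come from the provider's own listing endpoint (e.g. Ollama's local model list or vLLM's served models) rather than a hardcoded list.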
apis          change schema -> dataset_schema for register_dataset api (#443)                          2024-11-13 11:17:46 -05:00
cli           Rename all inline providers with an inline:: prefix (#423)                               2024-11-11 22:19:16 -08:00
distribution  allow seeing warnings with traces optionally                                             2024-11-13 12:27:19 -08:00
providers     model registration in ollama and vllm check against the available models in the provider (#446)  2024-11-13 13:04:06 -08:00
scripts       Add a test for CLI, but not fully done so disabled                                       2024-09-19 13:27:07 -07:00
templates     Update provider types and prefix with inline::                                           2024-11-12 12:54:44 -08:00
__init__.py   API Updates (#73)                                                                        2024-09-17 19:51:35 -07:00