llama-stack-mirror/llama_stack/providers
Latest commit: 787e2034b7 by Dinesh Yeduguru, 2024-11-13 13:04:06 -08:00
model registration in ollama and vllm: check against the available models in the provider (#446)

Tests:
pytest -v -s -m "ollama" llama_stack/providers/tests/inference/test_text_inference.py
pytest -v -s -m vllm_remote llama_stack/providers/tests/inference/test_text_inference.py --env VLLM_URL="http://localhost:9798/v1"
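The commit above validates a model registration request against the models the provider actually serves. A minimal sketch of that idea, with hypothetical names (`register_model`, `available_models`) that are not the actual llama-stack API:

```python
# Sketch of the check from commit #446: before registering a model,
# verify it appears in the provider's list of available models.
# Function and variable names here are illustrative, not llama-stack's.

def register_model(model_id: str, available_models: list[str]) -> str:
    """Accept model_id only if the provider actually serves it."""
    if model_id not in available_models:
        raise ValueError(
            f"Model {model_id!r} is not available on this provider; "
            f"choose one of: {', '.join(sorted(available_models))}"
        )
    return model_id

# Example: a provider (e.g. an Ollama server) reporting two pulled models.
available = ["llama3.2:3b", "llama3.1:8b"]
print(register_model("llama3.2:3b", available))  # accepted
try:
    register_model("gpt-4", available)
except ValueError as err:
    print(err)  # rejected with the list of valid choices
```

Failing fast at registration time, rather than at first inference call, gives the user an immediate, actionable error.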
adapters/datasetio/huggingface migrate dataset to resource (#420) 2024-11-11 17:14:41 -08:00
inline PR-437-Fixed bug to allow system instructions after first turn (#440) 2024-11-13 10:34:04 -08:00
registry add inline:: prefix for localfs provider (#441) 2024-11-13 10:44:39 -05:00
remote model registration in ollama and vllm check against the available models in the provider (#446) 2024-11-13 13:04:06 -08:00
tests model registration in ollama and vllm check against the available models in the provider (#446) 2024-11-13 13:04:06 -08:00
utils model registration in ollama and vllm check against the available models in the provider (#446) 2024-11-13 13:04:06 -08:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00
datatypes.py Kill "remote" providers and fix testing with a remote stack properly (#435) 2024-11-12 21:51:29 -08:00