llama-stack-mirror/llama_stack/providers/utils
Latest commit 787e2034b7 by Dinesh Yeduguru:
model registration in ollama and vllm check against the available models in the provider (#446)
tests:
pytest -v -s -m "ollama" llama_stack/providers/tests/inference/test_text_inference.py

pytest -v -s -m vllm_remote llama_stack/providers/tests/inference/test_text_inference.py --env VLLM_URL="http://localhost:9798/v1"

---------
2024-11-13 13:04:06 -08:00
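
The commit above makes model registration in the ollama and vllm providers validate the requested model against what the provider actually serves. A minimal sketch of that check (the helper name and error message are hypothetical, not taken from the repository; real providers would fetch the available list from e.g. Ollama's /api/tags or vLLM's /v1/models endpoint):

```python
def check_model_availability(requested_model: str, available_models: list[str]) -> None:
    """Raise if the requested model is not served by the provider.

    Hypothetical helper illustrating the registration check introduced
    in #446; `available_models` would come from the provider's own
    model-listing API in a real implementation.
    """
    if requested_model not in available_models:
        raise ValueError(
            f"Model '{requested_model}' is not available. "
            f"Available models: {', '.join(sorted(available_models))}"
        )

# Registration succeeds only for a model the provider actually serves.
check_model_availability("llama3.1:8b", ["llama3.1:8b", "mistral:7b"])
```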
bedrock — add bedrock distribution code (#358) — 2024-11-06 14:39:11 -08:00
datasetio — [Evals API][11/n] huggingface dataset provider + mmlu scoring fn (#392) — 2024-11-11 14:49:50 -05:00
inference — model registration in ollama and vllm check against the available models in the provider (#446) — 2024-11-13 13:04:06 -08:00
kvstore — fix postgres config validation (#380) — 2024-11-05 15:09:04 -08:00
memory — migrate memory banks to Resource and new registration (#411) — 2024-11-11 17:10:44 -08:00
scoring — fix tests after registration migration & rename meta-reference -> basic / llm_as_judge provider (#424) — 2024-11-12 10:35:44 -05:00
telemetry — telemetry WARNING->WARN fix — 2024-10-21 18:52:48 -07:00
__init__.py — API Updates (#73) — 2024-09-17 19:51:35 -07:00