llama-stack-mirror/llama_stack/providers/remote
Dinesh Yeduguru 787e2034b7
model registration in ollama and vllm check against the available models in the provider (#446)
tests:

pytest -v -s -m "ollama" llama_stack/providers/tests/inference/test_text_inference.py

pytest -v -s -m vllm_remote llama_stack/providers/tests/inference/test_text_inference.py --env VLLM_URL="http://localhost:9798/v1"
2024-11-13 13:04:06 -08:00
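The commit above makes model registration validate the requested model against the models the remote provider actually serves. A minimal sketch of that check (illustrative only, not the actual llama-stack implementation; the function name and shapes are hypothetical):

```python
# Illustrative sketch of the validation described in commit #446:
# before registering a model with a remote provider (ollama or vllm),
# confirm the provider actually serves that model.

def validate_model_is_available(requested: str, available: list[str]) -> str:
    """Return the model name if the provider serves it, else raise."""
    if requested not in available:
        raise ValueError(
            f"Model '{requested}' is not served by the provider. "
            f"Available models: {', '.join(sorted(available))}"
        )
    return requested

# In practice the available list would be fetched from the provider:
# Ollama exposes its local models via its /api/tags endpoint, and a
# vLLM server lists served models at its OpenAI-compatible /v1/models.
```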
agents       impls -> inline, adapters -> remote (#381)                                                       2024-11-06 14:54:05 -08:00
inference    model registration in ollama and vllm check against the available models in the provider (#446)  2024-11-13 13:04:06 -08:00
memory       migrate memory banks to Resource and new registration (#411)                                     2024-11-11 17:10:44 -08:00
safety       Remove the "ShieldType" concept (#430)                                                           2024-11-12 12:37:24 -08:00
telemetry    impls -> inline, adapters -> remote (#381)                                                       2024-11-06 14:54:05 -08:00
__init__.py  impls -> inline, adapters -> remote (#381)                                                       2024-11-06 14:54:05 -08:00