phoenix-oss / llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-29 11:02:21 +00:00)
llama-stack-mirror / llama_stack / providers / remote / inference / vllm

Latest commit: f1fd382d51 by ilya-kolchinsky (2025-04-16 13:54:51 +02:00)
    Remote vLLM client is now initialized directly prior to first use instead of VLLMInferenceAdapter.initialize.
..
__init__.py    Fix precommit check after moving to ruff (#927)                                                              2025-02-02 06:46:45 -08:00
config.py      fix: Add the option to not verify SSL at remote-vllm provider (#1585)                                        2025-03-18 09:33:35 -04:00
vllm.py        Remote vLLM client is now initialized directly prior to first use instead of VLLMInferenceAdapter.initialize.  2025-04-16 13:54:51 +02:00
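The vllm.py commit above describes a lazy-initialization pattern: the remote vLLM client is created on first use rather than eagerly in `VLLMInferenceAdapter.initialize`. A minimal sketch of that pattern is below; apart from the `VLLMInferenceAdapter` class name and the existence of an SSL-verification option (both taken from the commit messages above), all names, attributes, and signatures here are illustrative assumptions, not the actual llama-stack code.

```python
from typing import Optional


class VLLMClient:
    """Hypothetical stand-in for the remote vLLM HTTP client."""

    def __init__(self, base_url: str, verify_ssl: bool = True):
        self.base_url = base_url
        self.verify_ssl = verify_ssl


class VLLMInferenceAdapter:
    """Sketch: defer client construction until first access."""

    def __init__(self, base_url: str, verify_ssl: bool = True):
        self._base_url = base_url
        self._verify_ssl = verify_ssl
        self._client: Optional[VLLMClient] = None  # nothing created yet

    @property
    def client(self) -> VLLMClient:
        # Client is initialized directly prior to first use,
        # then cached and reused on later accesses.
        if self._client is None:
            self._client = VLLMClient(self._base_url, verify_ssl=self._verify_ssl)
        return self._client


adapter = VLLMInferenceAdapter("http://localhost:8000/v1", verify_ssl=False)
assert adapter._client is None   # no client built at construction time
c1 = adapter.client              # first access triggers construction
c2 = adapter.client              # second access reuses the same instance
assert c1 is c2
```

Deferring construction this way means an adapter can be instantiated (e.g. during provider registration) without requiring the vLLM server to be reachable yet; any connection cost or failure surfaces only when inference is actually requested.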