phoenix-oss/llama-stack-mirror
Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-06 05:59:13 +00:00)
llama_stack/providers/adapters (at commit a2ff74a686)
Latest commit: a27a2cd2af, "Add vLLM inference provider for OpenAI compatible vLLM server" (#178) by Yuan Tang, 2024-10-20 18:43:25 -07:00
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
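For context on what "OpenAI compatible" means for the inference adapter above, the sketch below shows the standard way to query a vLLM server through its OpenAI-compatible endpoint using the `openai` Python client. The server address, API key placeholder, and model name are assumptions for illustration only; they are not values taken from this repository.

```python
# Minimal sketch: talking to an OpenAI-compatible vLLM server directly.
# The adapter added in #178 targets servers exposing this same API surface.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local vLLM server address
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model identifier
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```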
Name        | Last commit                                                                                                  | Date
agents      | [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)  | 2024-09-23 14:22:22 -07:00
inference   | Add vLLM inference provider for OpenAI compatible vLLM server (#178)                                         | 2024-10-20 18:43:25 -07:00
memory      | Remove "routing_table" and "routing_key" concepts for the user (#201)                                        | 2024-10-10 10:24:13 -07:00
safety      | Remove "routing_table" and "routing_key" concepts for the user (#201)                                        | 2024-10-10 10:24:13 -07:00
telemetry   | [API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)   | 2024-09-23 14:22:22 -07:00
__init__.py | API Updates (#73)                                                                                            | 2024-09-17 19:51:35 -07:00