Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
Inline _inference_ providers haven't proved to be very useful -- they
are rarely used. And for good reason -- it is almost never a good idea
to bundle a complex (distributed) inference engine into a distributed,
stateful front-end server that serves many other things.
Responsibility should be split properly.
See Discord discussion:
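For illustration, a minimal sketch of what that split can look like in a run configuration: the front-end stack server only records where a separately deployed inference engine lives, instead of hosting one inline. The layout and field names used here (`providers`, `provider_type`, `remote::vllm`, `url`) are assumptions for illustration, not taken verbatim from the templates in this directory.

```python
# Minimal sketch (not the project's actual template code): a run config that
# delegates inference to a remotely deployed engine rather than an inline one.
# All field names below are assumptions for illustration.
import yaml

run_config = {
    "apis": ["inference"],
    "providers": {
        "inference": [
            {
                "provider_id": "vllm",
                "provider_type": "remote::vllm",  # remote, not inline::*
                "config": {"url": "http://inference-host:8000/v1"},
            }
        ]
    },
}

# Dump the config as YAML, the format the stack server would consume.
print(yaml.safe_dump(run_config, sort_keys=False))
```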
Directory contents:

- meta-reference-gpu
- nvidia
- open-benchmark
- postgres-demo
- starter
- watsonx
- `__init__.py`
- `template.py`