Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 09:53:45 +00:00)
Inline _inference_ providers haven't proven very useful -- they are rarely used. And for good reason: it is almost never a good idea to bundle a complex (distributed) inference engine into a distributed, stateful front-end server that is already serving many other things. Responsibility should be split properly.

See the Discord discussion:
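In practice, splitting responsibility means the stack server talks to an inference engine that is deployed and operated on its own, via a remote provider, rather than running the engine in-process. Below is a minimal `run.yaml` sketch of that setup, assuming the `remote::vllm` provider and a vLLM server reachable at the given URL; the provider id and URL are illustrative placeholders, not taken from this change.

```yaml
# Illustrative sketch: inference is delegated to an external vLLM server
# instead of an inline (in-process) engine.
providers:
  inference:
    - provider_id: vllm            # placeholder id; any name works
      provider_type: remote::vllm  # remote provider, not inline::*
      config:
        url: http://localhost:8000/v1  # the separately deployed vLLM server
```

The same shape applies to other remote providers (e.g. `remote::ollama`); the point is that the engine's lifecycle, scaling, and failure domain stay outside the stack server.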
Directory listing:

- advanced_apis
- building_applications
- concepts
- contributing
- deploying
- distributions
- getting_started
- providers
- references
- conf.py
- index.md