This PR makes several core changes to the developer experience surrounding Llama Stack.
Background: PR #92 introduced the notion of "routing" to the Llama Stack. It introduced three object types: (1) models, (2) shields and (3) memory banks. Each of these objects can be associated with a distinct provider, so you can, for example, have model A served by a local provider while models B and C are served remotely.
However, this had a few drawbacks:
- You could not address the provider instances: if you configured "meta-reference" with a given model, you could not assign an identifier to that instance which you could re-use later.
- Because of the above, you could not dynamically register a "routing_key" (e.g., a model) and say "please use this existing provider I have already configured" for it.
- The terms "routing_table" and "routing_key" were exposed directly to the user. In my view, this is far too much overhead for a new user (which almost everyone is): people come to the Stack wanting to do ML and immediately encounter a completely unexpected term.
What this PR does: This PR structures the run config with only a single prominent key:
- providers
Providers are instances of configured provider types. Here's an example showing two instances of the `remote::tgi` provider, each serving a different model.
```yaml
providers:
  inference:
    - provider_id: foo
      provider_type: remote::tgi
      config: { ... }
    - provider_id: bar
      provider_type: remote::tgi
      config: { ... }
```
Secondly, the PR adds dynamic registration of { models | shields | memory_banks } to the API surface. The distribution still acts like a "routing table" (as previously), except that it asks the backing providers for a listing of these objects. For example, it asks a TGI or Ollama inference adapter which models it is serving. Only models that are actually being served can be requested by the user for inference; otherwise, the Stack server will throw an error.
When dynamically registering these objects, you can use the provider IDs shown above. Info about providers can be obtained using the `Api.inspect` set of endpoints (`/providers`, `/routes`, etc.)
The above example shows the correspondence between inference providers and models registry items. Things work similarly for the safety <=> shields and memory <=> memory_banks pairs.
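To make the registration flow concrete, here's a rough sketch of inspecting providers and registering a model against an existing provider instance from a client. The port, the registration path and the payload fields below are illustrative assumptions, not the exact API surface added by this PR:

```python
# Illustrative sketch only: the port, the /models/register path and the payload
# fields are assumed for this example; only /providers is named in this PR.
import requests

BASE_URL = "http://localhost:5000"  # assumed address of a locally running Stack server

# Ask the running distribution which provider instances are configured (Api.inspect).
providers = requests.get(f"{BASE_URL}/providers").json()
print(providers)

# Register a new model against an existing provider instance via its provider_id
# (e.g. the "foo" remote::tgi instance from the config above).
resp = requests.post(
    f"{BASE_URL}/models/register",
    json={"identifier": "my-model", "provider_id": "foo"},
)
resp.raise_for_status()
```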
Registry: This PR also requires providers to implement additional methods for registering and listing objects. For example, each Inference provider is now expected to implement the `ModelsProtocolPrivate` protocol (naming is not great!), which consists of two methods:
- `register_model`
- `list_models`
The goal is to inform the provider that a certain model needs to be supported so the provider can make any relevant backend changes if needed (or throw an error if the model cannot be supported.)
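As a rough sketch of what such a protocol could look like (the async signatures and the `ModelDef` placeholder type are assumptions for illustration; only the protocol name and the two method names come from this PR):

```python
# A minimal sketch, not the actual implementation: the async signatures and the
# ModelDef placeholder type are assumptions made for illustration.
from typing import List, Protocol


class ModelDef:
    """Stand-in for the stack's model-definition type."""

    def __init__(self, identifier: str, provider_id: str):
        self.identifier = identifier
        self.provider_id = provider_id


class ModelsProtocolPrivate(Protocol):
    async def register_model(self, model: ModelDef) -> None:
        """Inform the provider that `model` must be supported; the provider can make
        backend changes here, or raise if the model cannot be supported."""
        ...

    async def list_models(self) -> List[ModelDef]:
        """Return the models this provider instance is actually serving."""
        ...
```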
There are many other cleanups included, some of which are detailed in a follow-up comment.
This is yet another of those large PRs (hopefully we will have fewer and fewer of them as things mature). This one introduces substantial improvements and some simplifications to the stack.
Most important bits:
* Agents reference implementation now has support for session / turn persistence. The default implementation uses SQLite, but there's also support for using Redis.
* We have re-architected the structure of the Stack APIs to allow for more flexible routing. The motivating use cases are:
- routing model A to ollama and model B to a remote provider like Together
- routing shield A to local impl while shield B to a remote provider like Bedrock
- routing a vector memory bank to Weaviate while routing a keyvalue memory bank to Redis
* Support for provider-specific parameters to be passed from clients. A client can pass data using the `x_llamastack_provider_data` parameter, which is type-checked and provided to the Adapter implementations.
* API Keys passed from Client instead of distro configuration
* delete distribution registry
* Rename the "package" word away
* Introduce a "Router" layer for providers
Some providers need to be factorized and considered as thin routing
layers on top of other providers. Consider two examples:
- The inference API should be a routing layer over inference providers,
routed using the "model" key
- The memory banks API is another instance where various memory bank
types will be provided by independent providers (e.g., a vector store
is served by Chroma while a keyvalue memory can be served by Redis or
PGVector)
This commit introduces a generalized routing layer for this purpose.
* update `apis_to_serve`
* llama_toolchain -> llama_stack
* Codemod from llama_toolchain -> llama_stack
- added providers/registry
- cleaned up api/ subdirectories and moved impls away
- restructured api/api.py
- from llama_stack.apis.<api> import foo should work now
- update imports to do llama_stack.apis.<api>
- update many other imports
- added __init__, fixed some registry imports
- updated registry imports
- create_agentic_system -> create_agent
- AgenticSystem -> Agent
* Moved some stuff out of common/; re-generated OpenAPI spec
* llama-toolchain -> llama-stack (hyphens)
* add control plane API
* add redis adapter + sqlite provider
* move core -> distribution
* Some more toolchain -> stack changes
* small naming shenanigans
* Removing custom tool and agent utilities and moving them client side
* Move control plane to distribution server for now
* Remove control plane from API list
* no codeshield dependency randomly plzzzzz
* Add "fire" as a dependency
* add back event loggers
* stack configure fixes
* use brave instead of bing in the example client
* add init file so it gets packaged
* add init files so it gets packaged
* Update MANIFEST
* bug fix
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
* [1/n] migrate inference/chat_completion
* migrate inference/completion
* inference/completion
* inference regenerate openapi spec
* safety api
* migrate agentic system
* migrate apis without implementations
* re-generate openapi spec
* remove hack from openapi generator
* fix inference
* fix inference
* openapi generator rerun
* Simplified Telemetry API and tying it to logger (#57)
* Simplified Telemetry API and tying it to logger
* small update which adds a METRIC type
* move span events one level down into structured log events
---------
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
* fix api to work with openapi generator
* fix agentic calling inference
* together adapter inference
* update inference adapters
---------
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
* add tools to chat completion request
* use templates for generating system prompts
* Moved ToolPromptFormat and jinja templates to llama_models.llama3.api
* <WIP> memory changes
- inlined AgenticSystemInstanceConfig so API feels more ergonomic
- renamed it to AgentConfig, AgentInstance -> Agent
- added a MemoryConfig and `memory` parameter
- added `attachments` to input and `output_attachments` to the response
- some naming changes
* InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool
* flesh out memory banks API
* agentic loop has a RAG implementation
* faiss provider implementation
* memory client works
* re-work tool definitions, fix FastAPI issues, fix tool regressions
* fix agentic_system utils
* basic RAG seems to work
* small bug fixes for inline attachments
* Refactor custom tool execution utilities
* Bug fix, show memory retrieval steps in EventLogger
* No need for api_key for Remote providers
* add special unicode character ↵ to showcase newlines in model prompt templates
* remove api.endpoints imports
* combine datatypes.py and endpoints.py into api.py
* Attachment / add TTL api
* split batch_inference from inference
* minor import fixes
* use a single impl for ChatFormat.decode_assistant_mesage
* use interleaved_text_media_as_str() utility
* Fix api.datatypes imports
* Add blobfile for tiktoken
* Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly
* templates take optional --format={json,function_tag}
* Rag Updates
* Add `api build` subcommand -- WIP
* fix
* build + run image seems to work
* <WIP> adapters
* bunch more work to make adapters work
* api build works for conda now
* ollama remote adapter works
* Several smaller fixes to make adapters work
Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight
* llama distribution -> llama stack + containers (WIP)
* All the new CLI for api + stack work
* Make Fireworks and Together into the Adapter format
* Some quick fixes to the CLI behavior to make it consistent
* Updated README phew
* Update cli_reference.md
* llama_toolchain/distribution -> llama_toolchain/core
* Add termcolor
* update paths
* Add a log just for consistency
* chmod +x scripts
* Fix api dependencies not getting added to configuration
* missing import lol
* Delete utils.py; move to agentic system
* Support downloading of URLs for attachments for code interpreter
* Simplify and generalize `llama api build` yay
* Update `llama stack configure` to be very simple also
* Fix stack start
* Allow building an "adhoc" distribution
* Remote `llama api []` subcommands
* Fixes to llama stack commands and update docs
* Update documentation again and add error messages to llama stack start
* llama stack start -> llama stack run
* Change name of build for less confusion
* Add pyopenapi fork to the repository, update RFC assets
* Remove conflicting annotation
* Added a "--raw" option for model template printing
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
* Add distribution CLI scaffolding
* More progress towards `llama distribution install`
* getting closer to a distro definition, distro install + configure works
* Distribution server now functioning
* read existing configuration, save enums properly
* Remove inference uvicorn server entrypoint and llama inference CLI command
* updated dependency and client model name
* Improved exception handling
* local imports for faster cli
* undo a typo, add a passthrough distribution
* implement full-passthrough in the server
* add safety adapters, configuration handling, server + clients
* cleanup, moving stuff to common, nuke utils
* Add a Path() wrapper at the earliest place
* fixes
* Bring agentic system api to toolchain
Add adapter dependencies and resolve adapters using a topological sort
* refactor to reduce size of `agentic_system`
* move straggler files and fix some important existing bugs
* ApiSurface -> Api
* refactor a method out
* Adapter -> Provider
* Make each inference provider into its own subdirectory
* installation fixes
* Rename Distribution -> DistributionSpec, simplify RemoteProviders
* dict key instead of attr
* update inference config to take model and not model_dir
* Fix passthrough streaming, send headers properly not part of body :facepalm
* update safety to use model sku ids and not model dirs
* Update cli_reference.md
* minor fixes
* add DistributionConfig, fix a bug in model download
* Make install + start scripts do proper configuration automatically
* Update CLI_reference
* Nuke fp8_requirements, fold fbgemm into common requirements
* Update README, add newline between API surface configurations
* Refactor download functionality out of the Command so can be reused
* Add `llama model download` alias for `llama download`
* Show message about checksum file so users can check themselves
* Simpler intro statements
* get ollama working
* Reduce a bunch of dependencies from toolchain
Some improvements to the distribution install script
* Avoid using `conda run` since it buffers everything
* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes
* add validation for configuration input
* resort imports
* make optional subclasses default to yes for configuration
* Remove additional_pip_packages; move deps to providers
* for inline make 8b model the default
* Add scripts to MANIFEST
* allow installing from test.pypi.org
* Fix #2 to help with testing packages
* Must install llama-models at that same version first
* fix PIP_ARGS
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
* fix non-streaming api in inference server
* unit test for inline inference
* Added non-streaming ollama inference impl
* add streaming support for ollama inference with tests
* addressing comments
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>