This PR makes several core changes to the developer experience surrounding Llama Stack.
Background: PR #92 introduced the notion of "routing" to the Llama Stack. It introduced three object types: (1) models, (2) shields, and (3) memory banks. Each of these objects can be associated with a distinct provider, so you can, for example, have model A inferenced locally while models B and C are inferenced remotely.
However, this had a few drawbacks:
- You could not address the provider instances -- i.e., if you configured "meta-reference" with a given model, you could not assign an identifier to this instance which you could re-use later.
- As a consequence, you could not register a "routing_key" (e.g., a model) dynamically and say "please use this existing provider I have already configured" for a new model.
- The terms "routing_table" and "routing_key" were exposed directly to the user. In my view, this is way too much overhead for a new user (which almost everyone is): people come to the stack wanting to do ML and encounter a completely unexpected term.
What this PR does: This PR structures the run config with only a single prominent key:
- providers
Providers are instances of configured provider types. Here's an example showing two instances of the remote::tgi provider, each serving a different model.
```yaml
providers:
  inference:
    - provider_id: foo
      provider_type: remote::tgi
      config: { ... }
    - provider_id: bar
      provider_type: remote::tgi
      config: { ... }
```
Secondly, the PR adds dynamic registration of { models | shields | memory_banks } to the API surface. The distribution still acts like a "routing table" (as previously), except that it asks the backing providers for a listing of these objects. For example, it asks a TGI or Ollama inference adapter which models it is serving. Only the models that are actually being served can be requested by the user for inference; otherwise, the Stack server will throw an error.
When dynamically registering these objects, you can use the provider IDs shown above. Info about providers can be obtained using the Api.inspect set of endpoints (/providers, /routes, etc.)
The above example shows the correspondence between inference providers and models registry items. Things work similarly for the safety <=> shields and memory <=> memory_banks pairs.
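To make this concrete, here is a minimal, hedged sketch of what dynamic registration could look like from the client side. It assumes the `llama_stack_client` Python package; the base URL, port, and the exact `register()` keyword arguments are illustrative and may differ across versions.

```python
# Hypothetical sketch: register a new model against the already-configured
# provider instance "foo" from the run config above. Method names/signatures
# are assumptions and may vary between llama_stack_client versions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")  # assumed port

# Inspect what the distribution knows about (the Api.inspect surface).
for provider in client.providers.list():
    print(provider)

# Ask the routing table to serve a new model via the existing "foo" instance.
client.models.register(
    model_id="my-new-model",   # the routing key users pass for inference
    provider_id="foo",         # the provider instance from the run config
)

# Only registered (and actually served) models can be used for inference.
print(client.models.list())
```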
Registry: This PR also requires providers to implement additional methods for registering and listing objects. For example, each Inference provider is now expected to implement the ModelsProtocolPrivate protocol (the naming is not great!), which consists of two methods:
- register_model
- list_models
The goal is to inform the provider that a certain model needs to be supported so the provider can make any relevant backend changes if needed (or throw an error if the model cannot be supported).
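For illustration, here is a rough sketch of what implementing that private protocol might look like in a provider. The exact types (e.g., a ModelDef-like datatype), whether the methods are async, and the helper names below are assumptions, not the actual llama_stack definitions.

```python
# Illustrative only: a provider implementing the two registry methods named
# above. Types and signatures are assumptions; consult the real
# ModelsProtocolPrivate definition in llama_stack for the source of truth.
from dataclasses import dataclass
from typing import Dict, List, Protocol


@dataclass
class ModelDef:  # stand-in for the real registry datatype
    identifier: str
    provider_id: str


class ModelsProtocolPrivate(Protocol):
    async def register_model(self, model: ModelDef) -> None: ...
    async def list_models(self) -> List[ModelDef]: ...


class ExampleTGIAdapter:
    """Hypothetical inference adapter satisfying ModelsProtocolPrivate."""

    def __init__(self) -> None:
        self._models: Dict[str, ModelDef] = {}

    async def register_model(self, model: ModelDef) -> None:
        # Make any backend changes needed to serve this model, or refuse it.
        if not self._backend_serves(model.identifier):
            raise ValueError(f"{model.identifier} is not served by this TGI endpoint")
        self._models[model.identifier] = model

    async def list_models(self) -> List[ModelDef]:
        # Report the models the backing service is actually serving.
        return list(self._models.values())

    def _backend_serves(self, identifier: str) -> bool:
        return True  # placeholder for a real check against the TGI endpoint
```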
There are many other cleanups included, some of which are detailed in a follow-up comment.
Podman works as an alternative to Docker, but it wasn't immediately
obvious going through the quickstart how to enable it aside from
installing the docker alias. Add a note that points users to the
correct env var to use podman.
Signed-off-by: Russell Bryant <rbryant@redhat.com>
This is yet another of those large PRs (hopefully we will have fewer and fewer of them as things mature). This one introduces substantial improvements and some simplifications to the stack.
Most important bits:
* Agents reference implementation now has support for session / turn persistence. The default implementation uses sqlite, but there's also support for using Redis.
* We have re-architected the structure of the Stack APIs to allow for more flexible routing. The motivating use cases are:
- routing model A to ollama and model B to a remote provider like Together
- routing shield A to local impl while shield B to a remote provider like Bedrock
- routing a vector memory bank to Weaviate while routing a keyvalue memory bank to Redis
* Support for provider-specific parameters to be passed from clients. A client can pass data using the `x_llamastack_provider_data` parameter, which can be type-checked and provided to the Adapter implementations (see the sketch below).
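Here is a hedged sketch of what passing provider data over HTTP might look like. The header name, route, port, and the payload key (`together_api_key`) are illustrative assumptions; the PR only specifies that an `x_llamastack_provider_data` parameter exists on the client side.

```python
# Illustrative only: send provider-specific data alongside an inference call.
# Header name, route, and payload keys are assumptions, not the exact API.
import json

import requests

provider_data = {"together_api_key": "sk-..."}  # type-checked by the adapter

resp = requests.post(
    "http://localhost:5000/inference/chat_completion",  # assumed route/port
    headers={"X-LlamaStack-ProviderData": json.dumps(provider_data)},
    json={
        "model": "my-new-model",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json())
```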
* add back wizard for build
* conda build path move
* polish message
* run with name only
* prompt for build
* improve comments
* update msgs
* add new lines
* move build.yaml
* address comments
* validator for providers
* move imports
* Please enter -> enter
* comments, get started guide
* nits
* fix cprint import
* fix imports
* API Keys passed from Client instead of distro configuration
* delete distribution registry
* Rename the "package" word away
* Introduce a "Router" layer for providers
Some providers need to be factorized and considered as thin routing
layers on top of other providers. Consider two examples:
- The inference API should be a routing layer over inference providers,
routed using the "model" key
- The memory banks API is another instance where various memory bank
types will be provided by independent providers (e.g., a vector store
is served by Chroma while a keyvalue memory can be served by Redis or
PGVector)
This commit introduces a generalized routing layer for this purpose.
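Below is a minimal sketch of the routing idea described above (not the actual llama_stack implementation): a thin router holds a routing_key -> provider mapping and forwards calls, exposing the same API surface as the providers it wraps.

```python
# Conceptual sketch of a routing layer over inference providers, keyed on
# "model". Names and signatures are illustrative, not the real implementation.
from typing import Any, Dict, List, Protocol


class InferenceProvider(Protocol):
    async def chat_completion(self, model: str, messages: List[dict]) -> Any: ...


class InferenceRouter:
    """Thin layer that dispatches each request to the provider for its model."""

    def __init__(self, routing_table: Dict[str, InferenceProvider]) -> None:
        # e.g. {"model-A": ollama_impl, "model-B": together_impl}
        self.routing_table = routing_table

    async def chat_completion(self, model: str, messages: List[dict]) -> Any:
        provider = self.routing_table.get(model)
        if provider is None:
            raise ValueError(f"No provider registered for model '{model}'")
        # Same API as the underlying providers; the router adds no behavior.
        return await provider.chat_completion(model=model, messages=messages)
```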
* update `apis_to_serve`
* llama_toolchain -> llama_stack
* Codemod from llama_toolchain -> llama_stack
- added providers/registry
- cleaned up api/ subdirectories and moved impls away
- restructured api/api.py
- from llama_stack.apis.<api> import foo should work now
- update imports to do llama_stack.apis.<api>
- update many other imports
- added __init__, fixed some registry imports
- updated registry imports
- create_agentic_system -> create_agent
- AgenticSystem -> Agent
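For example, after the codemod an import along these lines should work (the specific class name is illustrative):

```python
# Post-codemod import path: llama_stack.apis.<api>  (previously under llama_toolchain)
from llama_stack.apis.inference import Inference
```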
* Moved some stuff out of common/; re-generated OpenAPI spec
* llama-toolchain -> llama-stack (hyphens)
* add control plane API
* add redis adapter + sqlite provider
* move core -> distribution
* Some more toolchain -> stack changes
* small naming shenanigans
* Removing custom tool and agent utilities and moving them client side
* Move control plane to distribution server for now
* Remove control plane from API list
* no codeshield dependency randomly plzzzzz
* Add "fire" as a dependency
* add back event loggers
* stack configure fixes
* use brave instead of bing in the example client
* add init file so it gets packaged
* add init files so it gets packaged
* Update MANIFEST
* bug fix
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Xi Yan <xiyan@meta.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
* Use huggingface_hub inference client for TGI inference
* Update the default value for TGI URL
* Use InferenceClient.text_generation for TGI inference
* Fixes post-review and split TGI adapter into local and Inference Endpoints ones
* Update CLI reference and add typing
* Rename TGI Adapter class
* Use HfApi to get the namespace when not provided in the hf endpoint name
* Remove unnecessary method argument
* Improve TGI adapter initialization condition
* Move helper into impl file + fix merging conflicts
* add tools to chat completion request
* use templates for generating system prompts
* Moved ToolPromptFormat and jinja templates to llama_models.llama3.api
* <WIP> memory changes
- inlined AgenticSystemInstanceConfig so API feels more ergonomic
- renamed it to AgentConfig, AgentInstance -> Agent
- added a MemoryConfig and `memory` parameter
- added `attachments` to input and `output_attachments` to the response
- some naming changes
* InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool
* flesh out memory banks API
* agentic loop has a RAG implementation
* faiss provider implementation
* memory client works
* re-work tool definitions, fix FastAPI issues, fix tool regressions
* fix agentic_system utils
* basic RAG seems to work
* small bug fixes for inline attachments
* Refactor custom tool execution utilities
* Bug fix, show memory retrieval steps in EventLogger
* No need for api_key for Remote providers
* add special unicode character ↵ to showcase newlines in model prompt templates
* remove api.endpoints imports
* combine datatypes.py and endpoints.py into api.py
* Attachment / add TTL api
* split batch_inference from inference
* minor import fixes
* use a single impl for ChatFormat.decode_assistant_mesage
* use interleaved_text_media_as_str() utility
* Fix api.datatypes imports
* Add blobfile for tiktoken
* Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly
* templates take optional --format={json,function_tag}
* Rag Updates
* Add `api build` subcommand -- WIP
* fix
* build + run image seems to work
* <WIP> adapters
* bunch more work to make adapters work
* api build works for conda now
* ollama remote adapter works
* Several smaller fixes to make adapters work
Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight
* llama distribution -> llama stack + containers (WIP)
* All the new CLI for api + stack work
* Make Fireworks and Together into the Adapter format
* Some quick fixes to the CLI behavior to make it consistent
* Updated README phew
* Update cli_reference.md
* llama_toolchain/distribution -> llama_toolchain/core
* Add termcolor
* update paths
* Add a log just for consistency
* chmod +x scripts
* Fix api dependencies not getting added to configuration
* missing import lol
* Delete utils.py; move to agentic system
* Support downloading of URLs for attachments for code interpreter
* Simplify and generalize `llama api build` yay
* Update `llama stack configure` to be very simple also
* Fix stack start
* Allow building an "adhoc" distribution
* Remove `llama api []` subcommands
* Fixes to llama stack commands and update docs
* Update documentation again and add error messages to llama stack start
* llama stack start -> llama stack run
* Change name of build for less confusion
* Add pyopenapi fork to the repository, update RFC assets
* Remove conflicting annotation
* Added a "--raw" option for model template printing
---------
Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>