Commit graph

68 commits

Author SHA1 Message Date
raghotham
e2a5a2e10d
first version of readthedocs (#278) 2024-10-22 10:15:58 +05:30
Ashwin Bharambe
1944405dca
Update new_api_provider.md 2024-10-21 14:02:51 -07:00
Ashwin Bharambe
606c48309e Small updates to encourage integration testing 2024-10-21 13:52:33 -07:00
Xi Yan
3a7884345a
Update new_api_provider.md 2024-10-21 13:41:56 -07:00
Xi Yan
25b37c9ff7
Update new_api_provider.md 2024-10-21 13:41:46 -07:00
Xi Yan
23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 had made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made a change which was slightly gratuitous: it made chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed the user (within llama-stack) of this to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds vLLM inference provider for OpenAI compatible vLLM server.

* Create .readthedocs.yaml

Trying out readthedocs

* Update event_logger.py (#275)

spelling error

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
Ashwin Bharambe
2089427d60
Make all methods `async def` again; add completion() for meta-reference (#270)
PR #201 had made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made a change which was slightly gratuitous: it made chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed the user (within llama-stack) of this to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)
2024-10-18 20:50:59 -07:00
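The `await` plus `async for` pattern above is easy to get wrong, so here is a minimal runnable sketch of why both keywords are needed. The `InferenceApi` class, its method body, and the token list are hypothetical stand-ins for illustration, not the actual llama-stack implementation.

```
# Sketch: an `async def` method that returns an async generator must be
# awaited first (to obtain the generator), then iterated with `async for`.
import asyncio
from typing import AsyncGenerator


class InferenceApi:
    async def chat_completion(self, params: dict) -> AsyncGenerator[str, None]:
        async def stream() -> AsyncGenerator[str, None]:
            # Stand-in for real streaming inference chunks.
            for token in ["Hello", ", ", "world"]:
                yield token

        return stream()


async def main() -> None:
    api = InferenceApi()
    # `await` resolves the coroutine; `async for` consumes the generator.
    async for chunk in await api.chat_completion({"stream": True}):
        print(chunk, end="")


asyncio.run(main())
```

With a plain `def chat_completion`, the outer `await` would be dropped, which is the convention PR #201 had switched to and this commit reverts.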
Xi Yan
02be26098a getting started 2024-10-16 23:56:21 -07:00
Xi Yan
cf9e5b76b2
Update getting_started.md 2024-10-16 23:52:29 -07:00
Xi Yan
7cc47da8f2
Update getting_started.md 2024-10-16 23:50:31 -07:00
Xi Yan
d787d1e84f
config templates restructure, docs (#262)
* wip

* config templates

* readmes
2024-10-16 23:25:10 -07:00
ATH
319a6b5f83
Update getting_started.md (#260) 2024-10-16 18:05:36 -07:00
Matthieu FRONTON
770647dede
Fix broken rendering in Google Colab (#247) 2024-10-15 15:41:49 -07:00
Yuan Tang
2128e61da2
Fix incorrect completion() signature for Databricks provider (#236) 2024-10-11 08:47:57 -07:00
Xi Yan
7ff5800dea generate openapi 2024-10-10 15:30:34 -07:00
Ashwin Bharambe
6bb57e72a7
Remove "routing_table" and "routing_key" concepts for the user (#201)
This PR makes several core changes to the developer experience surrounding Llama Stack.

Background: PR #92 introduced the notion of "routing" to the Llama Stack. It introduced three object types: (1) models, (2) shields and (3) memory banks. Each of these objects can be associated with a distinct provider, so you can have model A inferenced locally while models B and C are inferenced remotely, for example.

However, this had a few drawbacks:

- You could not address the provider instances -- i.e., if you configured "meta-reference" with a given model, you could not assign an identifier to this instance which you could re-use later.
- The above meant that you could not register a "routing_key" (e.g. a model) dynamically and say "please use this existing provider I have already configured" for a new model.
- The terms "routing_table" and "routing_key" were exposed directly to the user. In my view, this is way too much overhead for a new user (which almost everyone is); people come to the stack wanting to do ML and encounter a completely unexpected term.
What this PR does: it structures the run config with only a single prominent key:

- providers

Providers are instances of configured provider types. Here's an example which shows two instances of the `remote::tgi` provider serving two different models:

```
providers:
  inference:
  - provider_id: foo
    provider_type: remote::tgi
    config: { ... }
  - provider_id: bar
    provider_type: remote::tgi
    config: { ... }
```
Secondly, the PR adds dynamic registration of { models | shields | memory_banks } to the API surface. The distribution still acts like a "routing table" (as previously) except that it asks the backing providers for a listing of these objects. For example, it asks a TGI or Ollama inference adapter what models it is serving. Only the models that are actually being served can be requested by the user for inference; otherwise, the Stack server will throw an error.

When dynamically registering these objects, you can use the provider IDs shown above. Info about providers can be obtained using the Api.inspect set of endpoints (/providers, /routes, etc.)

The above example shows the correspondence between inference providers and models registry items. Things work similarly for the safety <=> shields and memory <=> memory_banks pairs.

Registry: This PR also makes it so that Providers need to implement additional methods for registering and listing objects. For example, each Inference provider is now expected to implement the ModelsProtocolPrivate protocol (naming is not great!) which consists of two methods:

- register_model
- list_models
The goal is to inform the provider that a certain model needs to be supported so the provider can make any relevant backend changes if needed (or throw an error if the model cannot be supported.)

There are many other cleanups included some of which are detailed in a follow-up comment.
2024-10-10 10:24:13 -07:00
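To make the registry contract above concrete, here is a hedged sketch of a provider implementing the two methods. The `Model` record and `TGIInferenceAdapter` class are illustrative stand-ins; the actual `ModelsProtocolPrivate` signatures in llama-stack may differ.

```
# Sketch of the register/list protocol described in the commit message.
from dataclasses import dataclass
from typing import List, Protocol


@dataclass
class Model:
    identifier: str
    provider_id: str


class ModelsProtocolPrivate(Protocol):
    async def register_model(self, model: Model) -> None: ...
    async def list_models(self) -> List[Model]: ...


class TGIInferenceAdapter:
    """Toy inference provider showing the two required registry methods."""

    def __init__(self, served_models: List[str]) -> None:
        self._served = served_models
        self._registered: List[Model] = []

    async def register_model(self, model: Model) -> None:
        # Per the description above: error out if the backend cannot
        # actually serve the requested model.
        if model.identifier not in self._served:
            raise ValueError(f"{model.identifier} is not served by this provider")
        self._registered.append(model)

    async def list_models(self) -> List[Model]:
        return list(self._registered)
```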
Dalton Flanagan
8c3010553f
Fix agents path in generate.py 2024-10-10 11:41:03 -04:00
Xi Yan
6b094b72d3
Update cli_reference.md 2024-10-08 15:32:06 -07:00
Xi Yan
ce70d21f65
Add files via upload 2024-10-08 15:29:19 -07:00
Xi Yan
2366e18873
refactor docs (#209) 2024-10-07 10:21:26 -07:00
Xi Yan
29138a5167
Update getting_started.md 2024-10-05 12:28:02 -07:00
Xi Yan
6d4013ac99
Update getting_started.md 2024-10-05 12:14:59 -07:00
raghotham
00ed9a410b
Update getting_started.md
update discord invite link
2024-10-03 23:28:43 -07:00
Ashwin Bharambe
210b71b0ba
fix prompt guard (#177)
Several other fixes to `configure`. Add support for 1B/3B models in ollama.
2024-10-03 11:07:53 -07:00
Ashwin Bharambe
8d049000e3 Add an introspection "Api.inspect" API 2024-10-02 15:41:14 -07:00
Adrian Cole
01d93be948
Adds markdown-link-check and fixes a broken link (#165)
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-10-02 14:26:20 -07:00
Ashwin Bharambe
fe4aabd690 provider_id => provider_type, adapter_id => adapter_type 2024-10-02 14:05:59 -07:00
Ashwin Bharambe
4a75d922a9 Make Llama Guard 1B the default 2024-10-02 09:48:26 -07:00
Russell Bryant
43744455d7
docs: Note how to use podman (#130)
Podman works as an alternative to Docker, but it wasn't immediately
obvious going through the quickstart how to enable it aside from
installing the docker alias. Add a note that points users to the
correct env var to use podman.

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-09-27 14:00:40 -07:00
Deep Doshi
557ae38289
Update getting_started.ipynb (#117)
Update the hyperlink to `llama-stack-apps` to point correctly to the desired GitHub repo
2024-09-26 14:43:04 -07:00
Xi Yan
2802ac8e9d
add llama-stack.png 2024-09-26 11:17:46 -07:00
Karthi Keyan
995a1a1d00
Reordered pip install and llama model download (#112)
Only after the pip install step can the llama CLI command be used (as the notebook itself specifies), so it makes sense to put the install first
2024-09-26 10:37:15 -07:00
Mark Sze
3c99f08267
minor typo and HuggingFace -> Hugging Face (#113) 2024-09-26 09:48:23 -07:00
machina-source
37be3fb184
Fix links & format (#104)
Fix broken examples link to llama-stack-apps repo
Remove extra space in README.md
2024-09-25 14:18:46 -07:00
Abhishek
851c30597a
chore (doc): fix typo in setup instruction, llama-stack to llama-stack-apps (#103) 2024-09-25 13:27:55 -07:00
Ashwin Bharambe
56aed59eb4
Support for Llama3.2 models and Swift SDK (#98) 2024-09-25 10:29:58 -07:00
Ashwin Bharambe
ec4fc800cc
[API Updates] Model / shield / memory-bank routing + agent persistence + support for private headers (#92)
This is yet another of those large PRs (hopefully we will have fewer and fewer of them as things mature). This one introduces substantial improvements and some simplifications to the stack.

Most important bits:

* Agents reference implementation now has support for session / turn persistence. The default implementation uses SQLite, but there's also support for Redis.

* We have re-architected the structure of the Stack APIs to allow for more flexible routing. The motivating use cases are:
  - routing model A to ollama and model B to a remote provider like Together
  - routing shield A to local impl while shield B to a remote provider like Bedrock
  - routing a vector memory bank to Weaviate while routing a keyvalue memory bank to Redis

* Support for provider-specific parameters to be passed from the clients. A client can pass data using the `x_llamastack_provider_data` parameter, which can be type-checked and provided to the Adapter implementations.
2024-09-23 14:22:22 -07:00
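As a rough illustration of the provider-data mechanism mentioned above, the sketch below ships `x_llamastack_provider_data` content alongside a request. The header name, endpoint path, and payload fields are assumptions made for illustration; the commit only specifies the parameter name.

```
# Sketch: a client passing provider-specific data with a request.
import json

import requests

# Hypothetical provider-specific field; real keys depend on the adapter.
provider_data = {"together_api_key": "sk-..."}

resp = requests.post(
    "http://localhost:5000/inference/chat_completion",  # assumed endpoint
    headers={"X-LlamaStack-Provider-Data": json.dumps(provider_data)},  # assumed header
    json={
        "model": "Llama3.2-3B-Instruct",
        "messages": [{"role": "user", "content": "hi"}],
    },
)
print(resp.status_code)
```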
Xi Yan
06abd7e6c8 update MemoryToolDefinition 2024-09-20 17:51:53 -07:00
Ashwin Bharambe
942cb87a3c remove apis/stack.py 2024-09-20 09:37:08 -07:00
Xi Yan
543222ac39 update inference prompt msg 2024-09-19 12:03:24 -07:00
Xi Yan
880ed37026
Update cli_reference.md 2024-09-18 23:05:24 -07:00
Xi Yan
5c4a2dc0e1
Update getting_started.md 2024-09-18 23:03:14 -07:00
Xi Yan
f3f5873e9e regenerate openapi spec 2024-09-18 19:28:05 -07:00
Xi Yan
5ec64ac68c moving rfc->docs 2024-09-18 16:54:24 -07:00
Xi Yan
2c1ad10710 move openapi from rfcs->docs 2024-09-18 16:09:17 -07:00
Xi Yan
45e20ff431 update getting started 2024-09-18 15:40:48 -07:00
Xi Yan
2f9e952813 update getting started guide 2024-09-18 15:35:54 -07:00
Xi Yan
6b21523c28
CLI - add back build wizard, configure with name instead of build.yaml (#74)
* add back wizard for build

* conda build path move

* polish message

* run with name only

* prompt for build

* improve comments

* update msgs

* add new lines

* move build.yaml

* address comments

* validator for providers

* move imports

* Please enter -> enter

* comments, get started guide

* nits

* fix cprint import

* fix imports
2024-09-18 11:41:56 -07:00
Dalton Flanagan
eea0a83bd1
Update getting_started.md
config is now a positional argument
2024-09-18 00:47:41 -04:00
Ashwin Bharambe
25adc83de8 Fix for safety 2024-09-17 19:56:58 -07:00