Commit graph

397 commits

Author SHA1 Message Date
Dalton Flanagan
8eceebec98
Update iOS inference instructions for new quantization 2024-10-24 14:47:27 -04:00
Ashwin Bharambe
8aa8847b4a Bump version to 0.0.44 2024-10-24 08:41:39 -07:00
Ashwin Bharambe
7afe51c84d
New quantized models (#301) 2024-10-24 08:38:56 -07:00
Ashwin Bharambe
05a8d47b98 Add a meta-reference-quantized-gpu distribution 2024-10-23 21:45:50 -07:00
Xi Yan
f5dcc03742 use pytorch/pytorch as base 2024-10-23 20:22:00 -07:00
Xi Yan
0cec86453b
Fix issue w/ routing_table api getting added when router api is not specified (#298)
* fix issue w/ enforcing api

* cleanup

* inference only yaml
2024-10-23 15:27:22 -07:00
Dinesh Yeduguru
21f2e9adf5
don't set num_predict for all providers (#294) 2024-10-23 11:44:04 -07:00
Ashwin Bharambe
ffb561070d
Support structured output for Together (#289) 2024-10-22 22:36:38 -07:00
Sarthak Deshpande
2e5e46d896
Added tests for persistence (#274) 2024-10-22 19:41:46 -07:00
Xi Yan
821810657f
[Evals API][2/n] datasets / datasetio meta-reference implementation (#288)
* skeleton dataset / datasetio

* dataset datasetio

* config

* address comments

* delete dataset_utils

* address comments

* naming fix
2024-10-22 16:12:16 -07:00
Sarthak Deshpande
8a01b9e40c
Added implementations for get_agents_session, delete_agents_session and delete_agents (#267) 2024-10-22 13:50:43 -07:00
Suraj Subramanian
b81a3bd46a
Fix import conflict for SamplingParams (#285)
A conflict between llama_models.llama3.api.datatypes.SamplingParams and vllm.sampling_params.SamplingParams caused errors while processing vLLM engine requests.
2024-10-22 12:56:00 -07:00
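A standard remedy for this kind of collision is to alias the imports at the call site. A minimal sketch; the alias names and the conversion helper are illustrative, not the code from the PR:

```
from llama_models.llama3.api.datatypes import SamplingParams as LlamaSamplingParams
from vllm.sampling_params import SamplingParams as VllmSamplingParams


def to_vllm_params(params: LlamaSamplingParams) -> VllmSamplingParams:
    # Hypothetical helper: translate llama-stack sampling options into
    # vLLM's type before submitting an engine request.
    return VllmSamplingParams(
        temperature=params.temperature,
        top_p=params.top_p,
        max_tokens=params.max_tokens,
    )
```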
Ashwin Bharambe
c06718fbd5
Add support for Structured Output / Guided decoding (#281)
Added support for structured output in the API, with a reference implementation for meta-reference.

A few notes:

* Two formats are specified in the API: JSON schema and EBNF-based grammar
* The implementation only supports JSON for now

We use lm-format-enforcer to provide the implementation right now, but this may change, especially because BNF grammars aren't supported by that library.
Fireworks has support for structured output, and Together has limited support for it too. Subsequent PRs will add these changes. We would like all our inference providers to support structured output for Llama models, since it is an extremely important and highly sought-after capability for developers.
2024-10-22 12:53:34 -07:00
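For a sense of how lm-format-enforcer constrains decoding to a JSON schema, here is a minimal sketch using its Hugging Face transformers integration; the model name and schema are illustrative assumptions, and this is not the meta-reference implementation itself:

```
from transformers import AutoModelForCausalLM, AutoTokenizer
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import (
    build_transformers_prefix_allowed_tokens_fn,
)

# Illustrative model; any causal LM supported by transformers works.
model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The schema the output must conform to (an arbitrary example).
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}

# The parser masks out tokens that would break schema conformance
# at each decoding step.
prefix_fn = build_transformers_prefix_allowed_tokens_fn(
    tokenizer, JsonSchemaParser(schema)
)

inputs = tokenizer("Return a JSON object with a name field: ", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, prefix_allowed_tokens_fn=prefix_fn)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```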
Anush
4c3d33e6f4
feat: Qdrant Vector index support (#221)
This PR adds support for Qdrant (https://qdrant.tech/) as a vector memory provider.

I've unit-tested the methods to confirm that they work as intended.

To run Qdrant:

```
docker run -p 6333:6333 qdrant/qdrant
```
2024-10-22 12:50:19 -07:00
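Once the container is up, the qdrant-client package can exercise it directly. A minimal sketch; the collection name and vector size are arbitrary choices for illustration, not values from the PR:

```
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# Create a collection sized for whatever embedding model you use
# (384 here is just an example dimension).
client.create_collection(
    collection_name="memory_bank",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

# Insert a document embedding along with its payload.
client.upsert(
    collection_name="memory_bank",
    points=[PointStruct(id=1, vector=[0.1] * 384, payload={"text": "hello"})],
)

# Nearest-neighbor query against the stored vectors.
hits = client.search(
    collection_name="memory_bank", query_vector=[0.1] * 384, limit=3
)
print(hits)
```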
Suraj Subramanian
668a495aba
Add REST api example for chat_completion (#286) 2024-10-22 10:35:20 -07:00
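As a rough illustration of what such a request could look like from Python; the host, port, endpoint path, and model name below are assumptions, not details taken from the commit:

```
# Hedged sketch of a chat_completion request against a locally running
# llama-stack server; the URL, port, and model name are assumptions.
import requests

response = requests.post(
    "http://localhost:5000/inference/chat_completion",
    json={
        "model": "Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(response.json())
```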
Xi Yan
e45f121c77
[Evals API] [1/n] Initial API (#287)
* type system api

* datasets api

* fix

* datasetio api

* kill reward scoring

* scoring functions + evals

* move jobs, fix errors
2024-10-22 09:31:19 -07:00
Xi Yan
b279d3bc58
Update README.md 2024-10-22 08:01:33 -07:00
Dinesh Yeduguru
1d241bf3fe
add completion() for ollama (#280) 2024-10-21 22:26:33 -07:00
raghotham
e2a5a2e10d
first version of readthedocs (#278) 2024-10-22 10:15:58 +05:30
Xi Yan
dbb5ce43fc Bump version to 0.0.43 2024-10-21 19:10:01 -07:00
Xi Yan
a2ff74a686 telemetry WARNING->WARN fix 2024-10-21 18:52:48 -07:00
Xi Yan
b1451afbc8
Update README.md 2024-10-21 18:21:30 -07:00
Xi Yan
4d2bd2d39e
add more distro templates (#279)
* verify dockers

* together distro verified

* readme

* fireworks distro

* fireworks compose up

* fireworks verified
2024-10-21 18:15:08 -07:00
Xi Yan
cf27d19dd5 fix sse_generator async 2024-10-21 14:03:42 -07:00
Ashwin Bharambe
1944405dca
Update new_api_provider.md 2024-10-21 14:02:51 -07:00
Ashwin Bharambe
606c48309e Small updates to encourage integration testing 2024-10-21 13:52:33 -07:00
Xi Yan
cb203b14b4 update README.md 2024-10-21 13:51:39 -07:00
Xi Yan
3a7884345a
Update new_api_provider.md 2024-10-21 13:41:56 -07:00
Xi Yan
25b37c9ff7
Update new_api_provider.md 2024-10-21 13:41:46 -07:00
Xi Yan
af75618348 remove distribution/templates 2024-10-21 13:23:58 -07:00
Xi Yan
23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made a change that was slightly gratuitous: making chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed callers within llama-stack to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: added a completion() implementation for the meta-reference provider. Technically this should have been a separate PR :)

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.

* Create .readthedocs.yaml

Trying out readthedocs

* Update event_logger.py (#275)

Fixes a spelling error.

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
nehal-a2z
c995219731
Update event_logger.py (#275)
Fixes a spelling error.
2024-10-21 10:46:53 -07:00
raghotham
cae5b0708b
Create .readthedocs.yaml
Trying out readthedocs
2024-10-21 11:48:19 +05:30
Yuan Tang
a27a2cd2af
Add vLLM inference provider for OpenAI compatible vLLM server (#178)
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
2024-10-20 18:43:25 -07:00
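Because the server speaks the OpenAI API, any OpenAI-compatible client can talk to it. A minimal sketch; the base URL and model name are assumptions for illustration:

```
# Query an OpenAI-compatible vLLM server with the openai client.
# The base_url and model name are assumptions, not from the PR.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```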
Ashwin Bharambe
59c43736e8 update ollama for llama-guard3 2024-10-19 17:26:18 -07:00
Ashwin Bharambe
8cfbb9d38b Improve an important error message 2024-10-19 17:19:54 -07:00
Ashwin Bharambe
2089427d60
Make all methods async def again; add completion() for meta-reference (#270)
PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made a change that was slightly gratuitous: making chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed callers within llama-stack to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: added a completion() implementation for the meta-reference provider. Technically this should have been a separate PR :)
2024-10-18 20:50:59 -07:00
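The distinction matters because an `async def` method that returns an async generator must itself be awaited before iteration. A self-contained sketch of the call shape, with illustrative names:

```
import asyncio
from typing import AsyncIterator


async def chat_completion(params: dict) -> AsyncIterator[str]:
    # An `async def` returning an async generator: callers must first
    # await the coroutine, then iterate the generator it produces.
    async def _stream() -> AsyncIterator[str]:
        for token in ("hello", " ", "world"):
            yield token

    return _stream()


async def main() -> None:
    # Note the `await` before `async for` -- this mirrors the call
    # shape described in the commit message above.
    async for chunk in await chat_completion({}):
        print(chunk, end="")


asyncio.run(main())
```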
Ashwin Bharambe
95a96afe34 Small rename 2024-10-18 14:41:38 -07:00
Ashwin Bharambe
71a905e93f Allow overriding checkpoint_dir via config 2024-10-18 14:28:06 -07:00
Ashwin Bharambe
33afd34e6f
Add an option to not use elastic agents for meta-reference inference (#269) 2024-10-18 12:51:10 -07:00
Xi Yan
be3c5c034d
[bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)
* fix case where memory bank is registered without provider_id

* memory test

* agents unit test
2024-10-17 17:28:17 -07:00
Ashwin Bharambe
9fcf5d58e0 Allow overriding MODEL_IDS for inference test 2024-10-17 10:03:27 -07:00
Xi Yan
02be26098a getting started 2024-10-16 23:56:21 -07:00
Xi Yan
cf9e5b76b2
Update getting_started.md 2024-10-16 23:52:29 -07:00
Xi Yan
7cc47da8f2
Update getting_started.md 2024-10-16 23:50:31 -07:00
Xi Yan
d787d1e84f
config templates restructure, docs (#262)
* wip

* config templates

* readmes
2024-10-16 23:25:10 -07:00
Tam
a07dfffbbf
initial changes (#261)
Update the parsing logic for comma-separated lists and the download function
2024-10-16 23:15:59 -07:00
ATH
319a6b5f83
Update getting_started.md (#260) 2024-10-16 18:05:36 -07:00
Xi Yan
c4d5d6bb91
Docker compose scripts for remote adapters (#241)
* tgi docker compose

* path

* wait for tgi server to start before starting server

* update provider-id

* move scripts to distribution/ folder

* add readme

* readme
2024-10-15 16:32:53 -07:00
Matthieu FRONTON
770647dede
Fix broken rendering in Google Colab (#247) 2024-10-15 15:41:49 -07:00