Commit graph

172 commits

Author SHA1 Message Date
Ashwin Bharambe
b7d2b83d55 Allow passing provider_registry to resolve_impls() 2024-10-28 11:58:16 -07:00
Dalton Flanagan
44c05c6e7d add vision instruct models for fireworks 2024-10-27 17:54:54 +00:00
Dinesh Yeduguru
9b85d9a841
completion() for fireworks (#329) 2024-10-25 16:12:10 -07:00
Dinesh Yeduguru
7ec79f3b9d
completion() for together (#324)
* completion() for together

* test fixes

* fix client building
2024-10-25 14:21:12 -07:00
Xi Yan
abdf7cddf3
[Evals API][4/n] evals with generation meta-reference impl (#303)
* wip

* dataset validation

* test_scoring

* cleanup

* clean up test

* comments

* error checking

* dataset client

* test client

* datasetio client

* clean up

* basic scoring function works

* scorer wip

* equality scorer

* score batch impl

* score batch

* update scoring test

* refactor

* validate scorer input

* address comments

* evals with generation

* add all rows scores to ScoringResult

* minor typing

* bugfix

* scoring function def rename

* rebase name

* refactor

* address comments

* Update iOS inference instructions for new quantization

* Small updates to quantization config

* Fix score threshold in faiss

* Bump version to 0.0.45

* Handle both ipv6 and ipv4 interfaces together

* update manifest for build templates

* Update getting_started.md

* chatcompletion & completion input type validation

* inclusion->subsetof

* error checking

* scoring_function -> scoring_fn rename, scorer -> scoring_fn rename

* address comments

* [Evals API][5/n] fixes to generate openapi spec (#323)

* generate openapi

* typing comment, dataset -> dataset_id

* remove custom type

* sample eval run.yaml

---------

Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-10-25 13:12:39 -07:00
Sachin Mehta
c05fbf14b3
Added hadamard transform for spinquant (#326)
* Added hadamard transform for spinquant

* Changed from config to model_args

* Added an assertion for model args

* Use enum.value to check against str

* pre-commit

---------

Co-authored-by: Sachin Mehta <sacmehta@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-10-25 12:58:48 -07:00
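
For background, SpinQuant-style rotations rely on a Hadamard transform; below is a generic fast Walsh-Hadamard sketch in PyTorch (a minimal illustration, not the transform code added in this commit):

```
import torch

def hadamard_transform(x: torch.Tensor) -> torch.Tensor:
    # Fast Walsh-Hadamard transform over the last dimension, which must be
    # a power of two; normalized so the transform is orthonormal.
    n = x.shape[-1]
    assert n & (n - 1) == 0, "last dim must be a power of two"
    orig_shape = x.shape
    h = 1
    while h < n:
        x = x.reshape(-1, n // (2 * h), 2, h)
        a = x[:, :, 0, :] + x[:, :, 1, :]
        b = x[:, :, 0, :] - x[:, :, 1, :]
        x = torch.stack((a, b), dim=2).reshape(-1, n)
        h *= 2
    return (x / n**0.5).reshape(orig_shape)
```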
Xi Yan
07f9bf723f
fix broken --list-templates by adding build.yaml files for packaging (#327)
* add build files to templates

* fix templates

* manifest

* symlink

* symlink

* precommit

* change everything to docker build.yaml

* remove image_type in templates

* fix build from templates CLI

* fix readmes
2024-10-25 12:51:22 -07:00
Ashwin Bharambe
afae4e3d8e Update docker build flow a little 2024-10-25 10:06:21 -07:00
Ashwin Bharambe
5bed6c276c Move function around 2024-10-25 09:18:22 -07:00
Ashwin Bharambe
70d59b0f5d Make vllm inference better
Tests still don't pass completely (some hang), so there may be some
threading issues.
2024-10-24 22:52:47 -07:00
Xi Yan
cb43caa2c3 start_container.sh prefix llamastack->distribution name 2024-10-24 21:29:17 -07:00
Sarthak Deshpande
df141b6ef3
Fix for get_agents_session (#300) 2024-10-24 18:36:27 -07:00
Dinesh Yeduguru
3e1c3fdb3f
completion() for tgi (#295) 2024-10-24 16:02:41 -07:00
Xi Yan
cb84034567
[Evals API][3/n] scoring_functions / scoring meta-reference implementations (#296)
* wip

* dataset validation

* test_scoring

* cleanup

* clean up test

* comments

* error checking

* dataset client

* test client

* datasetio client

* clean up

* basic scoring function works

* scorer wip

* equality scorer

* score batch impl

* score batch

* update scoring test

* refactor

* validate scorer input

* address comments

* add all rows scores to ScoringResult

* bugfix

* scoring function def rename
2024-10-24 14:52:30 -07:00
Ashwin Bharambe
94728d6983 Handle both ipv6 and ipv4 interfaces together 2024-10-24 13:59:01 -07:00
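
A common way to serve both address families from one listener (a generic sketch; whether this commit takes exactly this approach is an assumption) is a dual-stack IPv6 socket with IPV6_V6ONLY disabled:

```
import socket

# Dual-stack listener: bound to "::" with IPV6_V6ONLY off, it accepts both
# IPv6 and (v4-mapped) IPv4 connections on most platforms.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
sock.bind(("::", 5000))
sock.listen()
```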
Ashwin Bharambe
205bcfdd4e Fix score threshold in faiss 2024-10-24 12:11:58 -07:00
Ashwin Bharambe
161aef0aae Small updates to quantization config 2024-10-24 12:08:56 -07:00
Dalton Flanagan
8eceebec98
Update iOS inference instructions for new quantization 2024-10-24 14:47:27 -04:00
Ashwin Bharambe
7afe51c84d
New quantized models (#301) 2024-10-24 08:38:56 -07:00
Ashwin Bharambe
05a8d47b98 Add a meta-reference-quantized-gpu distribution 2024-10-23 21:45:50 -07:00
Xi Yan
0cec86453b
Fix issue w/ routing_table api getting added when router api is not specified (#298)
* fix issue w/ enforcing api

* cleanup

* inference only yaml
2024-10-23 15:27:22 -07:00
Dinesh Yeduguru
21f2e9adf5
dont set num_predict for all providers (#294) 2024-10-23 11:44:04 -07:00
Ashwin Bharambe
ffb561070d
Support structured output for Together (#289) 2024-10-22 22:36:38 -07:00
Sarthak Deshpande
2e5e46d896
Added tests for persistence (#274) 2024-10-22 19:41:46 -07:00
Xi Yan
821810657f
[Evals API][2/n] datasets / datasetio meta-reference implementation (#288)
* skeleton dataset / datasetio

* dataset datasetio

* config

* address comments

* delete dataset_utils

* address comments

* naming fix
2024-10-22 16:12:16 -07:00
Sarthak Deshpande
8a01b9e40c
Added implementations for get_agents_session, delete_agents_session and delete_agents (#267) 2024-10-22 13:50:43 -07:00
Suraj Subramanian
b81a3bd46a
Fix import conflict for SamplingParams (#285)
A conflict between llama_models.llama3.api.datatypes.SamplingParams and vllm.sampling_params.SamplingParams results in errors while processing vLLM engine requests
2024-10-22 12:56:00 -07:00
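
A clash like this is usually resolved with an import alias; here is a minimal sketch (the conversion fields below are illustrative assumptions, not the actual fix in #285):

```
# Hypothetical sketch: alias one of the two clashing SamplingParams classes.
from llama_models.llama3.api.datatypes import SamplingParams
from vllm.sampling_params import SamplingParams as VLLMSamplingParams

def to_vllm_params(params: SamplingParams) -> VLLMSamplingParams:
    # Field mapping shown for illustration; the real fields may differ.
    return VLLMSamplingParams(
        temperature=params.temperature,
        top_p=params.top_p,
        max_tokens=params.max_tokens,
    )
```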
Ashwin Bharambe
c06718fbd5
Add support for Structured Output / Guided decoding (#281)
Added support for structured output in the API, with a reference implementation for the meta-reference provider.

A few notes:

* Two formats are specified in the API: JSON schema and EBNF-based grammar
* The implementation only supports JSON for now
We use lm-format-enforcer to provide the implementation right now, but may change this, especially because BNF grammars aren't supported by that library.
Fireworks has support for structured output, and Together has limited support for it too. Subsequent PRs will add these changes. We would like all our inference providers to support structured output for Llama models, since it is an extremely important and highly sought-after capability for developers.
2024-10-22 12:53:34 -07:00
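
As a rough illustration of the lm-format-enforcer approach mentioned above, here is a standalone sketch against a small Hugging Face stand-in model (not the provider wiring that actually landed):

```
# Sketch: constrain generation to a JSON schema with lm-format-enforcer.
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.transformers import (
    build_transformers_prefix_allowed_tokens_fn,
)
from transformers import AutoModelForCausalLM, AutoTokenizer

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in, not a llama model
model = AutoModelForCausalLM.from_pretrained("gpt2")
prefix_fn = build_transformers_prefix_allowed_tokens_fn(
    tokenizer, JsonSchemaParser(schema)
)
inputs = tokenizer("Reply in JSON: ", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50, prefix_allowed_tokens_fn=prefix_fn)
print(tokenizer.decode(out[0]))
```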
Anush
4c3d33e6f4
feat: Qdrant Vector index support (#221)
This PR adds support for Qdrant (https://qdrant.tech/) as a vector memory provider.

I've unit-tested the methods to confirm that they work as intended.

To run Qdrant:

```
docker run -p 6333:6333 qdrant/qdrant
```
2024-10-22 12:50:19 -07:00
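
Once the container is up, a quick smoke test with the official qdrant-client package (separate from the adapter added in this PR):

```
from qdrant_client import QdrantClient

# Connect to the local container and list collections as a sanity check.
client = QdrantClient(url="http://localhost:6333")
print(client.get_collections())
```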
Xi Yan
e45f121c77
[Evals API] [1/n] Initial API (#287)
* type system api

* datasets api

* fix

* datasetio api

* kill reward scoring

* scoring functions + evals

* move jobs, fix errors
2024-10-22 09:31:19 -07:00
Dinesh Yeduguru
1d241bf3fe
add completion() for ollama (#280) 2024-10-21 22:26:33 -07:00
Xi Yan
a2ff74a686 telemetry WARNING->WARN fix 2024-10-21 18:52:48 -07:00
Xi Yan
4d2bd2d39e
add more distro templates (#279)
* verify dockers

* together distro verified

* readme

* fireworks distro

* fireworks compose up

* fireworks verified
2024-10-21 18:15:08 -07:00
Xi Yan
cf27d19dd5 fix sse_generator async 2024-10-21 14:03:42 -07:00
Xi Yan
af75618348 remove distribution/templates 2024-10-21 13:23:58 -07:00
Xi Yan
23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made one change that was slightly gratuitous: making chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed the user (within llama-stack) to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.

* Create .readthedocs.yaml

Trying out readthedocs

* Update event_logger.py (#275)

spelling error

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
nehal-a2z
c995219731
Update event_logger.py (#275)
spelling error
2024-10-21 10:46:53 -07:00
Yuan Tang
a27a2cd2af
Add vLLM inference provider for OpenAI compatible vLLM server (#178)
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
2024-10-20 18:43:25 -07:00
Ashwin Bharambe
59c43736e8 update ollama for llama-guard3 2024-10-19 17:26:18 -07:00
Ashwin Bharambe
8cfbb9d38b Improve an important error message 2024-10-19 17:19:54 -07:00
Ashwin Bharambe
2089427d60
Make all methods async def again; add completion() for meta-reference (#270)
PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made one change that was slightly gratuitous: making chat_completion() and its brethren "def" instead of "async def".

The rationale was that this allowed the user (within llama-stack) to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)
2024-10-18 20:50:59 -07:00
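
The calling convention this revert restores, as a self-contained sketch (illustrative names, not the actual llama-stack signatures): the method is `async def` and returns an async iterator, so the call itself is awaited before iterating.

```
import asyncio
from typing import AsyncIterator

async def chat_completion(params: dict) -> AsyncIterator[str]:
    # `async def` method that returns an async generator rather than
    # yielding directly, so callers must await the call first.
    async def stream() -> AsyncIterator[str]:
        for chunk in ("Hel", "lo"):
            yield chunk
    return stream()

async def main() -> None:
    async for chunk in await chat_completion({}):
        print(chunk)

asyncio.run(main())
```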
Ashwin Bharambe
95a96afe34 Small rename 2024-10-18 14:41:38 -07:00
Ashwin Bharambe
71a905e93f Allow overriding checkpoint_dir via config 2024-10-18 14:28:06 -07:00
Ashwin Bharambe
33afd34e6f
Add an option to not use elastic agents for meta-reference inference (#269) 2024-10-18 12:51:10 -07:00
Xi Yan
be3c5c034d
[bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)
* fix case where memory bank is registered without provider_id

* memory test

* agents unit test
2024-10-17 17:28:17 -07:00
Ashwin Bharambe
9fcf5d58e0 Allow overriding MODEL_IDS for inference test 2024-10-17 10:03:27 -07:00
Xi Yan
d787d1e84f
config templates restructure, docs (#262)
* wip

* config templates

* readmes
2024-10-16 23:25:10 -07:00
Tam
a07dfffbbf
initial changes (#261)
Update the parsing logic for comma-separated lists and the download function
2024-10-16 23:15:59 -07:00
Xi Yan
c4d5d6bb91
Docker compose scripts for remote adapters (#241)
* tgi docker compose

* path

* wait for tgi server to start before starting server

* update provider-id

* move scripts to distribution/ folder

* add readme

* readme
2024-10-15 16:32:53 -07:00
Ashwin Bharambe
09b793c4d6 Fix fp8 implementation, which had bit-rotted a bit
I only tested with "on-the-fly" bf16 -> fp8 conversion, not the "load
from fp8" codepath.

YAML I tested with:

```
providers:
  - provider_id: quantized
    provider_type: meta-reference-quantized
    config:
      model: Llama3.1-8B-Instruct
      quantization:
        type: fp8
```
2024-10-15 13:57:01 -07:00
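
For intuition, a minimal PyTorch sketch of on-the-fly bf16 -> fp8 conversion with a single per-tensor scale (a simplification, not the repo's actual fp8 kernels):

```
import torch

# Quantize a bf16 weight to fp8 (e4m3) with one per-tensor scale.
w_bf16 = torch.randn(4096, 4096, dtype=torch.bfloat16)
scale = w_bf16.abs().amax().float() / torch.finfo(torch.float8_e4m3fn).max
w_fp8 = (w_bf16.float() / scale).to(torch.float8_e4m3fn)
# At inference time, dequantize approximately as:
w_approx = (w_fp8.to(torch.float32) * scale).to(torch.bfloat16)
```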