Commit graph

634 commits

Author SHA1 Message Date
Anush
4c3d33e6f4
feat: Qdrant Vector index support (#221)
This PR adds support for Qdrant (https://qdrant.tech/) to be used as a vector memory.

I've unit-tested the methods to confirm that they work as intended.

To run Qdrant:

```
docker run -p 6333:6333 qdrant/qdrant
```
2024-10-22 12:50:19 -07:00
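For context on how a Qdrant-backed vector memory might be exercised, here is a minimal sketch using the qdrant-client Python package against the container started above. The collection name, vector size, and payload fields are illustrative assumptions, not the provider's actual schema.

```
# Minimal sketch: exercising a local Qdrant instance with qdrant-client.
# Collection name, vector size, and payload fields are illustrative only.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(url="http://localhost:6333")

# Create a small collection to hold document embeddings.
client.recreate_collection(
    collection_name="memory_bank_demo",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)

# Insert a couple of vectors with payloads (the "documents").
client.upsert(
    collection_name="memory_bank_demo",
    points=[
        PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"text": "hello"}),
        PointStruct(id=2, vector=[0.4, 0.3, 0.2, 0.1], payload={"text": "world"}),
    ],
)

# Query for the nearest neighbors of a query embedding.
hits = client.search(
    collection_name="memory_bank_demo",
    query_vector=[0.1, 0.2, 0.3, 0.4],
    limit=2,
)
for hit in hits:
    print(hit.id, hit.score, hit.payload)
```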
Suraj Subramanian
668a495aba
Add REST api example for chat_completion (#286) 2024-10-22 10:35:20 -07:00
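A rough sketch of what such a REST call could look like against a locally running stack server; the host, port, route, and payload fields here are assumptions, so refer to the example added in #286 for the exact shape.

```
import requests

# Hypothetical endpoint and payload; see the example added in #286 for the real shape.
response = requests.post(
    "http://localhost:5000/inference/chat_completion",
    json={
        "model": "Llama3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": False,
    },
)
print(response.json())
```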
Xi Yan
e45f121c77
[Evals API] [1/n] Initial API (#287)
* type system api

* datasets api

* fix

* datasetio api

* kill reward scoring

* scoring functions + evals

* move jobs, fix errors
2024-10-22 09:31:19 -07:00
Xi Yan
b279d3bc58
Update README.md 2024-10-22 08:01:33 -07:00
Dinesh Yeduguru
1d241bf3fe
add completion() for ollama (#280) 2024-10-21 22:26:33 -07:00
raghotham
e2a5a2e10d
first version of readthedocs (#278) 2024-10-22 10:15:58 +05:30
Xi Yan
dbb5ce43fc Bump version to 0.0.43 2024-10-21 19:10:01 -07:00
Xi Yan
a2ff74a686 telemetry WARNING->WARN fix 2024-10-21 18:52:48 -07:00
Xi Yan
b1451afbc8
Update README.md 2024-10-21 18:21:30 -07:00
Xi Yan
4d2bd2d39e
add more distro templates (#279)
* verify dockers

* together distro verified

* readme

* fireworks distro

* fireworks compose up

* fireworks verified
2024-10-21 18:15:08 -07:00
Xi Yan
cf27d19dd5 fix sse_generator async 2024-10-21 14:03:42 -07:00
Ashwin Bharambe
1944405dca
Update new_api_provider.md 2024-10-21 14:02:51 -07:00
Ashwin Bharambe
606c48309e Small updates to encourage integration testing 2024-10-21 13:52:33 -07:00
Xi Yan
cb203b14b4 update README.md 2024-10-21 13:51:39 -07:00
Xi Yan
3a7884345a
Update new_api_provider.md 2024-10-21 13:41:56 -07:00
Xi Yan
25b37c9ff7
Update new_api_provider.md 2024-10-21 13:41:46 -07:00
Xi Yan
af75618348 remove distribution/templates 2024-10-21 13:23:58 -07:00
Xi Yan
23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 had made several changes while trying to fix issues with getting the stream=False branches of the inference and agents APIs working. As part of this, it made a change that was slightly gratuitous: making chat_completion() and its brethren plain "def" instead of "async def".

The rationale was that this allowed the user of this API (within llama-stack) to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.

* Create .readthedocs.yaml

Trying out readthedocs

* Update event_logger.py (#275)

spelling error

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
nehal-a2z
c995219731
Update event_logger.py (#275)
spelling error
2024-10-21 10:46:53 -07:00
raghotham
cae5b0708b
Create .readthedocs.yaml
Trying out readthedocs
2024-10-21 11:48:19 +05:30
Yuan Tang
a27a2cd2af
Add vLLM inference provider for OpenAI compatible vLLM server (#178)
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
2024-10-20 18:43:25 -07:00
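As a rough idea of how an OpenAI-compatible vLLM server might be wired into a run config (mirroring the providers YAML shown in the fp8 commit further down this log), something like the following; the provider_type identifier and config field names are assumptions, not the adapter's verified schema.

```
providers:
  - provider_id: vllm
    provider_type: remote::vllm     # assumed identifier for the vLLM adapter
    config:
      url: http://localhost:8000    # base URL of the OpenAI-compatible vLLM server
```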
Ashwin Bharambe
59c43736e8 update ollama for llama-guard3 2024-10-19 17:26:18 -07:00
Ashwin Bharambe
8cfbb9d38b Improve an important error message 2024-10-19 17:19:54 -07:00
Ashwin Bharambe
2089427d60
Make all methods async def again; add completion() for meta-reference (#270)
PR #201 had made several changes while trying to fix issues with getting the stream=False branches of the inference and agents APIs working. As part of this, it made a change that was slightly gratuitous: making chat_completion() and its brethren plain "def" instead of "async def".

The rationale was that this allowed the user of this API (within llama-stack) to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)
2024-10-18 20:50:59 -07:00
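To make the calling convention from the commit above concrete, here is a small self-contained sketch (not the actual provider code) of why the extra await is needed: an async method that returns an async generator must itself be awaited before the result can be iterated with async for.

```
import asyncio
from typing import AsyncIterator


class Api:
    # An `async def` method that returns an async generator of chunks.
    async def chat_completion(self, params: str) -> AsyncIterator[str]:
        async def gen() -> AsyncIterator[str]:
            for token in params.split():
                yield token

        return gen()


async def main() -> None:
    api = Api()
    # The method is async, so it must be awaited to obtain the generator,
    # and the generator is then consumed with `async for`.
    async for chunk in await api.chat_completion("hello streaming world"):
        print(chunk)


asyncio.run(main())
```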
Ashwin Bharambe
95a96afe34 Small rename 2024-10-18 14:41:38 -07:00
Ashwin Bharambe
71a905e93f Allow overriding checkpoint_dir via config 2024-10-18 14:28:06 -07:00
Ashwin Bharambe
33afd34e6f
Add an option to not use elastic agents for meta-reference inference (#269) 2024-10-18 12:51:10 -07:00
Xi Yan
be3c5c034d
[bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)
* fix case where memory bank is registered without provider_id

* memory test

* agents unit test
2024-10-17 17:28:17 -07:00
Ashwin Bharambe
9fcf5d58e0 Allow overriding MODEL_IDS for inference test 2024-10-17 10:03:27 -07:00
Xi Yan
02be26098a getting started 2024-10-16 23:56:21 -07:00
Xi Yan
cf9e5b76b2
Update getting_started.md 2024-10-16 23:52:29 -07:00
Xi Yan
7cc47da8f2
Update getting_started.md 2024-10-16 23:50:31 -07:00
Xi Yan
d787d1e84f
config templates restructure, docs (#262)
* wip

* config templates

* readmes
2024-10-16 23:25:10 -07:00
Tam
a07dfffbbf
initial changes (#261)
Update the parsing logic for comma-separated lists and the download function
2024-10-16 23:15:59 -07:00
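A hypothetical sketch of the kind of comma-separated list parsing this commit touches; the function name and behavior are illustrative, not the actual implementation.

```
def parse_model_list(value: str) -> list[str]:
    # Split a comma-separated argument like "model-a,model-b, model-c"
    # into a clean list, dropping empty entries and surrounding whitespace.
    return [item.strip() for item in value.split(",") if item.strip()]


assert parse_model_list("model-a, model-b,,model-c") == [
    "model-a",
    "model-b",
    "model-c",
]
```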
ATH
319a6b5f83
Update getting_started.md (#260) 2024-10-16 18:05:36 -07:00
Xi Yan
c4d5d6bb91
Docker compose scripts for remote adapters (#241)
* tgi docker compose

* path

* wait for tgi server to start before starting server

* update provider-id

* move scripts to distribution/ folder

* add readme

* readme
2024-10-15 16:32:53 -07:00
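An illustrative sketch of the "wait for tgi server to start" pattern from this commit, using a standard compose healthcheck plus a depends_on condition; the image names, ports, and health endpoint are assumptions rather than the repo's actual compose files.

```
services:
  tgi:
    image: ghcr.io/huggingface/text-generation-inference:latest
    ports:
      - "8080:80"
    healthcheck:
      # Consider TGI ready once its health endpoint responds.
      test: ["CMD", "curl", "-f", "http://localhost:80/health"]
      interval: 5s
      retries: 20
  llamastack:
    image: llamastack/distribution-tgi   # placeholder image name
    depends_on:
      tgi:
        condition: service_healthy       # wait for TGI before starting the server
    ports:
      - "5000:5000"
```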
Matthieu FRONTON
770647dede
Fix broken rendering in Google Colab (#247) 2024-10-15 15:41:49 -07:00
Ashwin Bharambe
09b793c4d6 Fix fp8 implementation which had bit-rotted a bit
I only tested with "on-the-fly" bf16 -> fp8 conversion, not the "load
from fp8" codepath.

YAML I tested with:

```
providers:
  - provider_id: quantized
    provider_type: meta-reference-quantized
    config:
      model: Llama3.1-8B-Instruct
      quantization:
        type: fp8
```
2024-10-15 13:57:01 -07:00
Yuan Tang
80ada04f76
Remove request arg from chat completion response processing (#240)
Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2024-10-15 13:03:17 -07:00
Xi Yan
209cd3d35e Bump version to 0.0.42 2024-10-14 11:13:04 -07:00
Yuan Tang
a2b87ed0cb
Switch to pre-commit/action (#239) 2024-10-11 11:09:11 -07:00
Yuan Tang
05282d1234
Enable pre-commit on main branch (#237) 2024-10-11 10:03:59 -07:00
Yuan Tang
2128e61da2
Fix incorrect completion() signature for Databricks provider (#236) 2024-10-11 08:47:57 -07:00
Dalton Flanagan
9fbe8852aa
Add Swift Package Index badge 2024-10-10 23:39:25 -04:00
Xi Yan
ca29980c6b fix agents context retriever 2024-10-10 20:17:29 -07:00
Ashwin Bharambe
1ff0476002 Split off meta-reference-quantized provider 2024-10-10 16:03:19 -07:00
Xi Yan
7ff5800dea generate openapi 2024-10-10 15:30:34 -07:00
Dalton Flanagan
a3e65d58a9
Add logo 2024-10-10 15:04:21 -04:00
Russell Bryant
eba9d1ea14
ci: Run pre-commit checks in CI (#176)
Run the pre-commit checks in a GitHub workflow to validate that a PR
or a direct push to the repo does not introduce new errors.
2024-10-10 11:21:59 -07:00
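A generic sketch of such a workflow (not necessarily the exact file added by #176), using the pre-commit/action referenced a few entries above; trigger branches and Python version are assumptions.

```
name: pre-commit

on:
  pull_request:
  push:
    branches: [main]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      # Runs every hook defined in the repo's .pre-commit-config.yaml.
      - uses: pre-commit/action@v3.0.1
```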
Ashwin Bharambe
89d24a07f0 Bump version to 0.0.41 2024-10-10 10:27:03 -07:00