* Added Hadamard transform for SpinQuant
* Changed from config to model_args
* Added an assertion for model args
* Use enum.value to check against str
* pre-commit
---------
Co-authored-by: Sachin Mehta <sacmehta@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
A conflict between llama_models.llama3.api.datatypes.SamplingParams and vllm.sampling_params.SamplingParams results in errors while processing vLLM engine requests.
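One way to keep both types usable in the provider is to alias them at import time; the conversion helper below is only an illustrative sketch (field names should be checked against both class definitions), not the actual fix.
```
# Sketch: alias the two SamplingParams classes so they can coexist.
from llama_models.llama3.api.datatypes import SamplingParams as LlamaSamplingParams
from vllm.sampling_params import SamplingParams as VLLMSamplingParams


def to_vllm_sampling_params(params: LlamaSamplingParams) -> VLLMSamplingParams:
    # Illustrative mapping only; verify the field names against both classes.
    return VLLMSamplingParams(
        temperature=params.temperature,
        top_p=params.top_p,
        max_tokens=params.max_tokens,
    )
```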
Added support for structured output in the API, along with a reference implementation for meta-reference.
A few notes:
* Two formats are specified in the API: JSON schema and EBNF-based grammar
* The implementation only supports JSON for now
We use lm-format-enforcer for the implementation right now, but this may change, especially because BNF grammars aren't supported by that library.
Fireworks has support for structured output, and Together has limited support for it too. Subsequent PRs will add these changes. We would like all our inference providers to support structured output for Llama models, since it is extremely important and highly sought after by developers.
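As a rough illustration of the two formats (the field names here are assumptions, not the exact API shapes):
```
# Hypothetical request payloads sketching the two formats described above;
# the exact field names in the API may differ.
json_schema_format = {
    "type": "json_schema",
    "json_schema": {
        "type": "object",
        "properties": {"answer": {"type": "string"}, "confidence": {"type": "number"}},
        "required": ["answer"],
    },
}

grammar_format = {
    "type": "grammar",
    # EBNF-based grammar; not yet supported by the reference implementation
    "bnf": 'root ::= "yes" | "no"',
}

print(json_schema_format)
print(grammar_format)
```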
This PR adds support for Qdrant (https://qdrant.tech/) to be used as a vector memory provider.
I've unit-tested the methods to confirm that they work as intended.
To run Qdrant:
```
docker run -p 6333:6333 qdrant/qdrant
```
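A minimal smoke test against that local instance, using the qdrant-client package (collection name and vector size are arbitrary; this is not the provider code itself):
```
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

# Connect to the container started above
client = QdrantClient(url="http://localhost:6333")
client.recreate_collection(
    collection_name="smoke_test",
    vectors_config=VectorParams(size=4, distance=Distance.COSINE),
)
client.upsert(
    collection_name="smoke_test",
    points=[PointStruct(id=1, vector=[0.1, 0.2, 0.3, 0.4], payload={"doc": "hello"})],
)
print(client.search(collection_name="smoke_test", query_vector=[0.1, 0.2, 0.3, 0.4], limit=1))
```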
* docker compose ollama
* comment
* update compose file
* readme for distributions
* readme
* move distribution folders
* move distribution/templates to distributions/
* rename
* kill distribution/templates
* readme
* readme
* build/developer cookbook/new api provider
* developer cookbook
* readme
* readme
* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)
* fix case where memory bank is registered without provider_id
* memory test
* agents unit test
* Add an option to not use elastic agents for meta-reference inference (#269)
* Allow overriding checkpoint_dir via config
* Small rename
* Make all methods `async def` again; add completion() for meta-reference (#270)
PR #201 had made several changes while trying to fix issues with getting the stream=False branches of the inference and agents APIs working. As part of this, it made a slightly gratuitous change: namely, making chat_completion() and its brethren "def" instead of "async def".
The rationale was that this allowed callers (within llama-stack) to use it as:
```
async for chunk in api.chat_completion(params)
```
However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:
```
async for chunk in await api.chat_completion(params)
```
Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)
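For context, a minimal, self-contained sketch (not the real API) of why the double `await`/`async for` shape appears: an `async def` method that returns an async generator must itself be awaited before iterating.
```
import asyncio
from typing import AsyncIterator


# Toy stand-in for the inference API, not the real implementation.
async def chat_completion(params: dict) -> AsyncIterator[str]:
    async def _stream() -> AsyncIterator[str]:
        for chunk in ("Hello", ", ", "world"):
            yield chunk

    return _stream()


async def main() -> None:
    # chat_completion is `async def`, so the call is awaited first;
    # the awaited result is the stream that `async for` iterates over.
    async for chunk in await chat_completion({"stream": True}):
        print(chunk, end="")
    print()


asyncio.run(main())
```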
* Improve an important error message
* update ollama for llama-guard3
* Add vLLM inference provider for OpenAI compatible vLLM server (#178)
This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
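For context, any OpenAI-compatible client can talk to such a server; a minimal sketch with the openai package (the base URL, port, and model name below are assumptions):
```
from openai import OpenAI

# Point the client at a locally running vLLM OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```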
* Create .readthedocs.yaml
Trying out readthedocs
* Update event_logger.py (#275)
spelling error
* vllm
* build templates
* delete templates
* tmp add back build to avoid merge conflicts
* vllm
* vllm
---------
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
* tgi docker compose
* path
* wait for tgi server to start before starting server
* update provider-id
* move scripts to distribution/ folder
* add readme
* readme
I only tested with "on-the-fly" bf16 -> fp8 conversion, not the "load from fp8" codepath.
YAML I tested with:
```
providers:
  - provider_id: quantized
    provider_type: meta-reference-quantized
    config:
      model: Llama3.1-8B-Instruct
      quantization:
        type: fp8
```