Commit graph

2123 commits

Author SHA1 Message Date
Ashwin Bharambe
0167953d2d Update OpenAPI generator for POST requests 2024-09-04 09:27:00 -07:00
Ashwin Bharambe
01d971bda6 Bump version to 0.0.12 2024-09-03 23:24:02 -07:00
Ashwin Bharambe
1380d78c19 Fixes to the llama stack configure script + inference adapters 2024-09-03 23:22:21 -07:00
Ashwin Bharambe
4869f2b983 Update fireworks and together entries as adapters 2024-09-03 22:56:52 -07:00
Ashwin Bharambe
f802d481d9 Bump version to 0.0.11 2024-09-03 22:41:29 -07:00
Ashwin Bharambe
7bc7785b0d
API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)
* add tools to chat completion request

* use templates for generating system prompts

* Moved ToolPromptFormat and jinja templates to llama_models.llama3.api

* <WIP> memory changes

- inlined AgenticSystemInstanceConfig so API feels more ergonomic
- renamed it to AgentConfig, AgentInstance -> Agent
- added a MemoryConfig and `memory` parameter
- added `attachments` to input and `output_attachments` to the response

- some naming changes

* InterleavedTextAttachment -> InterleavedTextMedia, introduce memory tool

* flesh out memory banks API

* agentic loop has a RAG implementation

* faiss provider implementation

* memory client works

* re-work tool definitions, fix FastAPI issues, fix tool regressions

* fix agentic_system utils

* basic RAG seems to work

* small bug fixes for inline attachments

* Refactor custom tool execution utilities

* Bug fix, show memory retrieval steps in EventLogger

* No need for api_key for Remote providers

* add special unicode character ↵ to showcase newlines in model prompt templates

* remove api.endpoints imports

* combine datatypes.py and endpoints.py into api.py

* Attachment / add TTL api

* split batch_inference from inference

* minor import fixes

* use a single impl for ChatFormat.decode_assistant_mesage

* use interleaved_text_media_as_str() utility

* Fix api.datatypes imports

* Add blobfile for tiktoken

* Add ToolPromptFormat to ChatFormat.encode_message so that tools are encoded properly

* templates take optional --format={json,function_tag}

* Rag Updates

* Add `api build` subcommand -- WIP

* fix

* build + run image seems to work

* <WIP> adapters

* bunch more work to make adapters work

* api build works for conda now

* ollama remote adapter works

* Several smaller fixes to make adapters work

Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight

* llama distribution -> llama stack + containers (WIP)

* All the new CLI for api + stack work

* Make Fireworks and Together into the Adapter format

* Some quick fixes to the CLI behavior to make it consistent

* Updated README phew

* Update cli_reference.md

* llama_toolchain/distribution -> llama_toolchain/core

* Add termcolor

* update paths

* Add a log just for consistency

* chmod +x scripts

* Fix api dependencies not getting added to configuration

* missing import lol

* Delete utils.py; move to agentic system

* Support downloading of URLs for attachments for code interpreter

* Simplify and generalize `llama api build` yay

* Update `llama stack configure` to be very simple also

* Fix stack start

* Allow building an "adhoc" distribution

* Remove `llama api []` subcommands

* Fixes to llama stack commands and update docs

* Update documentation again and add error messages to llama stack start

* llama stack start -> llama stack run

* Change name of build for less confusion

* Add pyopenapi fork to the repository, update RFC assets

* Remove conflicting annotation

* Added a "--raw" option for model template printing

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
2024-09-03 22:39:39 -07:00
Dalton Flanagan
35093c0b6f
Add patch for SSE event endpoint responses (#50) 2024-09-03 23:40:31 -04:00
Dalton Flanagan
0af81776c7 fix for incomplete SSE type generation 2024-09-03 13:11:40 -04:00
raghotham
70d557f793
Update LICENSE (#47)
* Update LICENSE

* Update LICENSE
2024-08-29 07:39:50 -07:00
Hassan El Mghari
f2e18826b6
Together AI basic integration (#43)
* working!

* accounting for eos
2024-08-28 16:07:13 -07:00
Ashwin Bharambe
a8b9541f19 Bump version to 0.0.10 2024-08-27 04:19:27 -07:00
raghotham
117b95b38c
Update RFC-0001-llama-stack.md
Added link to sequence diagram from agentic system
2024-08-26 20:56:09 -07:00
Ashwin Bharambe
870cd7bb8b Add blobfile for tiktoken 2024-08-26 14:50:53 -07:00
Yufei (Benny) Chen
40ca8e21bd
Fireworks basic integration (#39) 2024-08-25 08:05:52 -07:00
Ashwin Bharambe
f812648aca Bump version to 0.0.9 2024-08-24 09:45:01 -07:00
Ashwin Bharambe
c1a82ea8cd Add a script for installing a pip wheel from a presigned url 2024-08-23 12:18:51 -07:00
varunfb
9777639a1c
Updated URLs and addressed feedback (#37) 2024-08-22 13:34:46 -07:00
varunfb
4930616ec7
Updated cli instructions with additonal details for each subcommands (#36) 2024-08-22 12:20:47 -07:00
sisminnmaw
49f2bbbaeb
fixed bug in download not enough disk space condition (#35)
bug:
used undeclared variable in download.py.
when the disk space not enough NameError occured.
2024-08-22 08:10:47 -07:00
Jeff Tang
b4af8c0e00
update cli ref doc: llama model template names related; separation of copy-and-pastable commands with their outputs (#34) 2024-08-21 20:41:30 -07:00
Ashwin Bharambe
863bb915e1 Remove quantization_config from the APIs for now 2024-08-21 14:17:50 -07:00
Ashwin Bharambe
ab0a24f333
Add API keys to AgenticSystemConfig instead of relying on dotenv (#33) 2024-08-21 12:35:59 -07:00
Ashwin Bharambe
face3ceff1 suppress warning in CLI 2024-08-21 12:25:39 -07:00
Dalton Flanagan
270b5502d7 broaden URL match in download for older model families 2024-08-21 12:11:11 -04:00
raghotham
2232bfa8b5
RFC-0001-The-Llama-Stack (#8)
* RFC-0001-The-Llama-Stack

* Add OpenAPI generation utility, update SPEC to reflect latest types

* First cut at an observability API

* llama3_1 -> llama3

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-08-20 19:01:18 -07:00
Ashwin Bharambe
57881c08c1 Bump version to 0.0.8 2024-08-19 20:12:01 -07:00
Ashwin Bharambe
e08e963f86 Add --manifest-file option to argparser 2024-08-19 18:26:56 -07:00
Ashwin Bharambe
b3da6b8afb Bump version to 0.0.7 2024-08-19 16:27:36 -07:00
Ashwin Bharambe
23de941424 Bump version to 0.0.6 2024-08-19 14:12:18 -07:00
Ashwin Bharambe
38244c3161 llama_models.llama3_1 -> llama_models.llama3 2024-08-19 10:55:37 -07:00
dltn
f502716cf7 Fix ShieldType Union equality bug 2024-08-18 19:13:15 -07:00
Ashwin Bharambe
5e072d0780 Add a --manifest-file option to llama download 2024-08-17 10:08:42 -07:00
Hardik Shah
b8fc4d4dee
Updates to prompt for tool calls (#29)
* update system prompts to drop new line

* Add tool prompt formats

* support json format

* JSON in caps

* function_tag system prompt is also added as a user message

* added docstrings for ToolPromptFormat

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2024-08-15 13:23:51 -07:00
Ashwin Bharambe
0d933ac4c5 No need for unnecessary $(conda run ...) to get python interpreter 2024-08-14 20:48:35 -07:00
Ashwin Bharambe
00f0e6d92b
Avoid using nearly double the memory needed (#30) 2024-08-14 17:44:36 -07:00
Dalton Flanagan
b311dcd143 formatting 2024-08-14 17:03:43 -04:00
Ashwin Bharambe
069d877210 Typo bugfix (rename variable x -> prompt)
See https://github.com/meta-llama/llama-stack/issues/16 for the report
2024-08-14 13:47:27 -07:00
Dalton Flanagan
b6ccaf1778 formatting 2024-08-14 14:22:25 -04:00
Hardik Shah
94dfa293a6 Bump version to 0.0.5 2024-08-13 15:23:57 -07:00
dltn
432957d6b6 fix typo 2024-08-13 11:39:57 -07:00
Hardik Shah
7f13853e5e
Update README.md 2024-08-12 17:10:02 -07:00
Hardik Shah
37da47ef8e upgrade pydantic to latest 2024-08-12 15:14:21 -07:00
Ashwin Bharambe
2cd8b2ff5b Add simple validation for RemoteProviderConfig 2024-08-09 15:15:53 -07:00
dltn
898cd5b352 Bump version to 0.0.4 2024-08-08 15:24:45 -07:00
Dalton Flanagan
416097a9ea
Rename inline -> local (#24)
* Rename the "inline" distribution to "local"

* further rename

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-08-08 17:39:03 -04:00
Ashwin Bharambe
dd15671f7f Bump version to 0.0.3 2024-08-08 13:40:03 -07:00
Ashwin Bharambe
e830814399
Introduce Llama stack distributions (#22)
* Add distribution CLI scaffolding

* More progress towards `llama distribution install`

* getting closer to a distro definition, distro install + configure works

* Distribution server now functioning

* read existing configuration, save enums properly

* Remove inference uvicorn server entrypoint and llama inference CLI command

* updated dependency and client model name

* Improved exception handling

* local imports for faster cli

* undo a typo, add a passthrough distribution

* implement full-passthrough in the server

* add safety adapters, configuration handling, server + clients

* cleanup, moving stuff to common, nuke utils

* Add a Path() wrapper at the earliest place

* fixes

* Bring agentic system api to toolchain

Add adapter dependencies and resolve adapters using a topological sort

* refactor to reduce size of `agentic_system`

* move straggler files and fix some important existing bugs

* ApiSurface -> Api

* refactor a method out

* Adapter -> Provider

* Make each inference provider into its own subdirectory

* installation fixes

* Rename Distribution -> DistributionSpec, simplify RemoteProviders

* dict key instead of attr

* update inference config to take model and not model_dir

* Fix passthrough streaming, send headers properly not part of body :facepalm

* update safety to use model sku ids and not model dirs

* Update cli_reference.md

* minor fixes

* add DistributionConfig, fix a bug in model download

* Make install + start scripts do proper configuration automatically

* Update CLI_reference

* Nuke fp8_requirements, fold fbgemm into common requirements

* Update README, add newline between API surface configurations

* Refactor download functionality out of the Command so can be reused

* Add `llama model download` alias for `llama download`

* Show message about checksum file so users can check themselves

* Simpler intro statements

* get ollama working

* Reduce a bunch of dependencies from toolchain

Some improvements to the distribution install script

* Avoid using `conda run` since it buffers everything

* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes

* add validation for configuration input

* resort imports

* make optional subclasses default to yes for configuration

* Remove additional_pip_packages; move deps to providers

* for inline make 8b model the default

* Add scripts to MANIFEST

* allow installing from test.pypi.org

* Fix #2 to help with testing packages

* Must install llama-models at that same version first

* fix PIP_ARGS

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
2024-08-08 13:38:41 -07:00
Dalton Flanagan
da4645a27a
hide non-featured (older) models from model list command without show-all flag (#23) 2024-08-07 23:31:30 -04:00
Hardik Shah
7664d5701d update tests and formatting 2024-08-05 12:34:16 -07:00
Hardik Shah
d7a4cdd70d added options to ollama inference 2024-08-02 14:44:22 -07:00