Commit graph

65 commits

Author SHA1 Message Date
raghotham
70d557f793
Update LICENSE (#47)
* Update LICENSE

* Update LICENSE
2024-08-29 07:39:50 -07:00
Hassan El Mghari
f2e18826b6
Together AI basic integration (#43)
* working!

* accounting for eos
2024-08-28 16:07:13 -07:00
Ashwin Bharambe
a8b9541f19 Bump version to 0.0.10 2024-08-27 04:19:27 -07:00
raghotham
117b95b38c
Update RFC-0001-llama-stack.md
Added link to sequence diagram from agentic system
2024-08-26 20:56:09 -07:00
Ashwin Bharambe
870cd7bb8b Add blobfile for tiktoken 2024-08-26 14:50:53 -07:00
Yufei (Benny) Chen
40ca8e21bd
Fireworks basic integration (#39) 2024-08-25 08:05:52 -07:00
Ashwin Bharambe
f812648aca Bump version to 0.0.9 2024-08-24 09:45:01 -07:00
Ashwin Bharambe
c1a82ea8cd Add a script for installing a pip wheel from a presigned URL 2024-08-23 12:18:51 -07:00
varunfb
9777639a1c
Updated URLs and addressed feedback (#37) 2024-08-22 13:34:46 -07:00
varunfb
4930616ec7
Updated CLI instructions with additional details for each subcommand (#36) 2024-08-22 12:20:47 -07:00
sisminnmaw
49f2bbbaeb
Fixed bug in the "not enough disk space" condition in download (#35)
Bug:
download.py used an undeclared variable, so a NameError occurred
when there was not enough disk space.
2024-08-22 08:10:47 -07:00
Jeff Tang
b4af8c0e00
update CLI reference doc: llama model template names; separate copy-and-pastable commands from their outputs (#34) 2024-08-21 20:41:30 -07:00
Ashwin Bharambe
863bb915e1 Remove quantization_config from the APIs for now 2024-08-21 14:17:50 -07:00
Ashwin Bharambe
ab0a24f333
Add API keys to AgenticSystemConfig instead of relying on dotenv (#33) 2024-08-21 12:35:59 -07:00
Ashwin Bharambe
face3ceff1 suppress warning in CLI 2024-08-21 12:25:39 -07:00
Dalton Flanagan
270b5502d7 broaden URL match in download for older model families 2024-08-21 12:11:11 -04:00
raghotham
2232bfa8b5
RFC-0001-The-Llama-Stack (#8)
* RFC-0001-The-Llama-Stack

* Add OpenAPI generation utility, update SPEC to reflect latest types

* First cut at an observability API

* llama3_1 -> llama3

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-08-20 19:01:18 -07:00
Ashwin Bharambe
57881c08c1 Bump version to 0.0.8 2024-08-19 20:12:01 -07:00
Ashwin Bharambe
e08e963f86 Add --manifest-file option to argparser 2024-08-19 18:26:56 -07:00
Ashwin Bharambe
b3da6b8afb Bump version to 0.0.7 2024-08-19 16:27:36 -07:00
Ashwin Bharambe
23de941424 Bump version to 0.0.6 2024-08-19 14:12:18 -07:00
Ashwin Bharambe
38244c3161 llama_models.llama3_1 -> llama_models.llama3 2024-08-19 10:55:37 -07:00
dltn
f502716cf7 Fix ShieldType Union equality bug 2024-08-18 19:13:15 -07:00
Ashwin Bharambe
5e072d0780 Add a --manifest-file option to llama download 2024-08-17 10:08:42 -07:00
Hardik Shah
b8fc4d4dee
Updates to prompt for tool calls (#29)
* update system prompts to drop new line

* Add tool prompt formats

* support json format

* JSON in caps

* function_tag system prompt is also added as a user message

* added docstrings for ToolPromptFormat

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2024-08-15 13:23:51 -07:00
Ashwin Bharambe
0d933ac4c5 No need for unnecessary $(conda run ...) to get python interpreter 2024-08-14 20:48:35 -07:00
Ashwin Bharambe
00f0e6d92b
Avoid using nearly double the memory needed (#30) 2024-08-14 17:44:36 -07:00
Dalton Flanagan
b311dcd143 formatting 2024-08-14 17:03:43 -04:00
Ashwin Bharambe
069d877210 Typo bugfix (rename variable x -> prompt)
See https://github.com/meta-llama/llama-stack/issues/16 for the report
2024-08-14 13:47:27 -07:00
Dalton Flanagan
b6ccaf1778 formatting 2024-08-14 14:22:25 -04:00
Hardik Shah
94dfa293a6 Bump version to 0.0.5 2024-08-13 15:23:57 -07:00
dltn
432957d6b6 fix typo 2024-08-13 11:39:57 -07:00
Hardik Shah
7f13853e5e
Update README.md 2024-08-12 17:10:02 -07:00
Hardik Shah
37da47ef8e upgrade pydantic to latest 2024-08-12 15:14:21 -07:00
Ashwin Bharambe
2cd8b2ff5b Add simple validation for RemoteProviderConfig 2024-08-09 15:15:53 -07:00
dltn
898cd5b352 Bump version to 0.0.4 2024-08-08 15:24:45 -07:00
Dalton Flanagan
416097a9ea
Rename inline -> local (#24)
* Rename the "inline" distribution to "local"

* further rename

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-08-08 17:39:03 -04:00
Ashwin Bharambe
dd15671f7f Bump version to 0.0.3 2024-08-08 13:40:03 -07:00
Ashwin Bharambe
e830814399
Introduce Llama stack distributions (#22)
* Add distribution CLI scaffolding

* More progress towards `llama distribution install`

* getting closer to a distro definition, distro install + configure works

* Distribution server now functioning

* read existing configuration, save enums properly

* Remove inference uvicorn server entrypoint and llama inference CLI command

* updated dependency and client model name

* Improved exception handling

* local imports for faster cli

* undo a typo, add a passthrough distribution

* implement full-passthrough in the server

* add safety adapters, configuration handling, server + clients

* cleanup, moving stuff to common, nuke utils

* Add a Path() wrapper at the earliest place

* fixes

* Bring agentic system api to toolchain

Add adapter dependencies and resolve adapters using a topological sort

* refactor to reduce size of `agentic_system`

* move straggler files and fix some important existing bugs

* ApiSurface -> Api

* refactor a method out

* Adapter -> Provider

* Make each inference provider into its own subdirectory

* installation fixes

* Rename Distribution -> DistributionSpec, simplify RemoteProviders

* dict key instead of attr

* update inference config to take model and not model_dir

* Fix passthrough streaming; send headers properly, not as part of the body :facepalm

* update safety to use model sku ids and not model dirs

* Update cli_reference.md

* minor fixes

* add DistributionConfig, fix a bug in model download

* Make install + start scripts do proper configuration automatically

* Update CLI_reference

* Nuke fp8_requirements, fold fbgemm into common requirements

* Update README, add newline between API surface configurations

* Refactor download functionality out of the Command so can be reused

* Add `llama model download` alias for `llama download`

* Show message about checksum file so users can check themselves

* Simpler intro statements

* get ollama working

* Reduce a bunch of dependencies from toolchain

Some improvements to the distribution install script

* Avoid using `conda run` since it buffers everything

* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes

* add validation for configuration input

* resort imports

* make optional subclasses default to yes for configuration

* Remove additional_pip_packages; move deps to providers

* for inline, make the 8B model the default

* Add scripts to MANIFEST

* allow installing from test.pypi.org

* Fix #2 to help with testing packages

* Must install llama-models at that same version first

* fix PIP_ARGS

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
2024-08-08 13:38:41 -07:00
Dalton Flanagan
da4645a27a
hide non-featured (older) models from the model list command unless the show-all flag is passed (#23) 2024-08-07 23:31:30 -04:00
Hardik Shah
7664d5701d update tests and formatting 2024-08-05 12:34:16 -07:00
Hardik Shah
d7a4cdd70d added options to ollama inference 2024-08-02 14:44:22 -07:00
Ashwin Bharambe
09cf3fe78b Use new definitions of Model / SKU 2024-07-31 22:44:35 -07:00
Hardik Shah
156bfa0e15
Added Ollama as an inference impl (#20)
* fix non-streaming api in inference server

* unit test for inline inference

* Added non-streaming ollama inference impl

* add streaming support for ollama inference with tests

* addressing comments

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2024-07-31 22:08:37 -07:00
Ashwin Bharambe
c253c1c9ad Begin adding a /safety/run_shield API 2024-07-31 21:57:10 -07:00
Ashwin Bharambe
1bc81eae7b update toolchain to work with updated imports from llama_models 2024-07-30 17:52:57 -07:00
Ashwin Bharambe
23014ea4d1 Add hacks because of CloudFront config limits on the 405B model files 2024-07-30 13:46:47 -07:00
Ashwin Bharambe
404af06e02 Bump version to 0.0.2 2024-07-29 23:56:41 -07:00
Ashwin Bharambe
7306e6b167 show sampling params in model describe 2024-07-29 23:44:07 -07:00
Ashwin Bharambe
040c30ee54 added a resumable downloader for model downloads 2024-07-29 23:29:16 -07:00