Commit graph

47 commits

Author SHA1 Message Date
Ashwin Bharambe
fb3c4566ce llama stack start -> llama stack run 2024-09-03 11:23:26 -07:00
Ashwin Bharambe
fab6bd1728 Update documentation again and add error messages to llama stack start 2024-09-02 21:36:32 -07:00
Ashwin Bharambe
279565499b Fixes to llama stack commands and update docs 2024-09-02 18:58:54 -07:00
Ashwin Bharambe
5927f3c3c0 Remove llama api [] subcommands 2024-09-02 18:48:19 -07:00
Ashwin Bharambe
9be0edc76c Allow building an "adhoc" distribution 2024-09-02 18:37:31 -07:00
Ashwin Bharambe
d99c06fce8 Fix stack start 2024-08-30 15:03:23 -07:00
Ashwin Bharambe
5172d9a79d Update llama stack configure to be very simple also 2024-08-30 14:55:20 -07:00
Ashwin Bharambe
f8517e4688 Simplify and generalize llama api build yay 2024-08-30 14:51:40 -07:00
Ashwin Bharambe
6fa074168e update paths 2024-08-29 16:14:45 -07:00
Ashwin Bharambe
3cb67f1f58 llama_toolchain/distribution -> llama_toolchain/core 2024-08-28 17:43:08 -07:00
Ashwin Bharambe
896f057b76 Updated README phew 2024-08-28 17:34:23 -07:00
Ashwin Bharambe
3063329dad Some quick fixes to the CLI behavior to make it consistent 2024-08-28 17:17:46 -07:00
Ashwin Bharambe
d3965dd435 Merge remote-tracking branch 'origin/main' into api_updates_1 2024-08-28 16:02:34 -07:00
Ashwin Bharambe
197f768636 All the new CLI for api + stack work 2024-08-28 15:55:57 -07:00
Ashwin Bharambe
fd3b65b718 llama distribution -> llama stack + containers (WIP) 2024-08-28 15:55:21 -07:00
Ashwin Bharambe
45987996c4 Several smaller fixes to make adapters work
Also, reorganized the pattern of __init__ inside providers so
configuration can stay lightweight
2024-08-28 15:55:21 -07:00
Ashwin Bharambe
2a1552a5eb ollama remote adapter works 2024-08-28 15:55:21 -07:00
Ashwin Bharambe
2076d2b6db api build works for conda now 2024-08-28 15:55:21 -07:00
Ashwin Bharambe
c4fe72c3a3 bunch more work to make adapters work 2024-08-28 15:55:18 -07:00
Ashwin Bharambe
3a337c5f1c Add api build subcommand -- WIP 2024-08-28 15:54:31 -07:00
Hardik Shah
ea6d9ec937 templates take optional --format={json,function_tag} 2024-08-26 17:42:24 -07:00
Hardik Shah
df489261ac add special unicode character ↵ to showcase newlines in model prompt templates 2024-08-26 07:35:49 -07:00
Ashwin Bharambe
c1a82ea8cd Add a script to install a pip wheel from a presigned url 2024-08-23 12:18:51 -07:00
sisminnmaw
49f2bbbaeb fixed bug in download not-enough-disk-space condition (#35)
bug:
an undeclared variable was used in download.py;
when disk space was insufficient, a NameError occurred.
2024-08-22 08:10:47 -07:00
Ashwin Bharambe
face3ceff1 suppress warning in CLI 2024-08-21 12:25:39 -07:00
Dalton Flanagan
270b5502d7 broaden URL match in download for older model families 2024-08-21 12:11:11 -04:00
Ashwin Bharambe
e08e963f86 Add --manifest-file option to argparser 2024-08-19 18:26:56 -07:00
Ashwin Bharambe
38244c3161 llama_models.llama3_1 -> llama_models.llama3 2024-08-19 10:55:37 -07:00
Ashwin Bharambe
5e072d0780 Add a --manifest-file option to llama download 2024-08-17 10:08:42 -07:00
Dalton Flanagan
b311dcd143 formatting 2024-08-14 17:03:43 -04:00
Dalton Flanagan
b6ccaf1778 formatting 2024-08-14 14:22:25 -04:00
dltn
432957d6b6 fix typo 2024-08-13 11:39:57 -07:00
Dalton Flanagan
416097a9ea Rename inline -> local (#24)
* Rename the "inline" distribution to "local"

* further rename

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-08-08 17:39:03 -04:00
Ashwin Bharambe
e830814399 Introduce Llama stack distributions (#22)
* Add distribution CLI scaffolding

* More progress towards `llama distribution install`

* getting closer to a distro definition, distro install + configure works

* Distribution server now functioning

* read existing configuration, save enums properly

* Remove inference uvicorn server entrypoint and llama inference CLI command

* updated dependency and client model name

* Improved exception handling

* local imports for faster cli

* undo a typo, add a passthrough distribution

* implement full-passthrough in the server

* add safety adapters, configuration handling, server + clients

* cleanup, moving stuff to common, nuke utils

* Add a Path() wrapper at the earliest place

* fixes

* Bring agentic system api to toolchain

Add adapter dependencies and resolve adapters using a topological sort

* refactor to reduce size of `agentic_system`

* move straggler files and fix some important existing bugs

* ApiSurface -> Api

* refactor a method out

* Adapter -> Provider

* Make each inference provider into its own subdirectory

* installation fixes

* Rename Distribution -> DistributionSpec, simplify RemoteProviders

* dict key instead of attr

* update inference config to take model and not model_dir

* Fix passthrough streaming, send headers properly not part of body :facepalm

* update safety to use model sku ids and not model dirs

* Update cli_reference.md

* minor fixes

* add DistributionConfig, fix a bug in model download

* Make install + start scripts do proper configuration automatically

* Update CLI_reference

* Nuke fp8_requirements, fold fbgemm into common requirements

* Update README, add newline between API surface configurations

* Refactor download functionality out of the Command so it can be reused

* Add `llama model download` alias for `llama download`

* Show message about checksum file so users can check themselves

* Simpler intro statements

* get ollama working

* Reduce a bunch of dependencies from toolchain

Some improvements to the distribution install script

* Avoid using `conda run` since it buffers everything

* update dependencies and rely on LLAMA_TOOLCHAIN_DIR for dev purposes

* add validation for configuration input

* re-sort imports

* make optional subclasses default to yes for configuration

* Remove additional_pip_packages; move deps to providers

* for inline make 8b model the default

* Add scripts to MANIFEST

* allow installing from test.pypi.org

* Fix #2 to help with testing packages

* Must install llama-models at that same version first

* fix PIP_ARGS

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
Co-authored-by: Hardik Shah <hjshah@meta.com>
2024-08-08 13:38:41 -07:00
Dalton Flanagan
da4645a27a hide non-featured (older) models from model list command without show-all flag (#23) 2024-08-07 23:31:30 -04:00
Ashwin Bharambe
09cf3fe78b Use new definitions of Model / SKU 2024-07-31 22:44:35 -07:00
Ashwin Bharambe
1bc81eae7b update toolchain to work with updated imports from llama_models 2024-07-30 17:52:57 -07:00
Ashwin Bharambe
23014ea4d1 Add hacks because Cloudfront config limits on the 405b model files 2024-07-30 13:46:47 -07:00
Ashwin Bharambe
7306e6b167 show sampling params in model describe 2024-07-29 23:44:07 -07:00
Ashwin Bharambe
040c30ee54 added resumable downloader for downloading models 2024-07-29 23:29:16 -07:00
Ashwin Bharambe
59574924de model template --template -> model template --name 2024-07-29 18:21:05 -07:00
Ashwin Bharambe
45b8a7ffcd Add model describe subcommand 2024-07-29 18:19:53 -07:00
Ashwin Bharambe
9d7f283722 Add model list subcommand 2024-07-29 16:39:53 -07:00
Ashwin Bharambe
3583cf2d51 update model template output to be prettier, more consumable 2024-07-26 15:39:46 -07:00
Dalton Flanagan
ec433448f2 Add CLI reference docs (#14)
* Add CLI reference doc

* touchups

* add helptext for download
2024-07-25 13:56:29 -07:00
Lucain
378a2077dd Update download command (#9) 2024-07-24 16:50:40 -07:00
Ashwin Bharambe
5d5acc8ed5 Initial commit 2024-07-23 08:32:33 -07:00