Commit graph

684 commits

Author SHA1 Message Date
Ashwin Bharambe
1b2b32f959 Minor updates to docs 2024-11-22 17:44:05 -08:00
Ashwin Bharambe
6229562760 Organize references 2024-11-22 16:46:45 -08:00
Ashwin Bharambe
6fbf526d5c Move gitignore from docs/ to the main gitignore 2024-11-22 15:55:34 -08:00
Ashwin Bharambe
526a8dcfe0 Minor edit to zero_to_hero_guide 2024-11-22 15:52:56 -08:00
Ashwin Bharambe
0bd774716c Kill pancakes logo 2024-11-22 15:51:11 -08:00
Ashwin Bharambe
5acb15d2bf Make quickstart.md -> README.md so it shows up as default 2024-11-22 15:50:25 -08:00
Justin Lee
9928405e2c
Docs improvement v3 (#433)
# What does this PR do?

- updated the notebooks to reflect past changes up to llama-stack 0.0.53
- updated the README to provide accurate and up-to-date info
- improved the current zero-to-hero guide by integrating an example using
the Together API


## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

---------

Co-authored-by: Sanyam Bhutani <sanyambhutani@meta.com>
2024-11-22 15:43:31 -08:00
Connor Hack
071710426d Try shortening test formula 2024-11-22 15:42:55 -08:00
Ashwin Bharambe
97dc5b68e5 model -> model_id for TGI 2024-11-22 15:40:08 -08:00
Connor Hack
8f60a3a55d Clean up job names 2024-11-22 15:07:08 -08:00
Ashwin Bharambe
c2c53d0272 More doc cleanup 2024-11-22 14:37:22 -08:00
Connor Hack
cbd69d06c3 Clean up checkpoint directory setting 2024-11-22 14:22:31 -08:00
Ashwin Bharambe
900b0556e7 Much more documentation work, things are getting a bit consumable right now 2024-11-22 14:06:18 -08:00
Ashwin Bharambe
98e213e96c More docs work 2024-11-22 14:06:18 -08:00
Ashwin Bharambe
eb2063bc3d Updates to the main doc page 2024-11-22 14:06:18 -08:00
dltn
eaf4fbef75 another print -> log fix 2024-11-22 13:35:34 -08:00
dltn
302a0145e5 we do want prints in print_pip_install_help 2024-11-22 13:32:54 -08:00
Dalton Flanagan
b007b062f3
Fix llama stack build in 0.0.54 (#505)
# What does this PR do?

Safety provider `inline::meta-reference` is now deprecated. However, we

* aren't checking / printing the deprecation message in `llama stack
build`
* still make the deprecated (unusable) provider the default

So I (1) added the check and (2) made `inline::llama-guard` the default
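The added check can be sketched roughly as follows. This is a hypothetical simplification, not the actual llama-stack code: `ProviderSpec` here is a stand-in, and only the `deprecation_error` field and `InvalidProviderError` are taken from the traceback below.

```python
class InvalidProviderError(Exception):
    pass


class ProviderSpec:
    # Hypothetical stand-in for the real provider spec. deprecation_error
    # is set for providers that no longer work (e.g. inline::meta-reference).
    def __init__(self, provider_type, deprecation_error=None):
        self.provider_type = provider_type
        self.deprecation_error = deprecation_error


def check_provider(p):
    # Fail fast during `llama stack build` with a helpful message,
    # instead of a late ModuleNotFoundError at import time.
    if p.deprecation_error:
        raise InvalidProviderError(p.deprecation_error)


check_provider(ProviderSpec("inline::llama-guard"))  # new default: passes
```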

## Test Plan

Before

```
Traceback (most recent call last):
  File "/home/dalton/.conda/envs/nov22/bin/llama", line 8, in <module>
    sys.exit(main())
  File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 46, in main
    parser.run(args)
  File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 40, in run
    args.func(args)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 177, in _run_stack_build_command
    self._run_stack_build_command_from_build_config(build_config)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 305, in _run_stack_build_command_from_build_config
    self._generate_run_config(build_config, build_dir)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 226, in _generate_run_config
    config_type = instantiate_class_type(
  File "/home/dalton/all/llama-stack/llama_stack/distribution/utils/dynamic.py", line 12, in instantiate_class_type
    module = importlib.import_module(module_name)
  File "/home/dalton/.conda/envs/nov22/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1004, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'llama_stack.providers.inline.safety.meta_reference'
```

After

```
Traceback (most recent call last):
  File "/home/dalton/.conda/envs/nov22/bin/llama", line 8, in <module>
    sys.exit(main())
  File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 46, in main
    parser.run(args)
  File "/home/dalton/all/llama-stack/llama_stack/cli/llama.py", line 40, in run
    args.func(args)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 177, in _run_stack_build_command
    self._run_stack_build_command_from_build_config(build_config)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 309, in _run_stack_build_command_from_build_config
    self._generate_run_config(build_config, build_dir)
  File "/home/dalton/all/llama-stack/llama_stack/cli/stack/build.py", line 228, in _generate_run_config
    raise InvalidProviderError(p.deprecation_error)
llama_stack.distribution.resolver.InvalidProviderError: 
Provider `inline::meta-reference` for API `safety` does not work with the latest Llama Stack.
- if you are using Llama Guard v3, please use the `inline::llama-guard` provider instead.
- if you are using Prompt Guard, please use the `inline::prompt-guard` provider instead.
- if you are using Code Scanner, please use the `inline::code-scanner` provider instead.
```

<img width="469" alt="Screenshot 2024-11-22 at 4 10 24 PM"
src="https://github.com/user-attachments/assets/8c2e09fe-379a-4504-b246-7925f80a6ed6">

## Sources

Please link relevant resources if necessary.


## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-11-22 16:23:44 -05:00
Connor Hack
d1d8f859e6 Update checkpointd directory setting 2024-11-22 12:51:34 -08:00
Connor Hack
7f5e0dd3db Refactor test run to support shorthand model names 2024-11-22 12:30:13 -08:00
Connor Hack
9c07e0189a Fix syntax error 2024-11-22 11:16:17 -08:00
Connor Hack
0e9ed3688d Remove unnecessary env vars 2024-11-22 10:58:17 -08:00
Connor Hack
1481a67365 Test new provider name 2024-11-22 10:22:12 -08:00
Connor Hack
377896a4c5 Remove testing llama-stack RC 2024-11-22 09:46:14 -08:00
Connor Hack
143e91f23d Add manual provider back for testing 2024-11-22 09:18:29 -08:00
Connor Hack
25e23a1dfe Add debug statement for PROVIDER_ID 2024-11-22 08:56:53 -08:00
Connor Hack
496879795e Dynamically change provider in tests 2024-11-22 07:22:04 -08:00
Chacksu
4136accf48
Merge branch 'meta-llama:main' into main 2024-11-21 19:49:53 -05:00
Connor Hack
046eec9793 Remove testing llama-stack RC 2024-11-21 16:35:00 -08:00
Ashwin Bharambe
2137b0af40 Bump version to 0.0.54 2024-11-21 16:28:30 -08:00
Ashwin Bharambe
c1025ebfdb Delete some dead code 2024-11-21 15:20:06 -08:00
Ashwin Bharambe
a0a00f1345 Update telemetry to have TEXT be the default log format 2024-11-21 15:18:45 -08:00
Connor Hack
318c98807c Pre-emptively test llama stack RC 2024-11-21 15:15:43 -08:00
Chacksu
94bfd9a1d1
Merge branch 'meta-llama:main' into main 2024-11-21 18:07:53 -05:00
Xi Yan
945db5dac2 fix logging 2024-11-21 15:02:57 -08:00
Ashwin Bharambe
d790be28b3 Don't skip meta-reference for the tests 2024-11-21 13:29:53 -08:00
Ashwin Bharambe
55c55b9f51 Update Quick Start significantly 2024-11-21 13:20:55 -08:00
Chacksu
19bc7e8942
Merge branch 'meta-llama:main' into main 2024-11-21 15:47:54 -05:00
Xi Yan
654722da7d fix model id for llm_as_judge_405b 2024-11-21 11:34:49 -08:00
Dinesh Yeduguru
6395dadc2b
use logging instead of prints (#499)
# What does this PR do?

This PR moves all print statements to logging. Things changed:
- Had to add `await start_trace("sse_generator")` to server.py to
actually get tracing working; otherwise no logs were showing up
- If no telemetry provider is configured in the run.yaml, we write to
stdout
- By default the logs are emitted as JSON, but we expose an option to
output them in a human-readable format instead
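The JSON-by-default / human-readable-on-request pattern described above can be sketched with the standard `logging` module. Everything here is illustrative (the formatter class and `setup_logging` helper are hypothetical, not llama-stack's actual telemetry code):

```python
import json
import logging
import sys


class JSONFormatter(logging.Formatter):
    """Render each record as a single JSON line (the default described above)."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        })


def setup_logging(log_format="json"):
    # Hypothetical helper: "json" (default) or "text" for human-readable output.
    # Writes to stdout, mirroring the no-telemetry-provider fallback above.
    handler = logging.StreamHandler(sys.stdout)
    if log_format == "json":
        handler.setFormatter(JSONFormatter())
    else:
        handler.setFormatter(
            logging.Formatter("%(levelname)s %(name)s: %(message)s")
        )
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(logging.INFO)


setup_logging("text")
logging.getLogger("server").info("replaces a bare print()")
```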
2024-11-21 11:32:53 -08:00
liyunlu0618
4e1105e563
Fix fp8 quantization script. (#500)
# What does this PR do?

Fix fp8 quantization script.

## Test Plan

```
sh run_quantize_checkpoint.sh localhost fp8 /home/yll/fp8_test/ /home/yll/fp8_test/quantized_2 /home/yll/fp8_test/tokenizer.model 1 1
```

## Sources

Please link relevant resources if necessary.


## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [x] Updated relevant documentation.
- [x] Wrote necessary unit or integration tests.

Co-authored-by: Yunlu Li <yll@meta.com>
2024-11-21 09:15:28 -08:00
Chacksu
09302347d3
Merge branch 'meta-llama:main' into main 2024-11-21 10:21:49 -05:00
Ashwin Bharambe
cf079a22a0 Plurals 2024-11-20 23:24:59 -08:00
Ashwin Bharambe
cd6ccb664c Integrate distro docs into the restructured docs 2024-11-20 23:20:05 -08:00
Ashwin Bharambe
2411a44833 Update more distribution docs to be simpler and partially codegen'ed 2024-11-20 22:03:44 -08:00
Connor Hack
490c5fb730 Undo None check and temporarily move if model check before builder 2024-11-20 19:17:44 -08:00
Connor Hack
16ffe19a20 Account for if a permitted model is None 2024-11-20 18:48:59 -08:00
Chacksu
05f1041bfa
Merge branch 'meta-llama:main' into main 2024-11-20 19:21:20 -05:00
Ashwin Bharambe
e84d4436b5
Since we are pushing for HF repos, we should accept them in inference configs (#497)
# What does this PR do?

As the title says. 

## Test Plan

This needs
8752149f58
to also land. So the next package (0.0.54) will make this work properly.

The test is:

```bash
pytest -v -s -m "llama_3b and meta_reference" test_model_registration.py
```
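One common way to accept both forms, sketched purely for illustration (the registry contents and `resolve_model_id` helper are hypothetical, not llama-stack's actual resolution logic): normalize either a native model descriptor or its Hugging Face repo id to one canonical key before lookup.

```python
# Illustrative only: a tiny registry mapping a native descriptor to its
# HF repo, plus a resolver that accepts either spelling.
KNOWN_MODELS = {
    "Llama3.1-8B-Instruct": {"hf_repo": "meta-llama/Llama-3.1-8B-Instruct"},
}


def resolve_model_id(model):
    # Native descriptor: already canonical.
    if model in KNOWN_MODELS:
        return model
    # Otherwise accept the Hugging Face repo form as well.
    for name, info in KNOWN_MODELS.items():
        if info["hf_repo"] == model:
            return name
    raise ValueError(f"Unknown model: {model}")
```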
2024-11-20 16:14:37 -08:00
Dinesh Yeduguru
b3f9e8b2f2
Restructure docs (#494)
Rendered docs at: https://llama-stack.readthedocs.io/en/doc-simplify/
2024-11-20 15:54:47 -08:00