llama-stack/llama_stack
Add Llama 70B 3.3 to fireworks (#654)
Commit 17fdb47e5e by Aidan Do, 2024-12-19 17:32:49 -08:00
# What does this PR do?

- Makes Llama 3.3 70B Instruct available through the Fireworks inference provider (provider-side sketch below)
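
For context, the change amounts to one more entry in the Fireworks provider's model-alias table. The module path, helper names, and Fireworks-side model identifier below are assumptions patterned on the existing entries, not a verbatim copy of the diff:

```python
# Sketch only: module path, helper names, and the Fireworks model id are assumed.
from llama_models.datatypes import CoreModelId
from llama_stack.providers.utils.inference.model_registry import build_model_alias

MODEL_ALIASES = [
    # ... existing Llama 3.1 / 3.2 aliases ...
    build_model_alias(
        "fireworks/llama-v3p3-70b-instruct",      # Fireworks-side model id (assumed)
        CoreModelId.llama3_3_70b_instruct.value,  # canonical Llama Stack model id
    ),
]
```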

## Test Plan

```shell
pip install -e . \
&& llama stack build --config distributions/fireworks/build.yaml --image-type conda \
&& llama stack run distributions/fireworks/run.yaml \
  --port 5000
```

```python
from llama_stack_client import LlamaStackClient

# Point the client at the locally running Fireworks distribution (port from the run command above)
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model_id="Llama3.3-70B-Instruct",
    messages=[
        {"role": "user", "content": "hello world"},
    ],
)
```
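
Optionally, confirm the new model is registered before issuing the request; this sketch assumes the llama-stack-client `models.list()` endpoint and an `identifier` field on each returned model:

```python
# Hypothetical sanity check: list registered models and look for the new entry.
model_ids = [m.identifier for m in client.models.list()]
assert "Llama3.3-70B-Instruct" in model_ids, model_ids
```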

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the Pull Request section of the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md).
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.

Directory listing at this commit:

| Path | Last commit | Date |
| --- | --- | --- |
| `apis` | Register Message and ResponseFormat | 2024-12-18 10:32:25 -08:00 |
| `cli` | doc: llama-stack build --config help text references old directory (#596) | 2024-12-10 17:42:02 -08:00 |
| `distribution` | fix trace starting in library client (#655) | 2024-12-19 16:13:52 -08:00 |
| `providers` | Add Llama 70B 3.3 to fireworks (#654) | 2024-12-19 17:32:49 -08:00 |
| `scripts` | Fix to conda env build script | 2024-12-17 12:19:34 -08:00 |
| `templates` | Add Llama 70B 3.3 to fireworks (#654) | 2024-12-19 17:32:49 -08:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |