llama-stack/llama_stack
Aidan Do 21fb92d7cf
Add 3.3 70B to Ollama inference provider (#681)
# What does this PR do?

Adds Llama 3.3 70B support to the Ollama inference provider.
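
At a high level, the change comes down to registering the new model with the provider so that the Llama Stack identifier `Llama3.3-70B-Instruct` resolves to the Ollama tag `llama3.3:70b`. The sketch below illustrates that mapping only and does not reproduce the provider's actual code; `ModelAlias`, `OLLAMA_MODEL_ALIASES`, and `resolve_ollama_tag` are hypothetical names.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelAlias:
    """Pairs a Llama Stack model id with the tag Ollama expects (illustrative only)."""

    llama_stack_id: str
    ollama_tag: str


# Hypothetical alias table; the real provider keeps an equivalent mapping,
# and this PR adds the 3.3 70B entry to it.
OLLAMA_MODEL_ALIASES = [
    ModelAlias("Llama3.1-8B-Instruct", "llama3.1:8b-instruct-fp16"),
    ModelAlias("Llama3.3-70B-Instruct", "llama3.3:70b"),
]


def resolve_ollama_tag(model_id: str) -> str:
    """Return the Ollama tag for a Llama Stack model id, or raise if unsupported."""
    for alias in OLLAMA_MODEL_ALIASES:
        if alias.llama_stack_id == model_id:
            return alias.ollama_tag
    raise ValueError(f"model {model_id!r} is not served by this provider")


if __name__ == "__main__":
    print(resolve_ollama_tag("Llama3.3-70B-Instruct"))  # -> llama3.3:70b
```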

## Test Plan

<details>
<summary>Manual</summary>

```bash
# 42GB to download
ollama pull llama3.3:70b

ollama run llama3.3:70b --keepalive 60m

export LLAMA_STACK_PORT=5000
pip install -e . \
  && llama stack build --template ollama --image-type conda \
  && llama stack run ./distributions/ollama/run.yaml \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=Llama3.3-70B-Instruct \
  --env OLLAMA_URL=http://localhost:11434

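# In a separate terminal, query the stack via the client CLI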
export LLAMA_STACK_PORT=5000
llama-stack-client --endpoint http://localhost:$LLAMA_STACK_PORT \
  inference chat-completion \
  --model-id Llama3.3-70B-Instruct \
  --message "hello, what model are you?"
```
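
For reference, the same smoke test can be driven from Python. This is a sketch assuming the `llama_stack_client` package (`pip install llama-stack-client`); parameter and field names may differ slightly between client versions.

```python
from llama_stack_client import LlamaStackClient

# Sketch only: assumes the stack from the commands above is running on port 5000.
client = LlamaStackClient(base_url="http://localhost:5000")

response = client.inference.chat_completion(
    model_id="Llama3.3-70B-Instruct",
    messages=[{"role": "user", "content": "hello, what model are you?"}],
)

# Exact response shape may vary by version; recent clients expose the
# assistant reply under completion_message.
print(response.completion_message.content)
```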

<img width="1221" alt="image"
src="https://github.com/user-attachments/assets/dcffbdd9-94c8-4d47-9f95-4ef6c3756294"
/>

</details>

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [x] Ran pre-commit to handle lint / formatting issues.
- [x] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2024-12-25 22:15:58 -08:00
| Path | Last commit | Date |
| --- | --- | --- |
| `apis` | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| `cli` | Add missing venv option in --image-type (#677) | 2024-12-21 21:10:13 -08:00 |
| `distribution` | Tools API with brave and MCP providers (#639) | 2024-12-19 21:25:17 -08:00 |
| `providers` | Add 3.3 70B to Ollama inference provider (#681) | 2024-12-25 22:15:58 -08:00 |
| `scripts` | Fix to conda env build script | 2024-12-17 12:19:34 -08:00 |
| `templates` | [torchtune integration] post training + eval (#670) | 2024-12-20 13:43:13 -08:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |