Added a link to the Colab notebook for the Llama Stack lesson in the Llama 3.2 course on DLAI (#445)
# What does this PR do?

Adds a link to a complete zero-setup Colab notebook that uses a Llama Stack server implemented and powered by together.ai, showing how to use the Llama Stack Client API to run inference and agents with Llama 3.2 models. Good as a quick start guide.

- [ ] Addresses issue (#issue)

## Test Plan

Please describe:
- tests you ran to verify your changes with result summaries.
- provide instructions so it can be reproduced.

## Sources

Please link relevant resources if necessary.

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
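For context, the flow the notebook demonstrates looks roughly like the sketch below. This is a minimal illustration, not the notebook's actual code: the endpoint URL and model id are placeholders, and `provider_data` / parameter names (`model` vs. `model_id`) vary across llama-stack-client SDK versions.

```python
# Sketch of the zero-setup inference flow the Colab demonstrates (illustrative only).
# Assumes: `pip install llama-stack-client`, a Together-hosted Llama Stack endpoint,
# and a Together.ai API key in TOGETHER_API_KEY. URL and model id are placeholders.
import os

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url="https://llama-stack.together.ai",  # placeholder hosted Llama Stack server
    provider_data={"together_api_key": os.environ["TOGETHER_API_KEY"]},
)

# Run a simple chat completion against a Llama 3.2 model.
response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-11B-Vision-Instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Who wrote the book Charlotte's Web?"}],
)
print(response.completion_message.content)
```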
This commit is contained in:
parent
787e2034b7
commit
15dee2b8b8
2 changed files with 1 addition and 9 deletions
```
@@ -132,15 +132,6 @@
    "    return Agent(client, agent_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iMVYso6_xoDV"
   },
   "source": [
    "Quickly and easily get a free Together.ai API key [here](https://api.together.ai) and replace \"YOUR_TOGETHER_API_KEY\" below with it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
```
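The `return Agent(client, agent_config)` line in the excerpt comes from the notebook's agent helper. The sketch below shows how such a helper is typically put together with the llama-stack-client SDK's agent utilities; the `AgentConfig` fields, model id, and usage comments are illustrative assumptions, not copied from the notebook, and may differ across SDK versions.

```python
# Sketch of an agent helper like the one the notebook excerpt returns from (illustrative).
# Assumes llama-stack-client's agent helpers; exact field names may differ by SDK version.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.types.agent_create_params import AgentConfig


def create_agent(client: LlamaStackClient) -> Agent:
    agent_config = AgentConfig(
        model="meta-llama/Llama-3.2-11B-Vision-Instruct",  # placeholder model id
        instructions="You are a helpful assistant.",
        enable_session_persistence=False,
    )
    return Agent(client, agent_config)


# Usage (assumed pattern): create a session, then stream one turn's events.
# agent = create_agent(client)
# session_id = agent.create_session("colab-session")
# for log in EventLogger().log(
#     agent.create_turn(
#         messages=[{"role": "user", "content": "Hello!"}],
#         session_id=session_id,
#     )
# ):
#     log.print()
```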