Fixed multiple typos (#878)

# What does this PR do?

In short, provide a summary of what this PR does and why. Usually, the
relevant context should be present in a linked issue.

- [ ] Addresses issue (#issue)


## Test Plan

Please describe:
 - the tests you ran to verify your changes, with result summaries.
 - instructions so the results can be reproduced.


## Sources

Please link relevant resources if necessary.


## Before submitting

- [x] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
Bakunga Bronson, 2025-01-25 06:45:43 +08:00, committed by GitHub
parent 33113139e8
commit 7de46e40f9
3 changed files with 3 additions and 3 deletions


@@ -4,7 +4,7 @@ Llama Stack provides all the building blocks needed to create sophisticated AI a
 The best way to get started is to look at this notebook which walks through the various APIs (from basic inference, to RAG agents) and how to use them.
-**Notebook**: [Building AI Applications](docs/notebooks/Llama_Stack_Building_AI_Applications.ipynb)
+**Notebook**: [Building AI Applications](https://github.com/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb)
 Here are some key topics that will help you build effective agents:


@@ -142,7 +142,7 @@ config = AgentConfig(
 )
 ```
-Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_custom_tools.py) for an example of how to use client provided tools.
+Refer to [llama-stack-apps](https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/e2e_loop_with_client_tools.py) for an example of how to use client provided tools.
 ## Tool Structure
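Since this hunk only shows the tail of the `AgentConfig` block, here is a minimal sketch of how such a config is typically wired up with the Python `llama_stack_client` SDK. It is an illustration under assumptions, not part of this PR: the `get_weather` function, the commented-out `client_tools` field, the port in `base_url`, and the model id are all hypothetical; the linked `e2e_loop_with_client_tools.py` example is the authoritative reference for client-provided tools.

```python
# Hedged sketch: module paths and fields reflect the llama_stack_client Python SDK
# in broad strokes; anything marked "assumed" or "hypothetical" is not from the PR.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig

client = LlamaStackClient(base_url="http://localhost:5000")  # port is an assumption

# A client-provided tool runs in your own process rather than on the server.
# This function is hypothetical; the real registration interface is shown in
# the e2e_loop_with_client_tools.py example referenced above.
def get_weather(city: str) -> str:
    return f"It is sunny in {city}."

config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",   # placeholder model id
    instructions="You are a helpful assistant.",
    enable_session_persistence=False,
    # client_tools=[...],  # assumed field: where client-side tool definitions go
)

agent = Agent(client, config)
session_id = agent.create_session("demo-session")

# create_turn streams events; printing the raw chunks keeps the sketch short.
response = agent.create_turn(
    messages=[{"role": "user", "content": "What's the weather in Lagos?"}],
    session_id=session_id,
)
for chunk in response:
    print(chunk)
```

The point of client-provided tools is that the model requests a tool call, the client-side wrapper executes the function locally and feeds the result back to the agent, so the tool's code and credentials stay on the client rather than on the Llama Stack server.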


@@ -16,7 +16,7 @@ Which templates / distributions to choose depends on the hardware you have for r
 - {dockerhub}`distribution-tgi` ([Guide](self_hosted_distro/tgi))
 - {dockerhub}`distribution-nvidia` ([Guide](self_hosted_distro/nvidia))
-- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama templte for quick prototyping and get started without having to worry about needing GPUs.
+- **Are you running on a "regular" desktop or laptop ?** We suggest using the ollama template for quick prototyping and get started without having to worry about needing GPUs.
 - {dockerhub}`distribution-ollama` ([link](self_hosted_distro/ollama))
 - **Do you have an API key for a remote inference provider like Fireworks, Together, etc.?** If so, we suggest: