llama-stack/docs/source
docs: Add TrustyAI LM-Eval to list of known external providers (#2020)
Christina Xu · 65cc971877 · 2025-05-06 14:11:55 +02:00

# What does this PR do?
Adds documentation for the remote [TrustyAI LM-Eval](https://github.com/trustyai-explainability/llama-stack-provider-lmeval) eval provider.
LM-Eval is a service for large language model evaluation based on the open source
[lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) project,
and it is integrated into the [TrustyAI Kubernetes Operator](https://trustyai-explainability.github.io/trustyai-site/main/trustyai-operator.html).
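For context on how an external eval provider like this is exercised once it is registered with a Llama Stack server, the sketch below shows the generic eval flow via the Python client. It is a hypothetical illustration only: the benchmark id, dataset id, scoring function, model name, and server URL are assumptions, not values taken from this PR or from the TrustyAI LM-Eval documentation.

```python
# Hypothetical sketch: driving an eval provider through the standard
# Llama Stack eval APIs. All identifiers below are placeholders.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")  # assumed local server

# Register a benchmark that the configured eval provider will serve
# (benchmark/dataset ids and scoring function are illustrative).
client.benchmarks.register(
    benchmark_id="trustyai_lmeval::arc_easy",
    dataset_id="trustyai_lmeval::arc_easy",
    scoring_functions=["basic::regex_parser_multiple_choice_answer"],
)

# Kick off an evaluation job against a candidate model.
job = client.eval.run_eval(
    benchmark_id="trustyai_lmeval::arc_easy",
    benchmark_config={
        "eval_candidate": {
            "type": "model",
            "model": "meta-llama/Llama-3.1-8B-Instruct",
            "sampling_params": {
                "strategy": {"type": "greedy"},
                "max_tokens": 256,
            },
        },
    },
)
print(job)
```

The provider's own repository documents the actual provider type, benchmark naming, and any Kubernetes-specific configuration it requires.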

| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| building_applications | fix: remove code interpeter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| concepts | docs: fix typos in evaluation concepts (#1745) | 2025-03-21 12:00:53 -07:00 |
| contributing | docs: Updating docs to source from CONTRIBUTING.md (#1850) | 2025-04-01 14:50:04 +02:00 |
| distributions | fix: remove code interpeter implementation (#2087) | 2025-05-01 14:35:08 -07:00 |
| getting_started | docs: Update docs and fix warning in start-stack.sh (#1937) | 2025-04-11 16:26:17 -07:00 |
| introduction | docs: Remove mentions of focus on Llama models (#1690) | 2025-03-19 00:17:22 -04:00 |
| playground | chore: simplify running the demo UI (#1907) | 2025-04-09 11:22:29 -07:00 |
| providers | docs: Add TrustyAI LM-Eval to list of known external providers (#2020) | 2025-05-06 14:11:55 +02:00 |
| references | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| conf.py | chore: Detect browser setting for dark/light mode and set default to light mode (#1913) | 2025-04-09 12:40:56 -04:00 |
| index.md | docs: fixes to quick start (#1943) | 2025-04-11 13:41:23 -07:00 |