From 05ff4c4420406853f1123bd6c8603104c44be2dd Mon Sep 17 00:00:00 2001
From: Alexey Rybak <50731695+reluctantfuturist@users.noreply.github.com>
Date: Wed, 24 Sep 2025 14:03:41 -0700
Subject: [PATCH] docs: advanced_apis migration (#3532)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

# What does this PR do?

- Migrates the `advanced_apis/` section of the docs to the new format

## Test Plan

- Partial migration

---
 docs/docs/advanced_apis/evaluation.mdx        | 163 ++++++++++
 docs/docs/advanced_apis/post_training.mdx     | 305 ++++++++++++++++++
 docs/docs/advanced_apis/scoring.mdx           | 193 +++++++++++
 docs/source/advanced_apis/eval/index.md       |   6 -
 .../eval/inline_meta-reference.md             |  25 --
 .../advanced_apis/eval/remote_nvidia.md       |  23 --
 .../advanced_apis/evaluation_concepts.md      |  77 -----
 docs/source/advanced_apis/index.md            |  33 --
 .../post_training/huggingface.md              | 122 -------
 .../advanced_apis/post_training/index.md      |   7 -
 .../post_training/inline_huggingface.md       |  40 ---
 .../post_training/inline_torchtune.md         |  25 --
 .../post_training/nvidia_nemo.md              | 163 ----------
 .../post_training/remote_nvidia.md            |  32 --
 .../advanced_apis/post_training/torchtune.md  | 125 -------
 docs/source/advanced_apis/scoring/index.md    |   7 -
 .../advanced_apis/scoring/inline_basic.md     |  17 -
 .../scoring/inline_braintrust.md              |  23 --
 .../scoring/inline_llm-as-judge.md            |  17 -
 19 files changed, 661 insertions(+), 742 deletions(-)
 create mode 100644 docs/docs/advanced_apis/evaluation.mdx
 create mode 100644 docs/docs/advanced_apis/post_training.mdx
 create mode 100644 docs/docs/advanced_apis/scoring.mdx
 delete mode 100644 docs/source/advanced_apis/eval/index.md
 delete mode 100644 docs/source/advanced_apis/eval/inline_meta-reference.md
 delete mode 100644 docs/source/advanced_apis/eval/remote_nvidia.md
 delete mode 100644 docs/source/advanced_apis/evaluation_concepts.md
 delete mode 100644 docs/source/advanced_apis/index.md
 delete mode 100644 docs/source/advanced_apis/post_training/huggingface.md
 delete mode 100644 docs/source/advanced_apis/post_training/index.md
 delete mode 100644 docs/source/advanced_apis/post_training/inline_huggingface.md
 delete mode 100644 docs/source/advanced_apis/post_training/inline_torchtune.md
 delete mode 100644 docs/source/advanced_apis/post_training/nvidia_nemo.md
 delete mode 100644 docs/source/advanced_apis/post_training/remote_nvidia.md
 delete mode 100644 docs/source/advanced_apis/post_training/torchtune.md
 delete mode 100644 docs/source/advanced_apis/scoring/index.md
 delete mode 100644 docs/source/advanced_apis/scoring/inline_basic.md
 delete mode 100644 docs/source/advanced_apis/scoring/inline_braintrust.md
 delete mode 100644 docs/source/advanced_apis/scoring/inline_llm-as-judge.md

diff --git a/docs/docs/advanced_apis/evaluation.mdx b/docs/docs/advanced_apis/evaluation.mdx
new file mode 100644
index 000000000..1efaa4c5c
--- /dev/null
+++ b/docs/docs/advanced_apis/evaluation.mdx
@@ -0,0 +1,163 @@
+# Evaluation
+
+## Evaluation Concepts
+
+The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks.
+
+Llama Stack provides a set of APIs to support running evaluations of LLM applications:
+- `/datasetio` + `/datasets` API
+- `/scoring` + `/scoring_functions` API
+- `/eval` + `/benchmarks` API
+
+This guide covers these APIs and the developer flow for using Llama Stack to run evaluations across different use cases.
+Check out our Colab notebook with working evaluation examples [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing).
+
+The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](../concepts/index.mdx) guide for a high-level overview.
+
+- **DatasetIO**: defines the interface to datasets and data loaders.
+  - Associated with the `Dataset` resource.
+- **Scoring**: evaluates the outputs of the system.
+  - Associated with the `ScoringFunction` resource. We provide a suite of out-of-the-box scoring functions, and you can also add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics.
+- **Eval**: generates outputs (via Inference or Agents) and performs scoring.
+  - Associated with the `Benchmark` resource.
+
+## Evaluation Providers
+
+Llama Stack provides multiple evaluation providers:
+
+- **Meta Reference** (`inline::meta-reference`) - Meta's reference implementation with multi-language support
+- **NVIDIA** (`remote::nvidia`) - NVIDIA's evaluation platform integration
+
+### Meta Reference
+
+Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics.
+
+#### Configuration
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `kvstore` | `RedisKVStoreConfig \| SqliteKVStoreConfig \| PostgresKVStoreConfig \| MongoDBKVStoreConfig` | No | sqlite | Key-value store configuration |
+
+#### Sample Configuration
+
+```yaml
+kvstore:
+  type: sqlite
+  db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db
+```
+
+#### Features
+
+- Multi-language evaluation support
+- Comprehensive evaluation metrics
+- Integration with various key-value stores (SQLite, Redis, PostgreSQL, MongoDB)
+- Built-in support for popular benchmarks
+
+### NVIDIA
+
+NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform.
+
+#### Configuration
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `evaluator_url` | `str` | No | http://0.0.0.0:7331 | The URL for accessing the evaluator service |
+
+#### Sample Configuration
+
+```yaml
+evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331}
+```
+
+#### Features
+
+- Integration with NVIDIA's evaluation platform
+- Remote evaluation capabilities
+- Scalable evaluation processing
+
+## Open-benchmark Eval
+
+### List of open-benchmarks Llama Stack supports
+
+Llama Stack pre-registers several popular open-benchmarks so you can easily evaluate model performance via the CLI.
+
+The list of open-benchmarks we currently support:
+- [MMLU-COT](https://arxiv.org/abs/2009.03300) (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding.
+- [GPQA-COT](https://arxiv.org/abs/2311.12022) (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
+- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to assess a model's ability to answer short, fact-seeking questions.
+- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI): Benchmark designed to evaluate multimodal models.
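+
+Once a server running the `open-benchmark` distribution is up (see the next section), you can list the pre-registered benchmark IDs before kicking off an eval. A minimal sketch, assuming the stock Python client and that each returned benchmark exposes an `identifier` field:
+
+```python
+from llama_stack_client import LlamaStackClient
+
+client = LlamaStackClient(base_url="http://localhost:8321")
+
+# Print the ID of every pre-registered benchmark; these IDs are the
+# values `llama-stack-client eval run-benchmark` expects.
+for benchmark in client.benchmarks.list():
+    print(benchmark.identifier)
+```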
+
+You can follow this [contributing guide](../references/evals_reference/index.mdx#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack.
+
+### Run evaluation on open-benchmarks via CLI
+
+We have built-in functionality to run the supported open-benchmarks using the `llama-stack-client` CLI.
+
+#### Spin up Llama Stack server
+
+Spin up the Llama Stack server with the `open-benchmark` template:
+```
+llama stack run llama_stack/distributions/open-benchmark/run.yaml
+```
+
+#### Run eval CLI
+
+There are three necessary inputs to run a benchmark eval:
+- `list of benchmark_ids`: The list of benchmark IDs to run evaluation on
+- `model-id`: The model ID to evaluate
+- `output_dir`: Path to store the evaluation results
+```
+llama-stack-client eval run-benchmark <benchmark_id_1> <benchmark_id_2> ... \
+--model_id <model_id> \
+--output_dir <output_dir>
+```
+
+You can run
+```
+llama-stack-client eval run-benchmark help
+```
+to see the description of all the flags that `eval run-benchmark` accepts.
+
+In the output log, you can find the path of the file containing your evaluation results; open that file to see your aggregate evaluation results.
+
+## Usage Example
+
+Here's a basic example of using the evaluation API:
+
+```python
+from llama_stack_client import LlamaStackClient
+
+client = LlamaStackClient(base_url="http://localhost:8321")
+
+# Register a dataset for evaluation
+client.datasets.register(
+    purpose="evaluation",
+    source={
+        "type": "uri",
+        "uri": "huggingface://datasets/llamastack/evaluation_dataset"
+    },
+    dataset_id="my_eval_dataset"
+)
+
+# Run evaluation
+eval_result = client.eval.run_evaluation(
+    dataset_id="my_eval_dataset",
+    scoring_functions=["accuracy", "bleu"],
+    model_id="my_model"
+)
+
+print(f"Evaluation completed: {eval_result}")
+```
+
+## Best Practices
+
+- **Choose appropriate providers**: Use Meta Reference for comprehensive evaluation, NVIDIA for platform-specific needs
+- **Configure storage properly**: Ensure your key-value store configuration matches your performance requirements
+- **Monitor evaluation progress**: Large evaluations can take time - implement proper monitoring
+- **Use appropriate scoring functions**: Select scoring metrics that align with your evaluation goals
+
+## What's Next?
+
+- Check out our Colab notebook on working examples with running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP).
+- Check out our [Building Applications - Evaluation](../building_applications/evals.mdx) guide for more details on how to use the Evaluation APIs to evaluate your applications.
+- Check out our [Evaluation Reference](../references/evals_reference/index.mdx) for more details on the APIs.
+- Explore the [Scoring](./scoring.mdx) documentation for available scoring functions.
diff --git a/docs/docs/advanced_apis/post_training.mdx b/docs/docs/advanced_apis/post_training.mdx
new file mode 100644
index 000000000..43359d741
--- /dev/null
+++ b/docs/docs/advanced_apis/post_training.mdx
@@ -0,0 +1,305 @@
+# Post-Training
+
+Post-training in Llama Stack allows you to fine-tune models using various providers and frameworks. This section covers all available post-training providers and how to use them effectively.
+
+## Overview
+
+Llama Stack provides multiple post-training providers:
+
+- **HuggingFace SFTTrainer** (`inline::huggingface`) - Fine-tuning using the HuggingFace ecosystem
+- **TorchTune** (`inline::torchtune`) - Fine-tuning using Meta's TorchTune framework
+- **NVIDIA** (`remote::nvidia`) - Fine-tuning using NVIDIA's platform
+
+## HuggingFace SFTTrainer
+
+[HuggingFace SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer) is an inline post-training provider for Llama Stack. It allows you to run supervised fine-tuning on a variety of models using many datasets.
+
+### Features
+
+- Simple access through the post_training API
+- Fully integrated with Llama Stack
+- GPU support, CPU support, and MPS support (macOS Metal Performance Shaders)
+
+### Configuration
+
+| Field | Type | Required | Default | Description |
+|-------|------|----------|---------|-------------|
+| `device` | `str` | No | cuda | |
+| `distributed_backend` | `Literal['fsdp', 'deepspeed']` | No | | |
+| `checkpoint_format` | `Literal['full_state', 'huggingface']` | No | huggingface | |
+| `chat_template` | `str` | No | | |
+| `model_specific_config` | `dict` | No | `{'trust_remote_code': True, 'attn_implementation': 'sdpa'}` | |
+| `max_seq_length` | `int` | No | 2048 | |
+| `gradient_checkpointing` | `bool` | No | False | |
+| `save_total_limit` | `int` | No | 3 | |
+| `logging_steps` | `int` | No | 10 | |
+| `warmup_ratio` | `float` | No | 0.1 | |
+| `weight_decay` | `float` | No | 0.01 | |
+| `dataloader_num_workers` | `int` | No | 4 | |
+| `dataloader_pin_memory` | `bool` | No | True | |
+
+### Sample Configuration
+
+```yaml
+checkpoint_format: huggingface
+distributed_backend: null
+device: cpu
+```
+
+### Setup
+
+You can access the HuggingFace trainer via the `starter` distribution:
+
+```bash
+llama stack build --distro starter --image-type venv
+llama stack run --image-type venv ~/.llama/distributions/starter/starter-run.yaml
+```
+
+### Usage Example
+
+```python
+import time
+import uuid
+
+from llama_stack_client.types import (
+    post_training_supervised_fine_tune_params,
+    algorithm_config_param,
+)
+
+def create_http_client():
+    from llama_stack_client import LlamaStackClient
+    return LlamaStackClient(base_url="http://localhost:8321")
+
+client = create_http_client()
+
+# Example Dataset
+client.datasets.register(
+    purpose="post-training/messages",
+    source={
+        "type": "uri",
+        "uri": "huggingface://datasets/llamastack/simpleqa?split=train",
+    },
+    dataset_id="simpleqa",
+)
+
+training_config = post_training_supervised_fine_tune_params.TrainingConfig(
+    data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig(
+        batch_size=32,
+        data_format="instruct",
+        dataset_id="simpleqa",
+        shuffle=True,
+    ),
+    gradient_accumulation_steps=1,
+    max_steps_per_epoch=0,
+    max_validation_steps=1,
+    n_epochs=4,
+)
+
+algorithm_config = algorithm_config_param.LoraFinetuningConfig(
+    alpha=1,
+    apply_lora_to_mlp=True,
+    apply_lora_to_output=False,
+    lora_attn_modules=["q_proj"],
+    rank=1,
+    type="LoRA",
+)
+
+job_uuid = f"test-job{uuid.uuid4()}"
+
+# Example Model
+training_model = "ibm-granite/granite-3.3-8b-instruct"
+
+start_time = time.time()
+response = client.post_training.supervised_fine_tune(
+    job_uuid=job_uuid,
+    logger_config={},
+    model=training_model,
+    hyperparam_search_config={},
+    training_config=training_config,
+    algorithm_config=algorithm_config,
+    checkpoint_dir="output",
+)
+print("Job: ", job_uuid)
+
+# Wait for the job to complete!
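+# (The loop below polls job status every 5 seconds; for long fine-tuning
+# runs you may want an overall timeout or exponential backoff.)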
+while True: + status = client.post_training.job.status(job_uuid=job_uuid) + if not status: + print("Job not found") + break + + print(status) + if status.status == "completed": + break + + print("Waiting for job to complete...") + time.sleep(5) + +end_time = time.time() +print("Job completed in", end_time - start_time, "seconds!") + +print("Artifacts:") +print(client.post_training.job.artifacts(job_uuid=job_uuid)) +``` + +## TorchTune + +[TorchTune](https://github.com/pytorch/torchtune) is an inline post training provider for Llama Stack. It provides a simple and efficient way to fine-tune language models using PyTorch. + +### Features + +- Simple access through the post_training API +- Fully integrated with Llama Stack +- GPU support and single device capabilities +- Support for LoRA + +### Configuration + +| Field | Type | Required | Default | Description | +|-------|------|----------|---------|-------------| +| `torch_seed` | `int \| None` | No | | | +| `checkpoint_format` | `Literal['meta', 'huggingface']` | No | meta | | + +### Sample Configuration + +```yaml +checkpoint_format: meta +``` + +### Setup + +You can access the TorchTune trainer by writing your own yaml pointing to the provider: + +```yaml +post_training: + - provider_id: torchtune + provider_type: inline::torchtune + config: {} +``` + +You can then build and run your own stack with this provider. + +### Usage Example + +```python +import time +import uuid + +from llama_stack_client.types import ( + post_training_supervised_fine_tune_params, + algorithm_config_param, +) + +def create_http_client(): + from llama_stack_client import LlamaStackClient + return LlamaStackClient(base_url="http://localhost:8321") + +client = create_http_client() + +# Example Dataset +client.datasets.register( + purpose="post-training/messages", + source={ + "type": "uri", + "uri": "huggingface://datasets/llamastack/simpleqa?split=train", + }, + dataset_id="simpleqa", +) + +training_config = post_training_supervised_fine_tune_params.TrainingConfig( + data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig( + batch_size=32, + data_format="instruct", + dataset_id="simpleqa", + shuffle=True, + ), + gradient_accumulation_steps=1, + max_steps_per_epoch=0, + max_validation_steps=1, + n_epochs=4, +) + +algorithm_config = algorithm_config_param.LoraFinetuningConfig( + alpha=1, + apply_lora_to_mlp=True, + apply_lora_to_output=False, + lora_attn_modules=["q_proj"], + rank=1, + type="LoRA", +) + +job_uuid = f"test-job{uuid.uuid4()}" + +# Example Model +training_model = "meta-llama/Llama-2-7b-hf" + +start_time = time.time() +response = client.post_training.supervised_fine_tune( + job_uuid=job_uuid, + logger_config={}, + model=training_model, + hyperparam_search_config={}, + training_config=training_config, + algorithm_config=algorithm_config, + checkpoint_dir="output", +) +print("Job: ", job_uuid) + +# Wait for the job to complete! +while True: + status = client.post_training.job.status(job_uuid=job_uuid) + if not status: + print("Job not found") + break + + print(status) + if status.status == "completed": + break + + print("Waiting for job to complete...") + time.sleep(5) + +end_time = time.time() +print("Job completed in", end_time - start_time, "seconds!") + +print("Artifacts:") +print(client.post_training.job.artifacts(job_uuid=job_uuid)) +``` + +## NVIDIA + +NVIDIA's post-training provider for fine-tuning models on NVIDIA's platform. 
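+
+### Setup
+
+You'll need to set the following environment variables (carried over from the previous NeMo Customizer guide; the values shown are placeholders):
+
+```bash
+export NVIDIA_API_KEY="your-api-key"
+export NVIDIA_DATASET_NAMESPACE="default"
+export NVIDIA_CUSTOMIZER_URL="your-customizer-url"
+export NVIDIA_PROJECT_ID="your-project-id"
+export NVIDIA_OUTPUT_MODEL_DIR="your-output-model-dir"
+```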
+ +### Configuration + +| Field | Type | Required | Default | Description | +|-------|------|----------|---------|-------------| +| `api_key` | `str \| None` | No | | The NVIDIA API key. | +| `dataset_namespace` | `str \| None` | No | default | The NVIDIA dataset namespace. | +| `project_id` | `str \| None` | No | test-example-model@v1 | The NVIDIA project ID. | +| `customizer_url` | `str \| None` | No | | Base URL for the NeMo Customizer API | +| `timeout` | `int` | No | 300 | Timeout for the NVIDIA Post Training API | +| `max_retries` | `int` | No | 3 | Maximum number of retries for the NVIDIA Post Training API | +| `output_model_dir` | `str` | No | test-example-model@v1 | Directory to save the output model | + +### Sample Configuration + +```yaml +api_key: ${env.NVIDIA_API_KEY:=} +dataset_namespace: ${env.NVIDIA_DATASET_NAMESPACE:=default} +project_id: ${env.NVIDIA_PROJECT_ID:=test-project} +customizer_url: ${env.NVIDIA_CUSTOMIZER_URL:=http://nemo.test} +``` + +## Best Practices + +- **Choose the right provider**: Use HuggingFace for broader compatibility, TorchTune for Meta models, or NVIDIA for their ecosystem +- **Configure hardware appropriately**: Ensure your configuration matches your available hardware (CPU, GPU, MPS) +- **Monitor jobs**: Always monitor job status and handle completion appropriately +- **Use appropriate datasets**: Ensure your dataset format matches the expected input format for your chosen provider + +## Next Steps + +- Check out the [Building Applications - Fine-tuning](../building_applications/index.mdx) guide for application-level examples +- See the [Providers](../providers/post_training/index.mdx) section for detailed provider documentation +- Review the [API Reference](../api_reference/post_training.mdx) for complete API documentation diff --git a/docs/docs/advanced_apis/scoring.mdx b/docs/docs/advanced_apis/scoring.mdx new file mode 100644 index 000000000..15c09fa8a --- /dev/null +++ b/docs/docs/advanced_apis/scoring.mdx @@ -0,0 +1,193 @@ +# Scoring + +The Scoring API in Llama Stack allows you to evaluate outputs of your GenAI system using various scoring functions and metrics. This section covers all available scoring providers and their configuration. + +## Overview + +Llama Stack provides multiple scoring providers: + +- **Basic** (`inline::basic`) - Simple evaluation metrics and scoring functions +- **Braintrust** (`inline::braintrust`) - Advanced evaluation using the Braintrust platform +- **LLM-as-Judge** (`inline::llm-as-judge`) - Uses language models to evaluate responses + +The Scoring API is associated with `ScoringFunction` resources and provides a suite of out-of-the-box scoring functions. You can also add custom evaluators to meet specific evaluation needs. + +## Basic Scoring + +Basic scoring provider for simple evaluation metrics and scoring functions. This provider offers fundamental scoring capabilities without external dependencies. + +### Configuration + +No configuration required - this provider works out of the box. 
+ +```yaml +{} +``` + +### Features + +- Simple evaluation metrics (accuracy, precision, recall, F1-score) +- String matching and similarity metrics +- Basic statistical scoring functions +- No external dependencies required +- Fast execution for standard metrics + +### Use Cases + +- Quick evaluation of basic accuracy metrics +- String similarity comparisons +- Statistical analysis of model outputs +- Development and testing scenarios + +## Braintrust + +Braintrust scoring provider for evaluation and scoring using the [Braintrust platform](https://braintrustdata.com/). Braintrust provides advanced evaluation capabilities and experiment tracking. + +### Configuration + +| Field | Type | Required | Default | Description | +|-------|------|----------|---------|-------------| +| `openai_api_key` | `str \| None` | No | | The OpenAI API Key for LLM-powered evaluations | + +### Sample Configuration + +```yaml +openai_api_key: ${env.OPENAI_API_KEY:=} +``` + +### Features + +- Advanced evaluation metrics +- Experiment tracking and comparison +- LLM-powered evaluation functions +- Integration with Braintrust's evaluation suite +- Detailed scoring analytics and insights + +### Use Cases + +- Production evaluation pipelines +- A/B testing of model versions +- Advanced scoring with custom metrics +- Detailed evaluation reporting and analysis + +## LLM-as-Judge + +LLM-as-judge scoring provider that uses language models to evaluate and score responses. This approach leverages the reasoning capabilities of large language models to assess quality, relevance, and other subjective metrics. + +### Configuration + +No configuration required - this provider works out of the box. + +```yaml +{} +``` + +### Features + +- Subjective quality evaluation using LLMs +- Flexible evaluation criteria definition +- Natural language evaluation explanations +- Support for complex evaluation scenarios +- Contextual understanding of responses + +### Use Cases + +- Evaluating response quality and relevance +- Assessing creativity and coherence +- Subjective metric evaluation +- Human-like judgment for complex tasks + +## Usage Examples + +### Basic Scoring Example + +```python +from llama_stack_client import LlamaStackClient + +client = LlamaStackClient(base_url="http://localhost:8321") + +# Register a basic accuracy scoring function +client.scoring_functions.register( + scoring_function_id="basic_accuracy", + provider_id="basic", + provider_scoring_function_id="accuracy" +) + +# Use the scoring function +result = client.scoring.score( + input_rows=[ + {"expected": "Paris", "actual": "Paris"}, + {"expected": "London", "actual": "Paris"} + ], + scoring_function_id="basic_accuracy" +) +print(f"Accuracy: {result.results[0].score}") +``` + +### LLM-as-Judge Example + +```python +# Register an LLM-as-judge scoring function +client.scoring_functions.register( + scoring_function_id="quality_judge", + provider_id="llm_judge", + provider_scoring_function_id="response_quality", + params={ + "criteria": "Evaluate response quality, relevance, and helpfulness", + "scale": "1-10" + } +) + +# Score responses using LLM judgment +result = client.scoring.score( + input_rows=[{ + "query": "What is machine learning?", + "response": "Machine learning is a subset of AI that enables computers to learn patterns from data..." 
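+        # Each input row pairs the user query with the candidate response
+        # that the judge model scores against the criteria defined above.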
+    }],
+    scoring_function_id="quality_judge"
+)
+```
+
+### Braintrust Integration Example
+
+```python
+# Register a Braintrust scoring function
+client.scoring_functions.register(
+    scoring_function_id="braintrust_eval",
+    provider_id="braintrust",
+    provider_scoring_function_id="semantic_similarity"
+)
+
+# Run evaluation with Braintrust
+result = client.scoring.score(
+    input_rows=[{
+        "reference": "The capital of France is Paris",
+        "candidate": "Paris is the capital city of France"
+    }],
+    scoring_function_id="braintrust_eval"
+)
+```
+
+## Best Practices
+
+- **Choose appropriate providers**: Use Basic for simple metrics, Braintrust for advanced analytics, LLM-as-Judge for subjective evaluation
+- **Define clear criteria**: When using LLM-as-Judge, provide specific evaluation criteria and scales
+- **Validate scoring functions**: Test your scoring functions with known examples before production use
+- **Monitor performance**: Track scoring performance and adjust thresholds based on results
+- **Combine multiple metrics**: Use different scoring providers together for comprehensive evaluation
+
+## Integration with Evaluation
+
+The Scoring API works closely with the [Evaluation](./evaluation.mdx) API to provide comprehensive evaluation workflows:
+
+1. **Datasets** are loaded via the DatasetIO API
+2. **Evaluation** generates model outputs using the Eval API
+3. **Scoring** evaluates the quality of outputs using various scoring functions
+4. **Results** are aggregated and reported for analysis
+
+## Next Steps
+
+- Check out the [Evaluation](./evaluation.mdx) guide for running complete evaluations
+- See the [Building Applications - Evaluation](../building_applications/evals.mdx) guide for application examples
+- Review the [Evaluation Reference](../references/evals_reference/index.mdx) for comprehensive scoring function usage
+- Explore the [Evaluation Concepts](./evaluation.mdx#evaluation-concepts) section for detailed conceptual information
diff --git a/docs/source/advanced_apis/eval/index.md b/docs/source/advanced_apis/eval/index.md
deleted file mode 100644
index 330380670..000000000
--- a/docs/source/advanced_apis/eval/index.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# Eval Providers
-
-This section contains documentation for all available providers for the **eval** API.
-
-- [inline::meta-reference](inline_meta-reference.md)
-- [remote::nvidia](remote_nvidia.md)
\ No newline at end of file
diff --git a/docs/source/advanced_apis/eval/inline_meta-reference.md b/docs/source/advanced_apis/eval/inline_meta-reference.md
deleted file mode 100644
index 5bec89cfc..000000000
--- a/docs/source/advanced_apis/eval/inline_meta-reference.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-orphan: true
----
-
-# inline::meta-reference
-
-## Description
-
-Meta's reference implementation of evaluation tasks with support for multiple languages and evaluation metrics.
- -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `kvstore` | `utils.kvstore.config.RedisKVStoreConfig \| utils.kvstore.config.SqliteKVStoreConfig \| utils.kvstore.config.PostgresKVStoreConfig \| utils.kvstore.config.MongoDBKVStoreConfig` | No | sqlite | | - -## Sample Configuration - -```yaml -kvstore: - type: sqlite - db_path: ${env.SQLITE_STORE_DIR:=~/.llama/dummy}/meta_reference_eval.db - -``` - diff --git a/docs/source/advanced_apis/eval/remote_nvidia.md b/docs/source/advanced_apis/eval/remote_nvidia.md deleted file mode 100644 index ab91767d6..000000000 --- a/docs/source/advanced_apis/eval/remote_nvidia.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -orphan: true ---- - -# remote::nvidia - -## Description - -NVIDIA's evaluation provider for running evaluation tasks on NVIDIA's platform. - -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `evaluator_url` | `` | No | http://0.0.0.0:7331 | The url for accessing the evaluator service | - -## Sample Configuration - -```yaml -evaluator_url: ${env.NVIDIA_EVALUATOR_URL:=http://localhost:7331} - -``` - diff --git a/docs/source/advanced_apis/evaluation_concepts.md b/docs/source/advanced_apis/evaluation_concepts.md deleted file mode 100644 index 52ad53ece..000000000 --- a/docs/source/advanced_apis/evaluation_concepts.md +++ /dev/null @@ -1,77 +0,0 @@ -## Evaluation Concepts - -The Llama Stack Evaluation flow allows you to run evaluations on your GenAI application datasets or pre-registered benchmarks. - -We introduce a set of APIs in Llama Stack for supporting running evaluations of LLM applications. -- `/datasetio` + `/datasets` API -- `/scoring` + `/scoring_functions` API -- `/eval` + `/benchmarks` API - -This guide goes over the sets of APIs and developer experience flow of using Llama Stack to run evaluations for different use cases. Checkout our Colab notebook on working examples with evaluations [here](https://colab.research.google.com/drive/10CHyykee9j2OigaIcRv47BKG9mrNm0tJ?usp=sharing). - - -The Evaluation APIs are associated with a set of Resources. Please visit the Resources section in our [Core Concepts](../concepts/index.md) guide for better high-level understanding. - -- **DatasetIO**: defines interface with datasets and data loaders. - - Associated with `Dataset` resource. -- **Scoring**: evaluate outputs of the system. - - Associated with `ScoringFunction` resource. We provide a suite of out-of-the box scoring functions and also the ability for you to add custom evaluators. These scoring functions are the core part of defining an evaluation task to output evaluation metrics. -- **Eval**: generate outputs (via Inference or Agents) and perform scoring. - - Associated with `Benchmark` resource. - - -### Open-benchmark Eval - -#### List of open-benchmarks Llama Stack support - -Llama stack pre-registers several popular open-benchmarks to easily evaluate model perfomance via CLI. - -The list of open-benchmarks we currently support: -- [MMLU-COT](https://arxiv.org/abs/2009.03300) (Measuring Massive Multitask Language Understanding): Benchmark designed to comprehensively evaluate the breadth and depth of a model's academic and professional understanding -- [GPQA-COT](https://arxiv.org/abs/2311.12022) (A Graduate-Level Google-Proof Q&A Benchmark): A challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. 
-- [SimpleQA](https://openai.com/index/introducing-simpleqa/): Benchmark designed to access models to answer short, fact-seeking questions. -- [MMMU](https://arxiv.org/abs/2311.16502) (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)]: Benchmark designed to evaluate multimodal models. - - -You can follow this [contributing guide](../references/evals_reference/index.md#open-benchmark-contributing-guide) to add more open-benchmarks to Llama Stack - -#### Run evaluation on open-benchmarks via CLI - -We have built-in functionality to run the supported open-benckmarks using llama-stack-client CLI - -#### Spin up Llama Stack server - -Spin up llama stack server with 'open-benchmark' template -``` -llama stack run llama_stack/distributions/open-benchmark/run.yaml - -``` - -#### Run eval CLI -There are 3 necessary inputs to run a benchmark eval -- `list of benchmark_ids`: The list of benchmark ids to run evaluation on -- `model-id`: The model id to evaluate on -- `output_dir`: Path to store the evaluate results -``` -llama-stack-client eval run-benchmark ... \ ---model_id \ ---output_dir \ -``` - -You can run -``` -llama-stack-client eval run-benchmark help -``` -to see the description of all the flags that eval run-benchmark has - - -In the output log, you can find the file path that has your evaluation results. Open that file and you can see you aggregate -evaluation results over there. - - - -#### What's Next? - -- Check out our Colab notebook on working examples with running benchmark evaluations [here](https://colab.research.google.com/github/meta-llama/llama-stack/blob/main/docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb#scrollTo=mxLCsP4MvFqP). -- Check out our [Building Applications - Evaluation](../building_applications/evals.md) guide for more details on how to use the Evaluation APIs to evaluate your applications. -- Check out our [Evaluation Reference](../references/evals_reference/index.md) for more details on the APIs. diff --git a/docs/source/advanced_apis/index.md b/docs/source/advanced_apis/index.md deleted file mode 100644 index b10672c29..000000000 --- a/docs/source/advanced_apis/index.md +++ /dev/null @@ -1,33 +0,0 @@ -# Advanced APIs - -## Post-training -Fine-tunes a model. - -```{toctree} -:maxdepth: 1 - -post_training/index -``` - -## Eval -Generates outputs (via Inference or Agents) and perform scoring. - -```{toctree} -:maxdepth: 1 - -eval/index -``` - -```{include} evaluation_concepts.md -:start-after: ## Evaluation Concepts -``` - -## Scoring -Evaluates the outputs of the system. - -```{toctree} -:maxdepth: 1 - -scoring/index -``` - diff --git a/docs/source/advanced_apis/post_training/huggingface.md b/docs/source/advanced_apis/post_training/huggingface.md deleted file mode 100644 index a7609d6da..000000000 --- a/docs/source/advanced_apis/post_training/huggingface.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -orphan: true ---- -# HuggingFace SFTTrainer - -[HuggingFace SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer) is an inline post training provider for Llama Stack. It allows you to run supervised fine tuning on a variety of models using many datasets - -## Features - -- Simple access through the post_training API -- Fully integrated with Llama Stack -- GPU support, CPU support, and MPS support (MacOS Metal Performance Shaders) - -## Usage - -To use the HF SFTTrainer in your Llama Stack project, follow these steps: - -1. Configure your Llama Stack project to use this provider. -2. 
Kick off a SFT job using the Llama Stack post_training API. - -## Setup - -You can access the HuggingFace trainer via the `ollama` distribution: - -```bash -llama stack build --distro starter --image-type venv -llama stack run --image-type venv ~/.llama/distributions/ollama/ollama-run.yaml -``` - -## Run Training - -You can access the provider and the `supervised_fine_tune` method via the post_training API: - -```python -import time -import uuid - - -from llama_stack_client.types import ( - post_training_supervised_fine_tune_params, - algorithm_config_param, -) - - -def create_http_client(): - from llama_stack_client import LlamaStackClient - - return LlamaStackClient(base_url="http://localhost:8321") - - -client = create_http_client() - -# Example Dataset -client.datasets.register( - purpose="post-training/messages", - source={ - "type": "uri", - "uri": "huggingface://datasets/llamastack/simpleqa?split=train", - }, - dataset_id="simpleqa", -) - -training_config = post_training_supervised_fine_tune_params.TrainingConfig( - data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig( - batch_size=32, - data_format="instruct", - dataset_id="simpleqa", - shuffle=True, - ), - gradient_accumulation_steps=1, - max_steps_per_epoch=0, - max_validation_steps=1, - n_epochs=4, -) - -algorithm_config = algorithm_config_param.LoraFinetuningConfig( # this config is also currently mandatory but should not be - alpha=1, - apply_lora_to_mlp=True, - apply_lora_to_output=False, - lora_attn_modules=["q_proj"], - rank=1, - type="LoRA", -) - -job_uuid = f"test-job{uuid.uuid4()}" - -# Example Model -training_model = "ibm-granite/granite-3.3-8b-instruct" - -start_time = time.time() -response = client.post_training.supervised_fine_tune( - job_uuid=job_uuid, - logger_config={}, - model=training_model, - hyperparam_search_config={}, - training_config=training_config, - algorithm_config=algorithm_config, - checkpoint_dir="output", -) -print("Job: ", job_uuid) - - -# Wait for the job to complete! -while True: - status = client.post_training.job.status(job_uuid=job_uuid) - if not status: - print("Job not found") - break - - print(status) - if status.status == "completed": - break - - print("Waiting for job to complete...") - time.sleep(5) - -end_time = time.time() -print("Job completed in", end_time - start_time, "seconds!") - -print("Artifacts:") -print(client.post_training.job.artifacts(job_uuid=job_uuid)) -``` diff --git a/docs/source/advanced_apis/post_training/index.md b/docs/source/advanced_apis/post_training/index.md deleted file mode 100644 index 35d10d14b..000000000 --- a/docs/source/advanced_apis/post_training/index.md +++ /dev/null @@ -1,7 +0,0 @@ -# Post_Training Providers - -This section contains documentation for all available providers for the **post_training** API. - -- [inline::huggingface](inline_huggingface.md) -- [inline::torchtune](inline_torchtune.md) -- [remote::nvidia](remote_nvidia.md) \ No newline at end of file diff --git a/docs/source/advanced_apis/post_training/inline_huggingface.md b/docs/source/advanced_apis/post_training/inline_huggingface.md deleted file mode 100644 index 6536b4f8c..000000000 --- a/docs/source/advanced_apis/post_training/inline_huggingface.md +++ /dev/null @@ -1,40 +0,0 @@ ---- -orphan: true ---- - -# inline::huggingface - -## Description - -HuggingFace-based post-training provider for fine-tuning models using the HuggingFace ecosystem. 
- -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `device` | `` | No | cuda | | -| `distributed_backend` | `Literal['fsdp', 'deepspeed'` | No | | | -| `checkpoint_format` | `Literal['full_state', 'huggingface'` | No | huggingface | | -| `chat_template` | `` | No | | -| `model_specific_config` | `` | No | {'trust_remote_code': True, 'attn_implementation': 'sdpa'} | | -| `max_seq_length` | `` | No | 2048 | | -| `gradient_checkpointing` | `` | No | False | | -| `save_total_limit` | `` | No | 3 | | -| `logging_steps` | `` | No | 10 | | -| `warmup_ratio` | `` | No | 0.1 | | -| `weight_decay` | `` | No | 0.01 | | -| `dataloader_num_workers` | `` | No | 4 | | -| `dataloader_pin_memory` | `` | No | True | | - -## Sample Configuration - -```yaml -checkpoint_format: huggingface -distributed_backend: null -device: cpu - -``` - -[Find more detailed information here!](huggingface.md) - - diff --git a/docs/source/advanced_apis/post_training/inline_torchtune.md b/docs/source/advanced_apis/post_training/inline_torchtune.md deleted file mode 100644 index 617975b0d..000000000 --- a/docs/source/advanced_apis/post_training/inline_torchtune.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -orphan: true ---- - -# inline::torchtune - -## Description - -TorchTune-based post-training provider for fine-tuning and optimizing models using Meta's TorchTune framework. - -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `torch_seed` | `int \| None` | No | | | -| `checkpoint_format` | `Literal['meta', 'huggingface'` | No | meta | | - -## Sample Configuration - -```yaml -checkpoint_format: meta - -``` - -[Find more detailed information here!](torchtune.md) diff --git a/docs/source/advanced_apis/post_training/nvidia_nemo.md b/docs/source/advanced_apis/post_training/nvidia_nemo.md deleted file mode 100644 index 1a7adbe16..000000000 --- a/docs/source/advanced_apis/post_training/nvidia_nemo.md +++ /dev/null @@ -1,163 +0,0 @@ ---- -orphan: true ---- -# NVIDIA NEMO - -[NVIDIA NEMO](https://developer.nvidia.com/nemo-framework) is a remote post training provider for Llama Stack. It provides enterprise-grade fine-tuning capabilities through NVIDIA's NeMo Customizer service. - -## Features - -- Enterprise-grade fine-tuning capabilities -- Support for LoRA and SFT fine-tuning -- Integration with NVIDIA's NeMo Customizer service -- Support for various NVIDIA-optimized models -- Efficient training with NVIDIA hardware acceleration - -## Usage - -To use NVIDIA NEMO in your Llama Stack project, follow these steps: - -1. Configure your Llama Stack project to use this provider. -2. Set up your NVIDIA API credentials. -3. Kick off a fine-tuning job using the Llama Stack post_training API. 
- -## Setup - -You'll need to set the following environment variables: - -```bash -export NVIDIA_API_KEY="your-api-key" -export NVIDIA_DATASET_NAMESPACE="default" -export NVIDIA_CUSTOMIZER_URL="your-customizer-url" -export NVIDIA_PROJECT_ID="your-project-id" -export NVIDIA_OUTPUT_MODEL_DIR="your-output-model-dir" -``` - -## Run Training - -You can access the provider and the `supervised_fine_tune` method via the post_training API: - -```python -import time -import uuid - -from llama_stack_client.types import ( - post_training_supervised_fine_tune_params, - algorithm_config_param, -) - - -def create_http_client(): - from llama_stack_client import LlamaStackClient - - return LlamaStackClient(base_url="http://localhost:8321") - - -client = create_http_client() - -# Example Dataset -client.datasets.register( - purpose="post-training/messages", - source={ - "type": "uri", - "uri": "huggingface://datasets/llamastack/simpleqa?split=train", - }, - dataset_id="simpleqa", -) - -training_config = post_training_supervised_fine_tune_params.TrainingConfig( - data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig( - batch_size=8, # Default batch size for NEMO - data_format="instruct", - dataset_id="simpleqa", - shuffle=True, - ), - n_epochs=50, # Default epochs for NEMO - optimizer_config=post_training_supervised_fine_tune_params.TrainingConfigOptimizerConfig( - lr=0.0001, # Default learning rate - weight_decay=0.01, # NEMO-specific parameter - ), - # NEMO-specific parameters - log_every_n_steps=None, - val_check_interval=0.25, - sequence_packing_enabled=False, - hidden_dropout=None, - attention_dropout=None, - ffn_dropout=None, -) - -algorithm_config = algorithm_config_param.LoraFinetuningConfig( - alpha=16, # Default alpha for NEMO - type="LoRA", -) - -job_uuid = f"test-job{uuid.uuid4()}" - -# Example Model - must be a supported NEMO model -training_model = "meta/llama-3.1-8b-instruct" - -start_time = time.time() -response = client.post_training.supervised_fine_tune( - job_uuid=job_uuid, - logger_config={}, - model=training_model, - hyperparam_search_config={}, - training_config=training_config, - algorithm_config=algorithm_config, - checkpoint_dir="output", -) -print("Job: ", job_uuid) - -# Wait for the job to complete! -while True: - status = client.post_training.job.status(job_uuid=job_uuid) - if not status: - print("Job not found") - break - - print(status) - if status.status == "completed": - break - - print("Waiting for job to complete...") - time.sleep(5) - -end_time = time.time() -print("Job completed in", end_time - start_time, "seconds!") - -print("Artifacts:") -print(client.post_training.job.artifacts(job_uuid=job_uuid)) -``` - -## Supported Models - -Currently supports the following models: -- meta/llama-3.1-8b-instruct -- meta/llama-3.2-1b-instruct - -## Supported Parameters - -### TrainingConfig -- n_epochs (default: 50) -- data_config -- optimizer_config -- log_every_n_steps -- val_check_interval (default: 0.25) -- sequence_packing_enabled (default: False) -- hidden_dropout (0.0-1.0) -- attention_dropout (0.0-1.0) -- ffn_dropout (0.0-1.0) - -### DataConfig -- dataset_id -- batch_size (default: 8) - -### OptimizerConfig -- lr (default: 0.0001) -- weight_decay (default: 0.01) - -### LoRA Config -- alpha (default: 16) -- type (must be "LoRA") - -Note: Some parameters from the standard Llama Stack API are not supported and will be ignored with a warning. 
diff --git a/docs/source/advanced_apis/post_training/remote_nvidia.md b/docs/source/advanced_apis/post_training/remote_nvidia.md deleted file mode 100644 index 9840fa3c4..000000000 --- a/docs/source/advanced_apis/post_training/remote_nvidia.md +++ /dev/null @@ -1,32 +0,0 @@ ---- -orphan: true ---- - -# remote::nvidia - -## Description - -NVIDIA's post-training provider for fine-tuning models on NVIDIA's platform. - -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `api_key` | `str \| None` | No | | The NVIDIA API key. | -| `dataset_namespace` | `str \| None` | No | default | The NVIDIA dataset namespace. | -| `project_id` | `str \| None` | No | test-example-model@v1 | The NVIDIA project ID. | -| `customizer_url` | `str \| None` | No | | Base URL for the NeMo Customizer API | -| `timeout` | `` | No | 300 | Timeout for the NVIDIA Post Training API | -| `max_retries` | `` | No | 3 | Maximum number of retries for the NVIDIA Post Training API | -| `output_model_dir` | `` | No | test-example-model@v1 | Directory to save the output model | - -## Sample Configuration - -```yaml -api_key: ${env.NVIDIA_API_KEY:=} -dataset_namespace: ${env.NVIDIA_DATASET_NAMESPACE:=default} -project_id: ${env.NVIDIA_PROJECT_ID:=test-project} -customizer_url: ${env.NVIDIA_CUSTOMIZER_URL:=http://nemo.test} - -``` - diff --git a/docs/source/advanced_apis/post_training/torchtune.md b/docs/source/advanced_apis/post_training/torchtune.md deleted file mode 100644 index ef72505b1..000000000 --- a/docs/source/advanced_apis/post_training/torchtune.md +++ /dev/null @@ -1,125 +0,0 @@ ---- -orphan: true ---- -# TorchTune - -[TorchTune](https://github.com/pytorch/torchtune) is an inline post training provider for Llama Stack. It provides a simple and efficient way to fine-tune language models using PyTorch. - -## Features - -- Simple access through the post_training API -- Fully integrated with Llama Stack -- GPU support and single device capabilities. -- Support for LoRA - -## Usage - -To use TorchTune in your Llama Stack project, follow these steps: - -1. Configure your Llama Stack project to use this provider. -2. Kick off a fine-tuning job using the Llama Stack post_training API. - -## Setup - -You can access the TorchTune trainer by writing your own yaml pointing to the provider: - -```yaml -post_training: - - provider_id: torchtune - provider_type: inline::torchtune - config: {} -``` - -you can then build and run your own stack with this provider. 
- -## Run Training - -You can access the provider and the `supervised_fine_tune` method via the post_training API: - -```python -import time -import uuid - -from llama_stack_client.types import ( - post_training_supervised_fine_tune_params, - algorithm_config_param, -) - - -def create_http_client(): - from llama_stack_client import LlamaStackClient - - return LlamaStackClient(base_url="http://localhost:8321") - - -client = create_http_client() - -# Example Dataset -client.datasets.register( - purpose="post-training/messages", - source={ - "type": "uri", - "uri": "huggingface://datasets/llamastack/simpleqa?split=train", - }, - dataset_id="simpleqa", -) - -training_config = post_training_supervised_fine_tune_params.TrainingConfig( - data_config=post_training_supervised_fine_tune_params.TrainingConfigDataConfig( - batch_size=32, - data_format="instruct", - dataset_id="simpleqa", - shuffle=True, - ), - gradient_accumulation_steps=1, - max_steps_per_epoch=0, - max_validation_steps=1, - n_epochs=4, -) - -algorithm_config = algorithm_config_param.LoraFinetuningConfig( - alpha=1, - apply_lora_to_mlp=True, - apply_lora_to_output=False, - lora_attn_modules=["q_proj"], - rank=1, - type="LoRA", -) - -job_uuid = f"test-job{uuid.uuid4()}" - -# Example Model -training_model = "meta-llama/Llama-2-7b-hf" - -start_time = time.time() -response = client.post_training.supervised_fine_tune( - job_uuid=job_uuid, - logger_config={}, - model=training_model, - hyperparam_search_config={}, - training_config=training_config, - algorithm_config=algorithm_config, - checkpoint_dir="output", -) -print("Job: ", job_uuid) - -# Wait for the job to complete! -while True: - status = client.post_training.job.status(job_uuid=job_uuid) - if not status: - print("Job not found") - break - - print(status) - if status.status == "completed": - break - - print("Waiting for job to complete...") - time.sleep(5) - -end_time = time.time() -print("Job completed in", end_time - start_time, "seconds!") - -print("Artifacts:") -print(client.post_training.job.artifacts(job_uuid=job_uuid)) -``` diff --git a/docs/source/advanced_apis/scoring/index.md b/docs/source/advanced_apis/scoring/index.md deleted file mode 100644 index 3cf7af537..000000000 --- a/docs/source/advanced_apis/scoring/index.md +++ /dev/null @@ -1,7 +0,0 @@ -# Scoring Providers - -This section contains documentation for all available providers for the **scoring** API. - -- [inline::basic](inline_basic.md) -- [inline::braintrust](inline_braintrust.md) -- [inline::llm-as-judge](inline_llm-as-judge.md) \ No newline at end of file diff --git a/docs/source/advanced_apis/scoring/inline_basic.md b/docs/source/advanced_apis/scoring/inline_basic.md deleted file mode 100644 index b56b36013..000000000 --- a/docs/source/advanced_apis/scoring/inline_basic.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -orphan: true ---- - -# inline::basic - -## Description - -Basic scoring provider for simple evaluation metrics and scoring functions. - -## Sample Configuration - -```yaml -{} - -``` - diff --git a/docs/source/advanced_apis/scoring/inline_braintrust.md b/docs/source/advanced_apis/scoring/inline_braintrust.md deleted file mode 100644 index d1278217c..000000000 --- a/docs/source/advanced_apis/scoring/inline_braintrust.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -orphan: true ---- - -# inline::braintrust - -## Description - -Braintrust scoring provider for evaluation and scoring using the Braintrust platform. 
- -## Configuration - -| Field | Type | Required | Default | Description | -|-------|------|----------|---------|-------------| -| `openai_api_key` | `str \| None` | No | | The OpenAI API Key | - -## Sample Configuration - -```yaml -openai_api_key: ${env.OPENAI_API_KEY:=} - -``` - diff --git a/docs/source/advanced_apis/scoring/inline_llm-as-judge.md b/docs/source/advanced_apis/scoring/inline_llm-as-judge.md deleted file mode 100644 index c7fcddf37..000000000 --- a/docs/source/advanced_apis/scoring/inline_llm-as-judge.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -orphan: true ---- - -# inline::llm-as-judge - -## Description - -LLM-as-judge scoring provider that uses language models to evaluate and score responses. - -## Sample Configuration - -```yaml -{} - -``` -