added templates and enhanced readme (#307)
Co-authored-by: Justin Lee <justinai@fb.com>
parent 3e1c3fdb3f
commit b6d8246b82
5 changed files with 293 additions and 136 deletions
.github/ISSUE_TEMPLATE/bug.yml (new file, 77 lines)
@@ -0,0 +1,77 @@
name: 🐛 Bug Report
description: Create a report to help us reproduce and fix the bug

body:
  - type: markdown
    attributes:
      value: >
        #### Before submitting a bug, please make sure the issue hasn't been already addressed by searching through [the
        existing and past issues](https://github.com/meta-llama/llama-stack/issues).

  - type: textarea
    id: system-info
    attributes:
      label: System Info
      description: |
        Please share your system info with us. You can use the following command to capture your environment information
        python -m "torch.utils.collect_env"
      placeholder: |
        PyTorch version, CUDA version, GPU type, #num of GPUs...
    validations:
      required: true

  - type: checkboxes
    id: information-scripts-examples
    attributes:
      label: Information
      description: 'The problem arises when using:'
      options:
        - label: "The official example scripts"
        - label: "My own modified scripts"

  - type: textarea
    id: bug-description
    attributes:
      label: 🐛 Describe the bug
      description: |
        Please provide a clear and concise description of what the bug is.

        Please also paste or describe the results you observe instead of the expected results.
      placeholder: |
        A clear and concise description of what the bug is.

        ```llama stack
        # Command that you used for running the examples
        ```
        Description of the results
    validations:
      required: true

  - type: textarea
    attributes:
      label: Error logs
      description: |
        If you observe an error, please paste the error message including the **full** traceback of the exception. It may be relevant to wrap error messages in ```` ```triple quotes blocks``` ````.
      placeholder: |
        ```
        The error message you got, with the full traceback.
        ```
    validations:
      required: true

  - type: textarea
    id: expected-behavior
    validations:
      required: true
    attributes:
      label: Expected behavior
      description: "A clear and concise description of what you would expect to happen."

  - type: markdown
    attributes:
      value: >
        Thanks for contributing 🎉!

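A quick way to fill in the "System Info" field above is to run the environment collector the template points to and paste its output. This is a minimal sketch using only standard shell redirection; the `system_info.txt` filename is illustrative, and PyTorch is assumed to be installed.

```bash
# Capture environment details for the bug report's "System Info" field.
# Assumes PyTorch is installed; the module path comes from the template above.
python -m "torch.utils.collect_env" > system_info.txt

# Review the output, then paste it into the issue form.
cat system_info.txt
```
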
.github/ISSUE_TEMPLATE/feature-request.yml (new file, 31 lines)
@@ -0,0 +1,31 @@
name: 🚀 Feature request
description: Submit a proposal/request for a new llama-stack feature

body:
  - type: textarea
    id: feature-pitch
    attributes:
      label: 🚀 The feature, motivation and pitch
      description: >
        A clear and concise description of the feature proposal. Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link here too.
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Alternatives
      description: >
        A description of any alternative solutions or features you've considered, if any.

  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: >
        Add any other context or screenshots about the feature request.

  - type: markdown
    attributes:
      value: >
        Thanks for contributing 🎉!

.github/PULL_REQUEST_TEMPLATE.md (new file, 31 lines)
@@ -0,0 +1,31 @@
# What does this PR do?

Closes # (issue)

## Feature/Issue validation/testing/test plan

Please describe the tests that you ran to verify your changes and relevant result summary. Provide instructions so it can be reproduced.
Please also list any relevant details for your test configuration or test plan.

- [ ] Test A
      Logs for Test A

- [ ] Test B
      Logs for Test B

## Sources

Please link relevant resources if necessary.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Was this discussed/approved via a Github issue? Please add a link
      to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?

Thanks for contributing 🎉!

README.md (29 changed lines)
@@ -65,23 +65,30 @@ A Distribution is where APIs and Providers are assembled together to provide a c
| Dell-TGI | [Local TGI + Chroma](https://hub.docker.com/repository/docker/llamastack/llamastack-local-tgi-chroma/general) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

## Installation

You have two ways to install this repository:

1. **Install as a package**:
   You can install the repository directly from [PyPI](https://pypi.org/project/llama-stack/) by running the following command:
   ```bash
   pip install llama-stack
   ```

2. **Install from source**:
   If you prefer to install from the source code, follow these steps:
   ```bash
   mkdir -p ~/local
   cd ~/local
   git clone git@github.com:meta-llama/llama-stack.git

   conda create -n stack python=3.10
   conda activate stack

   cd llama-stack
   $CONDA_PREFIX/bin/pip install -e .
   ```

## Documentations

@@ -5,163 +5,174 @@ This guide will walk you though the steps to get started on end-to-end flow for
## Installation

The `llama` CLI tool helps you set up and use the Llama toolchain & agentic systems. It should be available on your path after installing the `llama-stack` package.

You have two ways to install this repository:

1. **Install as a package**:
   You can install the repository directly from [PyPI](https://pypi.org/project/llama-stack/) by running the following command:
   ```bash
   pip install llama-stack
   ```

2. **Install from source**:
   If you prefer to install from the source code, follow these steps:
   ```bash
   mkdir -p ~/local
   cd ~/local
   git clone git@github.com:meta-llama/llama-stack.git

   conda create -n stack python=3.10
   conda activate stack

   cd llama-stack
   $CONDA_PREFIX/bin/pip install -e .
   ```

For what you can do with the Llama CLI, please refer to [CLI Reference](./cli_reference.md).

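Either install path should leave the `llama` CLI on your PATH, as noted above. A minimal sanity check, assuming the install completed cleanly (`pip show` and `--help` are generic tooling, not commands introduced by this commit):

```bash
# Confirm the package is installed and visible to pip.
pip show llama-stack

# Confirm the CLI entry point is on PATH; it should print its usage/help text.
llama --help
```
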
## Starting Up Llama Stack Server

You have two ways to start up the Llama Stack server:

1. **Starting up server via docker**:

   We provide 2 pre-built Docker images of Llama Stack distribution, which can be found at the following links.
   - [llamastack-local-gpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-gpu/general)
     - This is a packaged version with our local meta-reference implementations, where you will be running inference locally with downloaded Llama model checkpoints.
   - [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general)
     - This is a lite version with remote inference where you can hook up to your favourite remote inference framework (e.g. ollama, fireworks, together, tgi) for running inference without GPU.

   > [!NOTE]
   > For GPU inference, you need to set these environment variables for specifying the local directory containing your model checkpoints, and enable GPU inference to start running the docker container.
   ```
   export LLAMA_CHECKPOINT_DIR=~/.llama
   ```

   > [!NOTE]
   > `~/.llama` should be the path containing downloaded weights of Llama models.

   To download llama models, use
   ```
   llama download --model-id Llama3.1-8B-Instruct
   ```

   To download and start running a pre-built docker container, you may use the following commands:
   ```
   docker run -it -p 5000:5000 -v ~/.llama:/root/.llama --gpus=all llamastack/llamastack-local-gpu
   ```

   > [!TIP]
   > Pro Tip: We may use `docker compose up` for starting up a distribution with remote providers (e.g. TGI) using [llamastack-local-cpu](https://hub.docker.com/repository/docker/llamastack/llamastack-local-cpu/general). You can check out [these scripts](../distributions/) to help you get started.

2. **Build->Configure->Run Llama Stack server via conda**:

   You may also build a LlamaStack distribution from scratch, configure it, and start running the distribution. This is useful for developing on LlamaStack.

   **`llama stack build`**
   - You'll be prompted to enter build information interactively.
   ```
   llama stack build

   > Enter an unique name for identifying your Llama Stack build distribution (e.g. my-local-stack): my-local-stack
   > Enter the image type you want your distribution to be built with (docker or conda): conda

   Llama Stack is composed of several APIs working together. Let's configure the providers (implementations) you want to use for these APIs.
   > Enter the API provider for the inference API: (default=meta-reference): meta-reference
   > Enter the API provider for the safety API: (default=meta-reference): meta-reference
   > Enter the API provider for the agents API: (default=meta-reference): meta-reference
   > Enter the API provider for the memory API: (default=meta-reference): meta-reference
   > Enter the API provider for the telemetry API: (default=meta-reference): meta-reference

   > (Optional) Enter a short description for your Llama Stack distribution:

   Build spec configuration saved at ~/.conda/envs/llamastack-my-local-stack/my-local-stack-build.yaml
   You can now run `llama stack configure my-local-stack`
   ```

   **`llama stack configure`**
   - Run `llama stack configure <name>` with the name you have previously defined in `build` step.
   ```
   llama stack configure <name>
   ```
   - You will be prompted to enter configurations for your Llama Stack

   ```
   $ llama stack configure my-local-stack

   Could not find my-local-stack. Trying conda build name instead...
   Configuring API `inference`...
   === Configuring provider `meta-reference` for API inference...
   Enter value for model (default: Llama3.1-8B-Instruct) (required):
   Do you want to configure quantization? (y/n): n
   Enter value for torch_seed (optional):
   Enter value for max_seq_len (default: 4096) (required):
   Enter value for max_batch_size (default: 1) (required):

   Configuring API `safety`...
   === Configuring provider `meta-reference` for API safety...
   Do you want to configure llama_guard_shield? (y/n): n
   Do you want to configure prompt_guard_shield? (y/n): n

   Configuring API `agents`...
   === Configuring provider `meta-reference` for API agents...
   Enter `type` for persistence_store (options: redis, sqlite, postgres) (default: sqlite):

   Configuring SqliteKVStoreConfig:
   Enter value for namespace (optional):
   Enter value for db_path (default: /home/xiyan/.llama/runtime/kvstore.db) (required):

   Configuring API `memory`...
   === Configuring provider `meta-reference` for API memory...
   > Please enter the supported memory bank type your provider has for memory: vector

   Configuring API `telemetry`...
   === Configuring provider `meta-reference` for API telemetry...

   > YAML configuration has been written to ~/.llama/builds/conda/my-local-stack-run.yaml.
   You can now run `llama stack run my-local-stack --port PORT`
   ```

   **`llama stack run`**
   - Run `llama stack run <name>` with the name you have previously defined.
   ```
   llama stack run my-local-stack

   ...
   > initializing model parallel with size 1
   > initializing ddp with size 1
   > initializing pipeline with size 1
   ...
   Finished model load YES READY
   Serving POST /inference/chat_completion
   Serving POST /inference/completion
   Serving POST /inference/embeddings
   Serving POST /memory_banks/create
   Serving DELETE /memory_bank/documents/delete
   Serving DELETE /memory_banks/drop
   Serving GET /memory_bank/documents/get
   Serving GET /memory_banks/get
   Serving POST /memory_bank/insert
   Serving GET /memory_banks/list
   Serving POST /memory_bank/query
   Serving POST /memory_bank/update
   Serving POST /safety/run_shield
   Serving POST /agentic_system/create
   Serving POST /agentic_system/session/create
   Serving POST /agentic_system/turn/create
   Serving POST /agentic_system/delete
   Serving POST /agentic_system/session/delete
   Serving POST /agentic_system/session/get
   Serving POST /agentic_system/step/get
   Serving POST /agentic_system/turn/get
   Serving GET /telemetry/get_trace
   Serving POST /telemetry/log_event
   Listening on :::5000
   INFO: Started server process [587053]
   INFO: Waiting for application startup.
   INFO: Application startup complete.
   INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
   ```

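Whichever startup path you used, the log above ends with Uvicorn listening on port 5000. A generic liveness check, assuming the default port from the examples (the root path is not documented here, so any HTTP response, even a 404, only confirms the server process is up):

```bash
# Hit the server on the default port; an HTTP response (status may be 404) means it is listening.
curl -i http://localhost:5000/
```
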
## Testing with client