Update default port from 5000 -> 8321

Ashwin Bharambe 2025-01-16 15:26:48 -08:00
parent f1faa9c924
commit 03ac84a829
18 changed files with 27 additions and 27 deletions

@@ -139,7 +139,7 @@ Querying Traces for an agent session
The client SDK is not updated to support the new telemetry API. It will be updated soon. You can manually query traces using the following curl command:
``` bash
-curl -X POST 'http://localhost:5000/alpha/telemetry/query-traces' \
+curl -X POST 'http://localhost:8321/alpha/telemetry/query-traces' \
-H 'Content-Type: application/json' \
-d '{
"attribute_filters": [
@@ -167,7 +167,7 @@ The client SDK is not updated to support the new telemetry API. It will be updat
Querying spans for a specific root span id
``` bash
-curl -X POST 'http://localhost:5000/alpha/telemetry/get-span-tree' \
+curl -X POST 'http://localhost:8321/alpha/telemetry/get-span-tree' \
-H 'Content-Type: application/json' \
-d '{ "span_id" : "6cceb4b48a156913", "max_depth": 2 }'
@@ -207,7 +207,7 @@ curl -X POST 'http://localhost:5000/alpha/telemetry/get-span-tree' \
## Example: Save Spans to Dataset
Save all spans for a specific agent session to a dataset.
``` bash
-curl -X POST 'http://localhost:5000/alpha/telemetry/save-spans-to-dataset' \
+curl -X POST 'http://localhost:8321/alpha/telemetry/save-spans-to-dataset' \
-H 'Content-Type: application/json' \
-d '{
"attribute_filters": [
@@ -225,7 +225,7 @@ curl -X POST 'http://localhost:5000/alpha/telemetry/save-spans-to-dataset' \
Save all spans for a specific agent turn to a dataset.
```bash
-curl -X POST 'http://localhost:5000/alpha/telemetry/save-spans-to-dataset' \
+curl -X POST 'http://localhost:8321/alpha/telemetry/save-spans-to-dataset' \
-H 'Content-Type: application/json' \
-d '{
"attribute_filters": [

@@ -402,11 +402,11 @@ Serving API agents
POST /agents/step/get
POST /agents/turn/get
-Listening on ['::', '0.0.0.0']:5000
+Listening on ['::', '0.0.0.0']:8321
INFO: Started server process [2935911]
INFO: Waiting for application startup.
INFO: Application startup complete.
-INFO: Uvicorn running on http://['::', '0.0.0.0']:5000 (Press CTRL+C to quit)
+INFO: Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
INFO: 2401:db00:35c:2d2b:face:0:c9:0:54678 - "GET /models/list HTTP/1.1" 200 OK
```
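
As a quick sanity check after this port change, the same route that appears in the access log above can be queried on the new default port. A minimal sketch, assuming the server from the log is running locally on 8321:

```bash
# Hit the /models/list route shown in the access log above,
# now served on the new default port 8321.
curl -s http://localhost:8321/models/list
```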

@@ -27,7 +27,7 @@ If you don't want to run inference on-device, then you can connect to any hosted
```swift
import LlamaStackClient
let agents = RemoteAgents(url: URL(string: "http://localhost:5000")!)
let agents = RemoteAgents(url: URL(string: "http://localhost:8321")!)
let request = Components.Schemas.CreateAgentTurnRequest(
agent_id: agentId,
messages: [

@@ -41,7 +41,7 @@ The script will first start up TGI server, then start up Llama Stack distribution
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
-INFO: Uvicorn running on http://[::]:5000 (Press CTRL+C to quit)
+INFO: Uvicorn running on http://[::]:8321 (Press CTRL+C to quit)
```
To kill the server
@@ -65,7 +65,7 @@ registry.dell.huggingface.co/enterprise-dell-inference-meta-llama-meta-llama-3.1
#### Start Llama Stack server pointing to TGI server
```
-docker run --network host -it -p 5000:5000 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-tgi --yaml_config /root/my-run.yaml
+docker run --network host -it -p 8321:8321 -v ./run.yaml:/root/my-run.yaml --gpus=all llamastack/distribution-tgi --yaml_config /root/my-run.yaml
```
Make sure that in your `run.yaml` file, your inference provider is pointing to the correct TGI server endpoint. E.g.

@@ -23,8 +23,8 @@ subcommands:
```bash
$ llama-stack-client configure
> Enter the host name of the Llama Stack distribution server: localhost
-> Enter the port number of the Llama Stack distribution server: 5000
-Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:5000
+> Enter the port number of the Llama Stack distribution server: 8321
+Done! You can now use the Llama Stack Client CLI with endpoint http://localhost:8321
```
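
Once configured, subsequent CLI calls go to the saved endpoint on the new default port. A minimal usage sketch, using the `providers list` subcommand documented in the next section:

```bash
# Talks to the endpoint saved by `llama-stack-client configure`,
# i.e. http://localhost:8321 after the change above.
llama-stack-client providers list
```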
### `llama-stack-client providers list`