
Commit 95120cd

fix: update mlflow port to 5001 (#5644)
# What does this PR do?

```console
% lsof -i :5000
COMMAND     PID       USER   FD  TYPE             DEVICE SIZE/OFF NODE NAME
ControlCe 17228 gyliu-cary  14u  IPv4 0x9e69880557ab4c98      0t0  TCP *:commplex-main (LISTEN)
ControlCe 17228 gyliu-cary  15u  IPv6 0x4061eb5123ba5a44      0t0  TCP *:commplex-main (LISTEN)
```

Port 5000 is held by ControlCenter, a macOS system process. As a result, when an end user runs the telemetry test, MLflow fails to start with a port-in-use error. This PR updates MLflow's default port to 5001.
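The `lsof` check above can be automated so the stack fails fast instead of starting MLflow on a busy port. A minimal sketch, assuming bash (for `/dev/tcp`) and an illustrative `MLFLOW_PORT` variable that is not part of this PR:

```shell
#!/usr/bin/env bash
# Sketch: abort early if the chosen MLflow port is already bound
# (e.g. macOS ControlCenter holding 5000).
# MLFLOW_PORT is an illustrative name, not a variable this PR introduces.
MLFLOW_PORT="${MLFLOW_PORT:-5001}"

port_in_use() {
  # Succeeds if something accepts connections on 127.0.0.1:$1.
  # Tries bash's /dev/tcp connect; falls back to lsof when available.
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    return 0
  fi
  command -v lsof >/dev/null 2>&1 && lsof -iTCP:"$1" -sTCP:LISTEN >/dev/null 2>&1
}

if port_in_use "$MLFLOW_PORT"; then
  echo "Port $MLFLOW_PORT is in use; set MLFLOW_PORT to a free port" >&2
  exit 1
fi
```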
1 parent fade64f commit 95120cd

2 files changed

Lines changed: 10 additions & 10 deletions


scripts/telemetry/README.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -27,7 +27,7 @@ This directory contains configuration files and a setup script to deploy a full
 | **Jaeger** | Distributed tracing UI | 16686 |
 | **Prometheus** | Metrics storage and querying | 9090 |
 | **Grafana** | Dashboards and visualization | 3000 |
-| **MLflow** | Trace ingest via OTLP `/v1/traces` (container in this stack) | 5000 |
+| **MLflow** | Trace ingest via OTLP `/v1/traces` (container in this stack) | 5001 |

 ## Pre-requisites
@@ -69,9 +69,9 @@ This will:

 > **MLflow traces**
 >
-> - MLflow is now started as a container in this stack (`mlflow:5000`), OTLP endpoint `/v1/traces`.
-> - Collector exporter `otlphttp/mlflow` points to `http://mlflow:5000/v1/traces`, header `x-mlflow-experiment-id: "1"`. If you need auth, set `MLFLOW_OTEL_HEADERS` (e.g., `Authorization=Bearer <token>`) before running the setup script.
-> - If you prefer an external MLflow, override `MLFLOW_OTEL_ENDPOINT` before running the script (e.g., `http://host.docker.internal:5000`).
+> - MLflow is now started as a container in this stack (`mlflow:5001`), OTLP endpoint `/v1/traces`.
+> - Collector exporter `otlphttp/mlflow` points to `http://mlflow:5001/v1/traces`, header `x-mlflow-experiment-id: "1"`. If you need auth, set `MLFLOW_OTEL_HEADERS` (e.g., `Authorization=Bearer <token>`) before running the setup script.
+> - If you prefer an external MLflow, override `MLFLOW_OTEL_ENDPOINT` before running the script (e.g., `http://host.docker.internal:5001`).

 ### Install OpenTelemetry instrumentation For OGX Server and Client
@@ -131,7 +131,7 @@ Open the following UIs in your browser:
 | Service | URL | Credentials |
 |---|---|---|
-| **Mlflow** (traces) | [http://localhost:5000](http://localhost:5000) | N/A |
+| **Mlflow** (traces) | [http://localhost:5001](http://localhost:5001) | N/A |
 | **Jaeger** (traces) | [http://localhost:16686](http://localhost:16686) | N/A |
 | **Prometheus** (metrics) | [http://localhost:9090](http://localhost:9090) | N/A |
 | **Grafana** (dashboards) | [http://localhost:3000](http://localhost:3000) | admin / admin |
```
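The last README bullet above describes pointing the collector at an external MLflow instead of the bundled container. A minimal sketch of that override, using the variable names the README cites (`<token>` stays a placeholder; the values shown are examples, not defaults this PR sets):

```shell
# Point the collector's otlphttp/mlflow exporter at an external MLflow server.
# Variable names come from the README; the values here are illustrative.
export MLFLOW_OTEL_ENDPOINT="http://host.docker.internal:5001"
# Only needed when the external server requires auth:
export MLFLOW_OTEL_HEADERS="Authorization=Bearer <token>"
# ...then run the setup script, e.g. scripts/telemetry/setup_telemetry.sh
```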

scripts/telemetry/setup_telemetry.sh

Lines changed: 5 additions & 5 deletions

```diff
@@ -118,15 +118,15 @@ $CONTAINER_RUNTIME run -d --name jaeger \
 echo "📒 Starting MLflow..."
 $CONTAINER_RUNTIME run -d --name mlflow \
   --network llama-telemetry \
-  -p 5000:5000 \
+  -p 5001:5001 \
   -v "$MLFLOW_BACKEND_STORE:/mlflow/mlflow.db" \
   -v "$MLFLOW_ARTIFACT_ROOT:/mlflow/artifacts" \
   ghcr.io/mlflow/mlflow:latest \
   mlflow server \
     --backend-store-uri sqlite:////mlflow/mlflow.db \
     --default-artifact-root /mlflow/artifacts \
-    --host 0.0.0.0 --port 5000 \
-    --allowed-hosts localhost,localhost:5000,127.0.0.1,127.0.0.1:5000,host.docker.internal,host.docker.internal:5000
+    --host 0.0.0.0 --port 5001 \
+    --allowed-hosts localhost,localhost:5001,127.0.0.1,127.0.0.1:5001,host.docker.internal,host.docker.internal:5001

 # Add host aliases so the Collector can reach host services (e.g., MLflow)
 ADD_HOST_OPT=""
```
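The `--allowed-hosts` value in the hunk above hard-codes the port six times, which is exactly how a stale-port bug like this one creeps in. A small POSIX-shell sketch of deriving the list from a single variable (`MLFLOW_PORT` and `ALLOWED_HOSTS` are illustrative names, not variables the script defines):

```shell
# Sketch: build the --allowed-hosts list from one port variable so a future
# port change edits one line instead of six host:port pairs.
# MLFLOW_PORT and ALLOWED_HOSTS are illustrative names.
MLFLOW_PORT=5001
ALLOWED_HOSTS=""
for h in localhost 127.0.0.1 host.docker.internal; do
  # Append "host,host:port", comma-separated from the previous entries.
  ALLOWED_HOSTS="${ALLOWED_HOSTS:+$ALLOWED_HOSTS,}$h,$h:$MLFLOW_PORT"
done
echo "$ALLOWED_HOSTS"
# → localhost,localhost:5001,127.0.0.1,127.0.0.1:5001,host.docker.internal,host.docker.internal:5001
```

The result matches the `--allowed-hosts` string in the diff byte for byte, so it could be passed as `--allowed-hosts "$ALLOWED_HOSTS"`.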
```diff
@@ -195,7 +195,7 @@ echo ""
 echo "✅ Telemetry stack is ready!"
 echo ""
 echo "🌐 Service URLs:"
-echo " MLflow: http://localhost:5000"
+echo " MLflow: http://localhost:5001"
 echo " Jaeger UI: http://localhost:16686"
 echo " Prometheus: http://localhost:9090"
 echo " Grafana: http://localhost:3000 (admin/admin)"
@@ -215,7 +215,7 @@ echo " 5. Check Prometheus for metrics: http://localhost:9090"
 echo " 6. Set up Grafana dashboards: http://localhost:3000"
 echo ""
 echo "🔍 To test the setup, run:"
-echo " curl -X POST http://localhost:5000/v1/inference/chat/completions \\"
+echo " curl -X POST http://localhost:5001/v1/inference/chat/completions \\"
 echo " -H 'Content-Type: application/json' \\"
 echo " -d '{\"model_id\": \"your-model\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}'"
```
