Add Elasticsearch Logging Tutorial #11761

Merged
250 changes: 250 additions & 0 deletions docs/my-website/docs/tutorials/elasticsearch_logging.md
> **Contributor:** can you add a screenshot of what the user can expect to see if this works?

> **Contributor:** bump on this? @colesmcintosh

@@ -0,0 +1,250 @@
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

# Elasticsearch Logging with LiteLLM

Send your LLM requests, responses, costs, and performance data to Elasticsearch for analytics and monitoring using OpenTelemetry.

![Elasticsearch Demo](../../img/elasticsearch_demo.png)

> **Contributor:** this is wrong - it should render the image here @colesmcintosh
>
> Use the Image import like here -
>
> `<Image img={require('../../img/arize.png')} />`


## Quick Start

### 1. Start Elasticsearch

```bash
# Using Docker (simplest)
docker run -d \
  --name elasticsearch \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  docker.elastic.co/elasticsearch/elasticsearch:8.18.2
```
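
Before moving on, it's worth confirming Elasticsearch is accepting connections. A quick sanity check, assuming the default port mapping above:

```bash
# A healthy node answers with a JSON blob including cluster name and version.
curl "localhost:9200"

# Cluster health: "green" or "yellow" is expected for a single node.
curl "localhost:9200/_cluster/health?pretty"
```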

### 2. Set up OpenTelemetry Collector

Create an OTEL collector configuration file `otel_config.yaml`:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 1024

exporters:
  debug:
    verbosity: detailed
  otlphttp/elastic:
    endpoint: "http://localhost:9200"
    headers:
      "Content-Type": "application/json"

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp/elastic]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp/elastic]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp/elastic]
```
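
Recent collector releases also include a `validate` subcommand that lints the config without starting any pipelines; if your image supports it, this catches YAML mistakes early:

```bash
# Validate the config file only; nothing is started.
# If your collector build predates the `validate` subcommand, skip this step.
docker run --rm \
  -v $(pwd)/otel_config.yaml:/etc/otel-collector-config.yaml \
  otel/opentelemetry-collector:latest \
  validate --config=/etc/otel-collector-config.yaml
```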

Start the OpenTelemetry collector:
```bash
docker run -p 4317:4317 -p 4318:4318 \
  -v $(pwd)/otel_config.yaml:/etc/otel-collector-config.yaml \
  otel/opentelemetry-collector:latest \
  --config=/etc/otel-collector-config.yaml
```
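
The command above runs in the foreground, so you should see the collector log its active receivers and pipelines on startup. If you prefer to run it detached, add `-d --name otel-collector` to the command and tail the logs instead:

```bash
# Requires the collector to have been started with `-d --name otel-collector`.
docker logs -f otel-collector
```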

### 3. Install OpenTelemetry Dependencies

```bash
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
```
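
A quick way to confirm the SDK installed cleanly:

```bash
python -c "import opentelemetry.sdk; print('OpenTelemetry SDK OK')"
```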

### 4. Configure LiteLLM

<Tabs>
<TabItem value="proxy" label="LiteLLM Proxy">

Create a `config.yaml` file:

```yaml
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: openai/gpt-4.1
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["otel"]

general_settings:
  otel: true
```

Set environment variables and start the proxy:
```bash
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
litellm --config config.yaml
```
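
Once the proxy is up (port 4000 by default), you can probe its liveliness endpoint before sending real traffic; LiteLLM exposes `/health/liveliness` for this:

```bash
# Should return a simple "alive" response if the proxy started correctly.
curl "http://localhost:4000/health/liveliness"
```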

</TabItem>
<TabItem value="python-sdk" label="Python SDK">

Configure OpenTelemetry in your Python code:

```python
import litellm
import os

# Configure OpenTelemetry
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:4317"

# Enable OTEL logging
litellm.callbacks = ["otel"]

# Make your LLM calls
response = litellm.completion(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello, world!"}]
)
```

</TabItem>
</Tabs>

### 5. Test the Integration

Make a test request to verify logging is working:

<Tabs>
<TabItem value="curl-proxy" label="Test Proxy">

```bash
curl -X POST "http://localhost:4000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Hello from LiteLLM!"}]
  }'
```

</TabItem>
<TabItem value="python-test" label="Test Python SDK">

```python
import litellm

response = litellm.completion(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Hello from LiteLLM!"}],
    user="test-user"
)
print("Response:", response.choices[0].message.content)
```

</TabItem>
</Tabs>

### 6. Verify It's Working

```bash
# Check if traces are being created in Elasticsearch
curl "localhost:9200/_search?pretty&size=1"
```

You should see OpenTelemetry trace data with structured fields for your LLM requests.
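
If the search comes back empty, listing indices helps narrow down whether the exporter is writing anything at all. The exact index names depend on how the collector exports to Elasticsearch, so look for indices whose document counts grow after a test request:

```bash
# Show all indices with document counts and sizes.
curl "localhost:9200/_cat/indices?v"
```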

### 7. Visualize in Kibana

Start Kibana to visualize your LLM telemetry data:

```bash
docker run -d --name kibana --link elasticsearch:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:8.18.2
```
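
Kibana can take a minute or two to boot; its status API reports when it is ready to serve the UI:

```bash
# Poll until the overall status reports "available".
curl "localhost:5601/api/status"
```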

Open Kibana at http://localhost:5601 and create an index pattern for your LiteLLM traces:

![Elasticsearch Demo](../../img/elasticsearch_demo.png)

## Production Setup

**With Elasticsearch Cloud:**

Update your `otel_config.yaml`:
```yaml
exporters:
  otlphttp/elastic:
    endpoint: "https://your-deployment.es.region.cloud.es.io"
    headers:
      "Authorization": "Bearer your-api-key"
      "Content-Type": "application/json"
```
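
To rule out credential problems before involving the collector, you can hit the deployment directly with the same header. The URL and key below are the placeholders from the config above; note that some Elasticsearch API keys expect an `ApiKey` scheme rather than `Bearer`, so adjust if you get a 401:

```bash
# Substitute your real deployment URL and API key.
curl -H "Authorization: Bearer your-api-key" \
  "https://your-deployment.es.region.cloud.es.io"
```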

**Docker Compose (Full Stack):**
```yaml
# docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.18.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"

  otel-collector:
    image: otel/opentelemetry-collector:latest
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel_config.yaml:/etc/otel-collector-config.yaml
    ports:
      - "4317:4317"
      - "4318:4318"
    depends_on:
      - elasticsearch

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    command: ["--config", "/app/config.yaml"]
    volumes:
      - ./config.yaml:/app/config.yaml
    depends_on:
      - otel-collector
```

**config.yaml:**
```yaml
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: openai/gpt-4.1
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  callbacks: ["otel"]

general_settings:
  master_key: sk-1234
  otel: true
```
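
With all three files (`docker-compose.yml`, `otel_config.yaml`, `config.yaml`) in one directory, the stack comes up with a single command, and the step 5 test request works unchanged against the proxy using the `master_key` above:

```bash
# Bring up Elasticsearch, the collector, and LiteLLM together.
export OPENAI_API_KEY="sk-..."   # your real OpenAI key
docker compose up -d

# Test request through the proxy, authenticated with the master key.
curl -X POST "http://localhost:4000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{"model": "gpt-4.1", "messages": [{"role": "user", "content": "Compose test"}]}'
```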
Binary file added docs/my-website/img/elasticsearch_demo.png
1 change: 1 addition & 0 deletions docs/my-website/sidebars.js
@@ -525,6 +525,7 @@ const sidebars = {
"tutorials/prompt_caching",
"tutorials/tag_management",
'tutorials/litellm_proxy_aporia',
"tutorials/elasticsearch_logging",
"tutorials/gemini_realtime_with_audio",
"tutorials/claude_responses_api",
{