16 changes: 16 additions & 0 deletions open-telemetry/signoz/Dockerfile
@@ -0,0 +1,16 @@
FROM dailyco/pipecat-base:latest

# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1

# Copy from the cache instead of linking since it's a mounted volume
ENV UV_LINK_MODE=copy

# Install the project's dependencies using the lockfile and settings
RUN --mount=type=cache,target=/root/.cache/uv \
    --mount=type=bind,source=uv.lock,target=uv.lock \
    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
    uv sync --locked --no-install-project --no-dev

# Copy the application code
COPY ./bot.py bot.py
61 changes: 61 additions & 0 deletions open-telemetry/signoz/README.md
@@ -0,0 +1,61 @@
# Pipecat Monitoring with SigNoz

This demo shows [SigNoz](https://signoz.io/) observability integration with Pipecat via OpenTelemetry, allowing you to visualize traces, logs, and metrics from your Pipecat application usage.

## Setup Instructions

### Step 1: Clone this demo voice agent project and set up dependencies

```bash
git clone https://github.com/pipecat-ai/pipecat-examples.git
cd pipecat-examples/open-telemetry/signoz
uv sync
```

### Step 2: Set Up Credentials

Copy `env.example` to `.env` and fill in the required keys:

- `DEEPGRAM_API_KEY`
- `OPENAI_API_KEY`
- `CARTESIA_API_KEY`


### Step 3: Add Automatic Instrumentation

```bash
uv pip install opentelemetry-distro opentelemetry-exporter-otlp
uv run opentelemetry-bootstrap -a requirements | uv pip install --requirement -
```

### Step 4: Run your application with auto-instrumentation


```bash
OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your_ingestion_key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
<your_run_command with opentelemetry-instrument>
```

- Replace `<service_name>` with the name of your service
- Set `<region>` to match your SigNoz Cloud [region](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint)
- Replace `<your_ingestion_key>` with your SigNoz [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/)
- Replace `<your_run_command>` with the command you use to run your application. In this case that is: `uv run opentelemetry-instrument python bot.py`
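
For example, with the placeholders filled in (the service name, `us` region, and ingestion key below are purely illustrative — substitute your own values):

```bash
OTEL_RESOURCE_ATTRIBUTES="service.name=pipecat-signoz-demo" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.us.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
uv run opentelemetry-instrument python bot.py
```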


> Note: Using self-hosted SigNoz? Most steps are identical. To adapt this guide, update the endpoint and
> remove the ingestion key header as shown in [Cloud → Self-Hosted](https://signoz.io/docs/ingestion/cloud-vs-self-hosted/#cloud-to-self-hosted).

Open http://localhost:7860 in your browser and click `Connect` to start talking to your bot.

You will now be able to see traces, logs, and metrics from your Pipecat usage in your SigNoz platform.
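
If no data shows up, a common cause is a missing or empty exporter variable in the shell that launched the bot. As a quick sanity check, a small hypothetical helper like the one below (not part of this example's code) can report which required variables are unset:

```python
import os

# Variables the Step 4 command must export for traces to reach SigNoz.
REQUIRED_OTEL_VARS = [
    "OTEL_EXPORTER_OTLP_ENDPOINT",
    "OTEL_EXPORTER_OTLP_HEADERS",
    "OTEL_TRACES_EXPORTER",
]


def missing_otel_vars(env=os.environ):
    """Return the names of required OTel variables that are unset or empty."""
    return [name for name in REQUIRED_OTEL_VARS if not env.get(name)]


if __name__ == "__main__":
    missing = missing_otel_vars()
    print("Missing:", missing or "none — exporter env looks complete")
```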

## References

- [SigNoz PipeCat Documentation](https://signoz.io/docs/pipecat-monitoring/)
156 changes: 156 additions & 0 deletions open-telemetry/signoz/bot.py
@@ -0,0 +1,156 @@
#
# Copyright (c) 2024–2025, Daily
#
# SPDX-License-Identifier: BSD 2-Clause License
#

"""Pipecat Quickstart Example.

The example runs a simple voice AI bot that you can connect to using your
browser and speak with it. You can also deploy this bot to Pipecat Cloud.

Required AI services:
- Deepgram (Speech-to-Text)
- OpenAI (LLM)
- Cartesia (Text-to-Speech)

Run the bot using::

uv run bot.py
"""

import os

from dotenv import load_dotenv
from loguru import logger

print("🚀 Starting Pipecat bot...")
print("⏳ Loading models and imports (20 seconds, first run only)\n")

logger.info("Loading Local Smart Turn Analyzer V3...")
from pipecat.audio.turn.smart_turn.local_smart_turn_v3 import LocalSmartTurnAnalyzerV3

logger.info("✅ Local Smart Turn Analyzer V3 loaded")
logger.info("Loading Silero VAD model...")
from pipecat.audio.vad.silero import SileroVADAnalyzer

logger.info("✅ Silero VAD model loaded")

from pipecat.audio.vad.vad_analyzer import VADParams
from pipecat.frames.frames import LLMRunFrame

logger.info("Loading pipeline components...")
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.aggregators.llm_context import LLMContext
from pipecat.processors.aggregators.llm_response_universal import LLMContextAggregatorPair
from pipecat.processors.frameworks.rtvi import RTVIConfig, RTVIObserver, RTVIProcessor
from pipecat.runner.types import RunnerArguments
from pipecat.runner.utils import create_transport
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.services.openai.llm import OpenAILLMService
from pipecat.transports.base_transport import BaseTransport, TransportParams
from pipecat.transports.daily.transport import DailyParams



**Review comment (Contributor):** Where do you set up your OTel exporter? Other examples use:

```python
IS_TRACING_ENABLED = bool(os.getenv("ENABLE_TRACING"))

# Initialize tracing if enabled
if IS_TRACING_ENABLED:
    # Create the exporter
    otlp_exporter = OTLPSpanExporter(
        endpoint=os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4317"),
        insecure=True,
    )

    # Set up tracing with the exporter
    setup_tracing(
        service_name="pipecat-demo",
        exporter=otlp_exporter,
        console_export=bool(os.getenv("OTEL_CONSOLE_EXPORT")),
    )
    logger.info("OpenTelemetry tracing initialized")
```

**Reply (@gkarthi-signoz, Author, Jan 13, 2026):** The posted instructions use OpenTelemetry's Python no-code auto-instrumentation, which configures instrumentation from environment variables alone. `setup_tracing` is not required because running the agent under `opentelemetry-instrument` automatically sets up the tracer provider, so it never has to be set manually in code:

```bash
OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your_ingestion_key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
uv run opentelemetry-instrument python bot.py
```

This automatically sets up the tracer provider, so when Pipecat exports traces they appear in the configured OTel backend (screenshot attached to the comment: traces from the bot shown in SigNoz). This is shown in the README.

Thanks!

logger.info("✅ All components loaded successfully!")

load_dotenv(override=True)


async def run_bot(transport: BaseTransport, runner_args: RunnerArguments):
    logger.info("Starting bot")

    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))

    tts = CartesiaTTSService(
        api_key=os.getenv("CARTESIA_API_KEY"),
        voice_id="71a7ad14-091c-4e8e-a314-022ece01c121",  # British Reading Lady
    )

    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))

    messages = [
        {
            "role": "system",
            "content": "You are a friendly AI assistant. Respond naturally and keep your answers conversational.",
        },
    ]

    context = LLMContext(messages)
    context_aggregator = LLMContextAggregatorPair(context)

    rtvi = RTVIProcessor(config=RTVIConfig(config=[]))

    pipeline = Pipeline(
        [
            transport.input(),  # Transport user input
            rtvi,  # RTVI processor
            stt,
            context_aggregator.user(),  # User responses
            llm,  # LLM
            tts,  # TTS
            transport.output(),  # Transport bot output
            context_aggregator.assistant(),  # Assistant spoken responses
        ]
    )

    task = PipelineTask(
        pipeline,
        params=PipelineParams(
            enable_metrics=True,
            enable_usage_metrics=True,
        ),
        enable_tracing=True,  # Enable tracing for this task
        enable_turn_tracking=True,
        observers=[RTVIObserver(rtvi)],
    )

    @transport.event_handler("on_client_connected")
    async def on_client_connected(transport, client):
        logger.info("Client connected")
        # Kick off the conversation.
        messages.append({"role": "system", "content": "Say hello and briefly introduce yourself."})
        await task.queue_frames([LLMRunFrame()])

    @transport.event_handler("on_client_disconnected")
    async def on_client_disconnected(transport, client):
        logger.info("Client disconnected")
        await task.cancel()

    runner = PipelineRunner(handle_sigint=runner_args.handle_sigint)

    await runner.run(task)


async def bot(runner_args: RunnerArguments):
    """Main bot entry point for the bot starter."""

    transport_params = {
        "daily": lambda: DailyParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
            turn_analyzer=LocalSmartTurnAnalyzerV3(),
        ),
        "webrtc": lambda: TransportParams(
            audio_in_enabled=True,
            audio_out_enabled=True,
            vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
            turn_analyzer=LocalSmartTurnAnalyzerV3(),
        ),
    }

    transport = await create_transport(runner_args, transport_params)

    await run_bot(transport, runner_args)


if __name__ == "__main__":
    from pipecat.runner.run import main

    main()
6 changes: 6 additions & 0 deletions open-telemetry/signoz/env.example
@@ -0,0 +1,6 @@
DEEPGRAM_API_KEY=your_deepgram_api_key
OPENAI_API_KEY=your_openai_api_key
CARTESIA_API_KEY=your_cartesia_api_key

# Optional: Connect via Daily WebRTC locally
DAILY_API_KEY=your_daily_api_key
11 changes: 11 additions & 0 deletions open-telemetry/signoz/pcc-deploy.toml
@@ -0,0 +1,11 @@
agent_name = "quickstart"
image = "your_username/quickstart:0.1"
secret_set = "quickstart-secrets"
agent_profile = "agent-1x"

# RECOMMENDED: Set an image pull secret:
# https://docs.pipecat.ai/deployment/pipecat-cloud/fundamentals/secrets#image-pull-secrets
# image_credentials = "your_image_pull_secret"

[scaling]
min_agents = 1
20 changes: 20 additions & 0 deletions open-telemetry/signoz/pyproject.toml
@@ -0,0 +1,20 @@
[project]
name = "pipecat-quickstart"
version = "0.1.0"
description = "Quickstart example for building voice AI bots with Pipecat"
requires-python = ">=3.10"
dependencies = [
"pipecat-ai[webrtc,daily,silero,deepgram,openai,cartesia,local-smart-turn-v3,runner]",
"pipecat-ai-cli"
]

[dependency-groups]
dev = [
"pyright>=1.1.404,<2",
"ruff>=0.12.11,<1",
]

[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["I"]