signoz
Use this skill when working with SigNoz - open-source observability platform for application monitoring, distributed tracing, log management, metrics, alerts, and dashboards. Triggers on SigNoz setup, OpenTelemetry instrumentation for SigNoz, sending traces/logs/metrics to SigNoz, creating SigNoz dashboards, configuring SigNoz alerts, exception monitoring, and migrating from Datadog/Grafana/New Relic to SigNoz.
signoz is a production-ready AI agent skill for claude-code, gemini-cli, openai-codex, and MCP. It covers working with SigNoz - the open-source observability platform for application monitoring, distributed tracing, log management, metrics, alerts, and dashboards.
Quick Facts
| Field | Value |
|---|---|
| Category | monitoring |
| Version | 0.1.0 |
| Platforms | claude-code, gemini-cli, openai-codex, mcp |
| License | MIT |
How to Install
- Make sure you have Node.js installed on your machine.
- Run the following command in your terminal:
npx skills add AbsolutelySkilled/AbsolutelySkilled --skill signoz

- The signoz skill is now available in your AI coding agent (Claude Code, Gemini CLI, OpenAI Codex, etc.).
Overview
SigNoz is an open-source observability platform that unifies traces, metrics, and logs in a single backend powered by ClickHouse. Built natively on OpenTelemetry, it provides APM dashboards, distributed tracing with flamegraphs, log management with pipelines, custom metrics, alerting across all signals, and exception monitoring - all without vendor lock-in. SigNoz is available as a managed cloud service or self-hosted via Docker or Kubernetes.
Tags
signoz observability opentelemetry tracing logs metrics
Platforms
- claude-code
- gemini-cli
- openai-codex
- mcp
Frequently Asked Questions
What is signoz?
Use this skill when working with SigNoz - open-source observability platform for application monitoring, distributed tracing, log management, metrics, alerts, and dashboards. Triggers on SigNoz setup, OpenTelemetry instrumentation for SigNoz, sending traces/logs/metrics to SigNoz, creating SigNoz dashboards, configuring SigNoz alerts, exception monitoring, and migrating from Datadog/Grafana/New Relic to SigNoz.
How do I install signoz?
Run npx skills add AbsolutelySkilled/AbsolutelySkilled --skill signoz in your terminal. The skill will be immediately available in your AI coding agent.
What AI agents support signoz?
This skill works with claude-code, gemini-cli, openai-codex, mcp. Install it once and use it across any supported AI coding agent.
Maintainers
Generated from AbsolutelySkilled
SKILL.md
SigNoz
SigNoz is an open-source observability platform that unifies traces, metrics, and logs in a single backend powered by ClickHouse. Built natively on OpenTelemetry, it provides APM dashboards, distributed tracing with flamegraphs, log management with pipelines, custom metrics, alerting across all signals, and exception monitoring - all without vendor lock-in. SigNoz is available as a managed cloud service or self-hosted via Docker or Kubernetes.
When to use this skill
Trigger this skill when the user:
- Wants to set up or configure SigNoz (cloud or self-hosted)
- Needs to instrument an application to send traces, logs, or metrics to SigNoz
- Asks about OpenTelemetry Collector configuration for SigNoz
- Wants to create dashboards, panels, or visualizations in SigNoz
- Needs to configure alerts (metric, log, trace, or anomaly-based) in SigNoz
- Asks about SigNoz query builder syntax, aggregations, or filters
- Wants to monitor exceptions or correlate traces with logs in SigNoz
- Is migrating from Datadog, Grafana, New Relic, or ELK to SigNoz
Do NOT trigger this skill for:
- General observability concepts without SigNoz context (use the `observability` skill)
- OpenTelemetry instrumentation not targeting SigNoz as the backend
Setup & authentication
SigNoz Cloud
Sign up at https://signoz.io/teams/ to get a cloud instance. You will receive:
- A region endpoint (e.g. `ingest.us.signoz.cloud:443`)
- A `SIGNOZ_INGESTION_KEY` for authenticating data
Self-hosted deployment
# Docker Standalone (quickest for local/dev)
git clone -b main https://github.com/SigNoz/signoz.git && cd signoz/deploy/
docker compose -f docker/clickhouse-setup/docker-compose.yaml up -d
# Kubernetes via Helm
helm repo add signoz https://charts.signoz.io
helm install my-release signoz/signoz

Self-hosted supports Docker Standalone, Docker Swarm, Kubernetes (AWS/GCP/Azure/DigitalOcean/OpenShift), and native Linux installation.
Environment variables
# For cloud - set these in your OTel Collector or SDK exporter config
SIGNOZ_INGESTION_KEY=your-ingestion-key
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<region>.signoz.cloud:443
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<your-ingestion-key>

Core concepts
SigNoz uses OpenTelemetry as its sole data ingestion layer. All telemetry (traces, metrics, logs) flows through an OTel Collector which receives data via OTLP (gRPC on port 4317, HTTP on 4318), processes it with batching and resource detection, and exports it to SigNoz's ClickHouse storage backend.
The data model has three pillars:
- Traces - Distributed request flows visualized as flamegraphs and Gantt charts. Each trace contains spans with attributes, events, and status codes.
- Metrics - Time-series data from application instrumentation (p99 latency, error rates, Apdex) and infrastructure (CPU, memory, disk, network via hostmetrics receiver).
- Logs - Structured log records ingested via OTel SDKs, FluentBit, Logstash, or file-based collection. Processed through log pipelines for parsing and enrichment.
All three signals correlate - traces link to logs via trace IDs, and exceptions embed in spans. The Query Builder provides a unified interface for filtering, aggregating, and visualizing across all signal types.
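As a loose illustration of that linkage, OpenTelemetry propagates a W3C `traceparent` header between services, and the 128-bit trace ID embedded in it is what lets a backend like SigNoz stitch spans and logs into one trace. A stdlib-only sketch of the header's shape (the helper name is made up for this example):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = secrets.token_hex(16)  # 128-bit trace ID -> 32 hex chars
    span_id = secrets.token_hex(8)    # 64-bit span ID -> 16 hex chars
    return f"00-{trace_id}-{span_id}-01"  # "01" = sampled flag
```

In practice the SDKs generate and propagate this for you; the sketch only shows why a single ID can tie signals together.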
Common tasks
Instrument a Node.js app
npm install @opentelemetry/api \
@opentelemetry/sdk-node \
@opentelemetry/auto-instrumentations-node \
  @opentelemetry/exporter-trace-otlp-grpc

const { NodeSDK } = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require("@opentelemetry/auto-instrumentations-node");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-grpc");
const sdk = new NodeSDK({
traceExporter: new OTLPTraceExporter({
url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317",
}),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();

Supported languages: Java, Python, Go, .NET, Ruby, PHP, Rust, Elixir, C++, Deno, Swift, plus mobile (React Native, Android, iOS, Flutter) and frontend.
Configure the OTel Collector for SigNoz
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
hostmetrics:
collection_interval: 60s
scrapers:
cpu: {}
memory: {}
disk: {}
load: {}
network: {}
filesystem: {}
processors:
batch:
send_batch_size: 1000
timeout: 10s
resourcedetection:
detectors: [env, system]
system:
hostname_sources: [os]
exporters:
otlp:
endpoint: "ingest.<region>.signoz.cloud:443"
tls:
insecure: false
headers:
signoz-ingestion-key: "${SIGNOZ_INGESTION_KEY}"
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch, resourcedetection]
exporters: [otlp]
metrics:
receivers: [otlp, hostmetrics]
processors: [batch, resourcedetection]
exporters: [otlp]
logs:
receivers: [otlp]
processors: [batch, resourcedetection]
exporters: [otlp]

For self-hosted, replace the endpoint with your SigNoz instance URL and remove the `headers` section.
Send logs to SigNoz
Three approaches:
- OTel SDK - Instrument application code directly with OpenTelemetry logging SDK
- File-based - Use FluentBit or Logstash to tail log files and forward via OTLP
- Stdout/collector - Pipe container stdout to the OTel Collector's filelog receiver
# FluentBit output to SigNoz via OTLP
[OUTPUT]
Name opentelemetry
Match *
Host ingest.<region>.signoz.cloud
Port 443
Header signoz-ingestion-key <your-key>
Tls On
Tls.verify On

Log pipelines in SigNoz can parse, transform, enrich, drop unwanted logs, and scrub PII before storage.
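A toy stdlib analogue of such a pipeline, showing the parse/drop/scrub stages (function and field names are illustrative, not the SigNoz pipeline API):

```python
import json
import re

def run_pipeline(raw_line: str):
    """Parse a JSON log line, drop DEBUG noise, and scrub email-shaped PII."""
    record = json.loads(raw_line)          # parse step (like a json_parser operator)
    if record.get("level") == "DEBUG":
        return None                        # drop step: discard unwanted logs
    # scrub step: redact anything email-shaped before storage
    record["msg"] = re.sub(r"[\w.+-]+@[\w.-]+", "<redacted>", record["msg"])
    return record
```

In SigNoz itself these stages are configured in the UI or pipeline YAML rather than written as code.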
Create dashboards and panels
Navigate to Dashboards > New Dashboard. Add panels using the Query Builder:
- Select signal type (metrics, logs, or traces)
- Add filters (e.g. `service.name = my-app`)
- Choose aggregation (Count, Avg, P99, Rate, etc.)
- Group by attributes (e.g. `method`, `status_code`)
- Set visualization type (time series, bar, pie chart, table)
Use {{attributeName}} in legend format for dynamic labels. Multiple queries
can be combined with mathematical functions (log, sqrt, exp, time shift).
SigNoz provides pre-built dashboard JSON templates on GitHub that can be imported.
Configure alerts
SigNoz supports six alert types:
- Metrics-based - threshold on any metric
- Log-based - patterns, counts, or attribute values
- Trace-based - latency or error rate thresholds
- Anomaly-based - automatic anomaly detection
- Exceptions-based - exception count or type thresholds
- Apdex alerts - application performance index
Notification channels include Slack, PagerDuty, email, and webhooks. Alerts support routing policies and planned maintenance windows. A Terraform provider is available for infrastructure-as-code alert management.
Monitor exceptions
Exceptions are auto-recorded for Python, Java, Ruby, and JavaScript. For other languages, record manually:
from opentelemetry import trace
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("operation") as span:
try:
risky_operation()
except Exception as ex:
span.record_exception(ex)
span.set_status(trace.StatusCode.ERROR, str(ex))
raise

Exceptions group by service name, type, and message. Enable `low_cardinal_exception_grouping` in the clickhousetraces exporter to group only by service and type (reduces high cardinality from dynamic messages).
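For self-hosted deployments, this is an exporter setting in the SigNoz OTel collector config. A sketch of where the flag sits (the `datasource` value is a placeholder; verify the exact keys against your SigNoz version):

```yaml
exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/signoz_traces
    low_cardinal_exception_grouping: true  # group exceptions by service + type only
```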
Query with the Query Builder
# Filter: service.name = demo-app AND severity_text = ERROR
# Aggregation: Count
# Group by: status_code
# Aggregate every: 60s
# Order by: timestamp DESC
# Limit: 100

Supported aggregations: Count, Count Distinct, Sum, Avg, Min, Max, P05-P99, Rate, Rate Sum, Rate Avg, Rate Min, Rate Max. Filters use `=`, `!=`, `IN`, `NOT_IN` operators combined with AND logic.
Advanced functions: EWMA smoothing (3/5/7 periods), time shift comparison, cut-off min/max thresholds, and chained function application.
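For intuition, an EWMA with period N is commonly computed with smoothing factor alpha = 2/(N+1); the sketch below uses that convention (SigNoz's exact internal formula is not specified here):

```python
def ewma(values, period=5):
    """Exponentially weighted moving average with alpha = 2 / (period + 1)."""
    alpha = 2 / (period + 1)
    smoothed = []
    state = values[0]  # seed with the first observation
    for v in values:
        state = alpha * v + (1 - alpha) * state
        smoothed.append(state)
    return smoothed
```

Larger periods give smaller alpha, i.e. heavier smoothing - which is why the 7-period variant damps noisy series more than the 3-period one.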
Gotchas
- **OTel SDK must be initialized before any other imports** - If application code imports a DB driver, HTTP client, or framework before the OTel SDK is initialized, those libraries will not be auto-instrumented. In Node.js, use `--require ./instrument.js` to load the SDK before the app. In Python, run under `opentelemetry-instrument` or initialize the OTel SDK at the very top of the entry point.
- **gRPC (4317) is blocked by many cloud firewalls by default** - Outbound gRPC traffic on port 4317 is frequently blocked by corporate firewalls and cloud security groups. If traces are not arriving, switch the exporter to OTLP/HTTP on port 4318 (the HTTP variant of `OTLPTraceExporter` with an `http://` URL) as a first debug step.
- **Missing `service.name` attribute makes all data unidentifiable** - If `OTEL_SERVICE_NAME` is not set and the SDK is not explicitly configured with a service name, all telemetry arrives in SigNoz grouped under a generic name or `unknown_service`. Set `OTEL_SERVICE_NAME` in your environment or SDK config before deploying.
- **Self-hosted ClickHouse storage fills up silently** - SigNoz self-hosted deployments do not have built-in disk alerting. ClickHouse will fill available disk and stop accepting writes without warning. Configure a disk utilization alert on the host and set a data retention policy in SigNoz settings (default is 15 days for traces).
- **High-cardinality span attributes break dashboards** - Adding user IDs, request IDs, or raw query strings as span attribute keys (not values) creates unbounded cardinality in ClickHouse and makes dashboards unusable. Cardinality should live in attribute values, not keys. Use a fixed set of keys like `user.id` and `request.id` with variable values.
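A minimal illustration of the keys-vs-values rule (function names are invented for this example):

```python
def bounded_attrs(user_id: str, request_id: str) -> dict:
    """Good: a fixed key set - cardinality lives in the values."""
    return {"user.id": user_id, "request.id": request_id}

def unbounded_attrs(user_id: str) -> dict:
    """Bad: one new key per user - unbounded attribute growth in ClickHouse."""
    return {f"user.{user_id}.active": True}
```

With the first shape, every span shares the same two attribute keys no matter how many users exist; with the second, the key space grows with the user base.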
Error handling
| Error | Cause | Resolution |
|---|---|---|
| No data in SigNoz after setup | OTel Collector not reaching SigNoz endpoint | Add a debug exporter to the collector config to verify telemetry is received locally; check endpoint URL and ingestion key |
| Port 4317/4318 already in use | Another process bound to OTLP ports | Stop conflicting process or change collector receiver ports |
| `context deadline exceeded` | Network/firewall blocking gRPC to SigNoz cloud | Verify outbound 443 is open; check TLS settings in exporter config |
| High cardinality exceptions | Dynamic exception messages creating too many groups | Enable low_cardinal_exception_grouping in clickhousetraces exporter |
| Missing host metrics | hostmetrics receiver not configured or Docker volume not mounted | Add hostmetrics receiver with scrapers; set root_path: /hostfs for Docker deployments |
References
For detailed content on specific sub-domains, read the relevant file from the
references/ folder:
- `references/instrumentation.md` - Language-specific instrumentation guides and setup patterns (read when instrumenting a specific language)
- `references/otel-collector.md` - Advanced OTel Collector configuration, receivers, processors, and exporters (read when customizing the collector pipeline)
- `references/query-builder.md` - Full query builder syntax, aggregation functions, and advanced analysis features (read when building complex queries or dashboards)
Only load a references file if the current task requires it - they are long and will consume context.
References
instrumentation.md
SigNoz Instrumentation Guide
Supported languages and frameworks
Backend languages (auto + manual instrumentation)
| Language | Auto-instrumentation | Manual | Notable frameworks |
|---|---|---|---|
| Java | Yes | Yes | Spring Boot, Quarkus, JBoss, Tomcat, WildFly |
| Python | Yes | Yes | Django, Flask, FastAPI |
| Node.js | Yes | Yes | Express, NestJS, Next.js, Nuxt.js |
| Go | No (manual only) | Yes | gin, echo, gRPC |
| .NET | Yes (NuGet-based) | Yes | ASP.NET Core |
| Ruby | Yes | Yes | Rails, Sinatra |
| PHP | Yes | Yes | Laravel, Symfony |
| Rust | No (manual only) | Yes | actix-web, axum |
| Elixir | Yes | Yes | Phoenix |
| C++ | No (manual only) | Yes | - |
| Deno | No (manual only) | Yes | - |
| Swift | No (manual only) | Yes | - |
Mobile platforms
| Platform | Framework | Instrumentation type |
|---|---|---|
| Android | Java, Kotlin | Auto + Manual |
| iOS | SwiftUI | Manual |
| Cross-platform | React Native | Auto |
| Cross-platform | Flutter | Auto |
Frontend and edge
- Frontend monitoring - Browser-based tracing via OTel JS SDK
- Cloudflare Workers - Edge function instrumentation
- NGINX - Module-based instrumentation
Auto-instrumentation pattern (Node.js)
Auto-instrumentation captures HTTP requests, database calls, and framework-specific operations without code changes. Initialize before any application imports:
// tracing.js - must be loaded FIRST via -r flag or import
const { NodeSDK } = require("@opentelemetry/sdk-node");
const { getNodeAutoInstrumentations } = require("@opentelemetry/auto-instrumentations-node");
const { OTLPTraceExporter } = require("@opentelemetry/exporter-trace-otlp-grpc");
const { OTLPMetricExporter } = require("@opentelemetry/exporter-metrics-otlp-grpc");
const { PeriodicExportingMetricReader } = require("@opentelemetry/sdk-metrics");
const sdk = new NodeSDK({
serviceName: process.env.OTEL_SERVICE_NAME || "my-service",
traceExporter: new OTLPTraceExporter({
url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317",
}),
metricReader: new PeriodicExportingMetricReader({
exporter: new OTLPMetricExporter({
url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4317",
}),
}),
instrumentations: [getNodeAutoInstrumentations()],
});
sdk.start();
process.on("SIGTERM", () => sdk.shutdown());

# Run with tracing
node -r ./tracing.js app.js
# Or set via environment
export NODE_OPTIONS="--require ./tracing.js"
node app.js

Auto-instrumentation pattern (Python)
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

OTEL_SERVICE_NAME=my-python-service \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
opentelemetry-instrument python app.py

Auto-instrumentation pattern (Java)
# Download the OTel Java agent
wget https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

OTEL_SERVICE_NAME=my-java-service \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 \
java -javaagent:opentelemetry-javaagent.jar -jar app.jar

Manual instrumentation - adding custom spans
When auto-instrumentation misses business-critical operations, add manual spans:
from opentelemetry import trace
tracer = trace.get_tracer("my-module")
def process_order(order_id):
with tracer.start_as_current_span("process_order") as span:
span.set_attribute("order.id", order_id)
span.set_attribute("order.type", "premium")
# business logic here
span.add_event("payment_processed", {"amount": 99.99})

const { trace, SpanStatusCode } = require("@opentelemetry/api");
const tracer = trace.getTracer("my-module");
function processOrder(orderId) {
  return tracer.startActiveSpan("process_order", (span) => {
    span.setAttribute("order.id", orderId);
    try {
      // business logic
      span.addEvent("payment_processed", { amount: 99.99 });
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
throw err;
} finally {
span.end();
}
});
}

Recording exceptions
Auto-instrumentation records unhandled exceptions automatically for Python, Java, Ruby, and JavaScript. For other languages or custom exception tracking:
Go:
import "go.opentelemetry.io/otel/codes"

span.RecordError(err)
span.SetStatus(codes.Error, err.Error())

.NET:
activity?.RecordException(ex);
activity?.SetStatus(ActivityStatusCode.Error, ex.Message);

Ruby:
span.record_exception(error)
span.status = OpenTelemetry::Trace::Status.error(error.message)

SigNoz Cloud vs self-hosted endpoint config
For SigNoz Cloud, set the endpoint with TLS and ingestion key:
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<region>.signoz.cloud:443
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<your-key>

For self-hosted, point to your SigNoz instance (default: no auth):

OTEL_EXPORTER_OTLP_ENDPOINT=http://<signoz-host>:4317

otel-collector.md
OTel Collector Configuration for SigNoz
Installation methods
| Platform | Method | Config location |
|---|---|---|
| Linux (DEB/RPM) | Package manager, runs as systemd service | /etc/otelcol-contrib/config.yaml |
| Linux (manual) | Tarball extraction, manual process management | User-specified |
| macOS | Tarball (Intel or Apple Silicon) | User-specified |
| Windows | MSI installer, runs as Windows service | Event log integration |
Required ports: 4317 (gRPC), 4318 (HTTP), 8888 (metrics), 1777 (pprof), 13133 (health check).
Full configuration template
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
# Host metrics - CPU, memory, disk, network, load
hostmetrics:
collection_interval: 60s
scrapers:
cpu: {}
disk: {}
load: {}
filesystem: {}
memory: {}
network: {}
paging: {}
process:
mute_process_name_error: true
mute_process_exe_error: true
mute_process_io_error: true
processes: {}
# For Docker: mount host filesystem and set root_path
# root_path: /hostfs
processors:
batch:
send_batch_size: 1000
timeout: 10s
resourcedetection:
detectors: [env, system]
timeout: 2s
system:
hostname_sources: [os]
exporters:
# SigNoz Cloud
otlp/signoz-cloud:
endpoint: "ingest.<region>.signoz.cloud:443"
tls:
insecure: false
headers:
signoz-ingestion-key: "${env:SIGNOZ_INGESTION_KEY}"
# Self-hosted SigNoz
# otlp/signoz-self-hosted:
# endpoint: "<signoz-otel-collector-host>:4317"
# tls:
# insecure: true
# Debug exporter - enable to troubleshoot data flow
# debug:
# verbosity: detailed
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch, resourcedetection]
exporters: [otlp/signoz-cloud]
metrics:
receivers: [otlp, hostmetrics]
processors: [batch, resourcedetection]
exporters: [otlp/signoz-cloud]
logs:
receivers: [otlp]
processors: [batch, resourcedetection]
exporters: [otlp/signoz-cloud]

Common receivers
Filelog receiver (container/application logs)
receivers:
filelog:
include: [/var/log/app/*.log]
start_at: end
operators:
- type: json_parser
timestamp:
parse_from: attributes.time
layout: "%Y-%m-%dT%H:%M:%S.%fZ"

Prometheus receiver (scrape existing Prometheus targets)
receivers:
prometheus:
config:
scrape_configs:
- job_name: "my-app"
scrape_interval: 30s
static_configs:
- targets: ["localhost:9090"]

Database/service receivers
The OTel Collector contrib distribution includes receivers for Redis, PostgreSQL,
MySQL, MongoDB, Kafka, RabbitMQ, Nginx, Apache, and more. Each is configured
under receivers: with service-specific connection parameters.
Docker deployment considerations
When running the collector in Docker, mount the host filesystem for hostmetrics:
# docker-compose.yaml
services:
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
volumes:
- ./config.yaml:/etc/otelcol-contrib/config.yaml
- /:/hostfs:ro # Mount host root for hostmetrics
ports:
- "4317:4317"
- "4318:4318"
environment:
- SIGNOZ_INGESTION_KEY=${SIGNOZ_INGESTION_KEY}

Set `root_path: /hostfs` in the hostmetrics receiver config.
Kubernetes deployment
Use the OpenTelemetry Operator or Helm chart for Kubernetes deployments. The collector typically runs as a DaemonSet (for node-level metrics and logs) and a Deployment (for application traces).
Troubleshooting
- Verify collector starts: Look for "Everything is ready. Begin running and processing data." in logs
- Enable debug exporter: Add the `debug` exporter with `verbosity: detailed` to verify data arrives at the collector
- Check endpoint connectivity: `curl -v https://ingest.<region>.signoz.cloud:443`
- Verify ports: Ensure 4317 and 4318 are not bound by another process
- Check host in SigNoz: Navigate to Infrastructure Monitoring > Hosts tab
query-builder.md
SigNoz Query Builder Reference
Query builder components
The query builder is used across Logs Explorer, Traces Explorer, Metrics Explorer, Dashboards, and Alert rules. It provides a unified interface for all three signal types.
Filtering
Filters narrow data by attribute values. Click the Search Filter field to select from available attributes.
Operators
| Operator | Description | Example |
|---|---|---|
| `=` | Exact match | `service.name = demo-app` |
| `!=` | Not equal | `status_code != 200` |
| `IN` | Match any in list | `method IN [GET, POST]` |
| `NOT_IN` | Exclude list values | `env NOT_IN [staging, dev]` |
Multiple filters combine using AND logic.
Aggregation functions
Basic aggregations
| Function | Description |
|---|---|
| Count | Total number of matching records |
| Count Distinct | Unique values of an attribute |
| Sum | Sum of numeric attribute values |
| Avg | Average of numeric attribute values |
| Min | Minimum value |
| Max | Maximum value |
Percentile aggregations
P05, P10, P20, P25, P50, P75, P90, P95, P99 - calculate distribution percentiles for latency analysis, response times, and other numeric metrics.
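As a reference point, the nearest-rank convention for computing a percentile is sketched below. ClickHouse/SigNoz may use interpolating quantile estimators instead, so treat this as intuition only:

```python
import math

def nearest_rank_percentile(samples, p):
    """P-th percentile by nearest rank: the value at ceil(p/100 * n) in sorted order."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]
```

E.g. the P99 of 100 latency samples is (roughly) the 99th-smallest value, which is why a single extreme outlier barely moves P50 but dominates P99.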
Rate aggregations
| Function | Description |
|---|---|
| Rate | Per-second rate of change |
| Rate Sum | Rate of the sum |
| Rate Avg | Rate of the average |
| Rate Min | Rate of the minimum |
| Rate Max | Rate of the maximum |
Grouping
Group results by any attribute to segment data. Common groupings:
- `service.name` - per-service breakdown
- `method` - HTTP method breakdown
- `status_code` - response code distribution
- `host.name` - per-host analysis
When combined with aggregation: "count errors per endpoint" or "p99 latency per service".
Result manipulation
| Feature | Description | Example |
|---|---|---|
| Order By | Sort results | timestamp DESC |
| Aggregate Every | Time bucket size | 60s for 1-minute intervals |
| Limit | Cap result count | 100 |
| Having | Filter aggregated results | count > 10 |
| Legend Format | Dynamic labels | {{service.name}} - {{method}} |
Multiple queries and formulas
Execute multiple independent queries (A, B, C...) and combine them with formulas:
- `A / B` - ratio of two queries
- `A - B` - difference
- Apply functions to queries or formula results
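Conceptually, a formula like A / B combines the two query results point-by-point on aligned timestamps. A sketch of that alignment (toy data shapes, not SigNoz internals):

```python
def formula_ratio(series_a: dict, series_b: dict) -> dict:
    """Point-wise A / B over timestamps present in both series (skipping B == 0)."""
    return {
        ts: series_a[ts] / series_b[ts]
        for ts in series_a
        if series_b.get(ts)  # skip missing or zero denominators
    }
```

This is the shape of a typical "error rate" panel: query A counts errors, query B counts all requests, and the formula divides them per time bucket.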
Mathematical functions
| Category | Functions |
|---|---|
| Trigonometric | sin, cos, tan, asin, acos, atan |
| Logarithmic | log, ln, log2, log10 |
| Statistical | sqrt, exp, abs |
| Time | now |
Metrics-specific features
Temporal vs spatial aggregation
- Temporal aggregation - consolidates data points across time (e.g. 5-minute averages)
- Spatial aggregation - merges metrics across dimensions (container names, regions)
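The distinction can be sketched in a few lines (toy data shapes, not SigNoz internals):

```python
from collections import defaultdict

def temporal_avg(points, bucket_seconds=300):
    """Temporal: consolidate (timestamp, value) points into fixed time buckets."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % bucket_seconds].append(value)
    return {b: sum(vs) / len(vs) for b, vs in buckets.items()}

def spatial_sum(per_dimension):
    """Spatial: merge the same metric across dimensions (containers, regions)."""
    merged = defaultdict(float)
    for series in per_dimension.values():
        for ts, value in series.items():
            merged[ts] += value
    return dict(merged)
```

A dashboard panel usually applies both: first temporal aggregation to the chosen "aggregate every" interval, then spatial aggregation across whatever is not in the group-by.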
Extended analysis functions
| Function | Description |
|---|---|
| Cut Off Min | Exclude values below threshold |
| Cut Off Max | Exclude values above threshold |
| Absolute | Convert to absolute values |
| Log (log2, log10) | Logarithmic transformation |
| EWMA 3/5/7 | Exponentially weighted moving average for smoothing |
| Time Shift | Compare with data from N seconds ago |
Functions can be chained - apply EWMA smoothing, then time shift, then cut off.
Dashboard panel types
SigNoz dashboards support:
- Time series - line/area charts for temporal data
- Bar charts - categorical comparisons
- Pie charts - proportional breakdowns
- Tables - tabular data display
- Value panels - single metric display
Dashboard management
- Drag-and-drop panel positioning
- Resize by dragging bottom-left corner
- Tag and describe dashboards for organization
- Public sharing with configurable time ranges
- Import pre-built dashboards from SigNoz GitHub repo (JSON format)
Alert query patterns
Alerts use the same query builder. Common patterns:
# High error rate alert
Signal: Logs
Filter: severity_text = ERROR
Aggregation: Count
Aggregate Every: 5m
Threshold: > 100
# Slow endpoint alert
Signal: Traces
Filter: service.name = api-gateway
Aggregation: P99(duration_nano)
Group By: operation
Threshold: > 5000000000 (5 seconds in nanoseconds)
# Host CPU alert
Signal: Metrics
Metric: system.cpu.utilization
Aggregation: Avg
Group By: host.name
Threshold: > 0.85