
Conversation

@wololowarrior (Contributor) commented Sep 7, 2025

Which problem is this PR solving?

Description of the changes

Links to:

Changes:

  • This PR adds support for filtering SPM metrics using span attributes such as environment, or any other defined attribute.

How was this change tested?

Prometheus Setup

jaeger/cmd/jaeger/config-spm.yaml

service:
  extensions: [jaeger_storage, jaeger_query]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes, batch]
      exporters: [jaeger_storage_exporter, spanmetrics]
    metrics/spanmetrics:
      receivers: [spanmetrics]
      exporters: [prometheus]
  telemetry:
    resource:
      service.name: jaeger
    metrics:
      level: detailed
      readers:
        - pull:
            exporter:
              prometheus:
                host: 0.0.0.0
                port: 8888
    logs:
      level: INFO

extensions:
  jaeger_query:
    storage:
      traces: some_storage
      metrics: some_metrics_storage
  jaeger_storage:
    backends:
      some_storage:
        memory:
          max_traces: 100000
    metric_backends:
      some_metrics_storage:
        prometheus:
          endpoint: http://prometheus:9090
          normalize_calls: true
          normalize_duration: true

# The spanmetrics connector generates metrics from spans
connectors:
  spanmetrics:
    dimensions:
      - name: operation
        default: unknown-operation
      # Add tag dimensions for filtering
      - name: environment
        default: ""
      - name: region
        default: ""
    dimensions_cache_size: 1000
    aggregation_temporality: "AGGREGATION_TEMPORALITY_CUMULATIVE"
    metrics_flush_interval: 15s

receivers:
  otlp:
    protocols:
      grpc:
      http:
        endpoint: "0.0.0.0:4318"

processors:
  # The attributes processor copies resource attributes to span attributes
  # This is required for tag filtering in metrics
  attributes:
    actions:
      # Copy resource attributes to span attributes for metrics filtering
      - key: environment
        action: insert
        from_attribute: environment
      - key: region
        action: insert
        from_attribute: region
  batch:

exporters:
  jaeger_storage_exporter:
    trace_storage: some_storage
  prometheus:
    endpoint: "0.0.0.0:8889"

jaeger/docker-compose/monitor/docker-compose.yml

services:
  jaeger:
    networks:
      backend:
        # This is the host name used in Prometheus scrape configuration.
        aliases: [ spm_metrics_source ]
    image: jaegertracing/jaeger:${JAEGER_VERSION:-latest}
    volumes:
      - "./jaeger-ui.json:/etc/jaeger/jaeger-ui.json" # Do we need this for v2 ? Seems to be running without this.
      - "../../cmd/jaeger/config-spm.yaml:/etc/jaeger/config.yml"
    command: ["--config", "/etc/jaeger/config.yml"]
    ports:
      - "16686:16686"
      - "8888:8888"
      - "8889:8889"
      - "4317:4317"
      - "4318:4318"

  microsim:
    networks:
      - backend
    image: yurishkuro/microsim:v0.5.0@sha256:b7ee2dee51d2c9fd94de08a80278cfbf5a144ad0f22efce50f3d3be15cbfa2c7
    command: "-d 24h -s 500ms"
    environment:
      - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4318/v1/traces
      - OTEL_RESOURCE_ATTRIBUTES=environment=production,region=us-east-1
    depends_on:
      - jaeger

  microsim-2:
    networks:
      - backend
    image: yurishkuro/microsim:v0.5.0@sha256:b7ee2dee51d2c9fd94de08a80278cfbf5a144ad0f22efce50f3d3be15cbfa2c7
    command: "-d 24h -s 5000ms"
    environment:
      - OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://jaeger:4318/v1/traces
      - OTEL_RESOURCE_ATTRIBUTES=environment=staging,region=us-east-2
    depends_on:
      - jaeger

  prometheus:
    networks:
      - backend
    image: prom/prometheus:v3.5.0@sha256:63805ebb8d2b3920190daf1cb14a60871b16fd38bed42b857a3182bc621f4996
    volumes:
      - "./prometheus.yml:/etc/prometheus/prometheus.yml"
    ports:
      - "9090:9090"

networks:
  backend:

  • API calls
curl "http://localhost:16686/api/metrics/latencies?service=redis&groupByOperation=true&lookback=900000&quantile=0.95&ratePer=600000&spanKind=server&step=60000&tag=environment:staging" | jq
Response

{
  "name": "service_operation_latencies",
  "type": "GAUGE",
  "help": "0.95th quantile latency, grouped by service & operation",
  "metrics": [
    {
      "labels": [
        {
          "name": "service_name",
          "value": "redis"
        },
        {
          "name": "operation",
          "value": "/FindDriverIDs"
        }
      ],
      "metricPoints": [
        {
          "gaugeValue": {
            "doubleValue": 47.94642857142857
          },
          "timestamp": "2025-09-09T07:26:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.958456973293764
          },
          "timestamp": "2025-09-09T07:27:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.952095808383234
          },
          "timestamp": "2025-09-09T07:28:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95833333333333
          },
          "timestamp": "2025-09-09T07:29:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.9646017699115
          },
          "timestamp": "2025-09-09T07:30:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.96449704142012
          },
          "timestamp": "2025-09-09T07:31:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.964285714285715
          },
          "timestamp": "2025-09-09T07:32:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.958456973293764
          },
          "timestamp": "2025-09-09T07:33:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.9646017699115
          },
          "timestamp": "2025-09-09T07:34:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.96439169139466
          },
          "timestamp": "2025-09-09T07:35:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.96428571428571
          },
          "timestamp": "2025-09-09T07:36:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.96439169139467
          },
          "timestamp": "2025-09-09T07:37:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.97058823529411
          },
          "timestamp": "2025-09-09T07:38:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95833333333333
          },
          "timestamp": "2025-09-09T07:39:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95833333333333
          },
          "timestamp": "2025-09-09T07:40:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95223880597015
          },
          "timestamp": "2025-09-09T07:41:24.495Z"
        }
      ]
    },
    {
      "labels": [
        {
          "name": "operation",
          "value": "/GetDriver"
        },
        {
          "name": "service_name",
          "value": "redis"
        }
      ],
      "metricPoints": [
        {
          "gaugeValue": {
            "doubleValue": 47.9391143911439
          },
          "timestamp": "2025-09-09T07:26:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.93408662900188
          },
          "timestamp": "2025-09-09T07:27:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.92982456140351
          },
          "timestamp": "2025-09-09T07:28:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.932047750229565
          },
          "timestamp": "2025-09-09T07:29:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.93072014585232
          },
          "timestamp": "2025-09-09T07:30:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.93235831809872
          },
          "timestamp": "2025-09-09T07:31:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.940465116279064
          },
          "timestamp": "2025-09-09T07:32:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.940298507462686
          },
          "timestamp": "2025-09-09T07:33:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.94216417910447
          },
          "timestamp": "2025-09-09T07:34:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.94066985645933
          },
          "timestamp": "2025-09-09T07:35:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.93767705382436
          },
          "timestamp": "2025-09-09T07:36:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.94579439252337
          },
          "timestamp": "2025-09-09T07:37:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.94829178208679
          },
          "timestamp": "2025-09-09T07:38:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95229357798165
          },
          "timestamp": "2025-09-09T07:39:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.952511415525116
          },
          "timestamp": "2025-09-09T07:40:24.495Z"
        },
        {
          "gaugeValue": {
            "doubleValue": 47.95176252319109
          },
          "timestamp": "2025-09-09T07:41:24.495Z"
        }
      ]
    }
  ]
}

  • Negative Test
curl "http://localhost:16686/api/metrics/latencies?service=redis&groupByOperation=true&lookback=900000&quantile=0.95&ratePer=600000&spanKind=server&step=60000&tag=environment:staging1" | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   136  100   136    0     0   7198      0 --:--:-- --:--:-- --:--:--  7555
{
  "name": "service_operation_latencies",
  "type": "GAUGE",
  "help": "0.95th quantile latency, grouped by service & operation",
  "metrics": []
}

Checklist


codecov bot commented Sep 7, 2025

Codecov Report

❌ Patch coverage is 95.94595% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.61%. Comparing base (fa9a5e7) to head (db45bb3).

Files with missing lines Patch % Lines
...ernal/storage/v1/elasticsearch/spanstore/reader.go 0.00% 2 Missing ⚠️
...storage/metricstore/elasticsearch/query_builder.go 88.88% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #7504      +/-   ##
==========================================
+ Coverage   96.59%   96.61%   +0.01%     
==========================================
  Files         377      377              
  Lines       23084    23142      +58     
==========================================
+ Hits        22299    22359      +60     
+ Misses        598      596       -2     
  Partials      187      187              
Flag Coverage Δ
badger_v1 9.02% <0.00%> (-0.01%) ⬇️
badger_v2 1.69% <0.00%> (-0.01%) ⬇️
cassandra-4.x-v1-manual 11.68% <0.00%> (-0.01%) ⬇️
cassandra-4.x-v2-auto 1.68% <0.00%> (-0.01%) ⬇️
cassandra-4.x-v2-manual 1.68% <0.00%> (-0.01%) ⬇️
cassandra-5.x-v1-manual 11.68% <0.00%> (-0.01%) ⬇️
cassandra-5.x-v2-auto 1.68% <0.00%> (-0.01%) ⬇️
cassandra-5.x-v2-manual 1.68% <0.00%> (-0.01%) ⬇️
elasticsearch-6.x-v1 16.59% <0.00%> (-0.01%) ⬇️
elasticsearch-7.x-v1 16.63% <0.00%> (-0.01%) ⬇️
elasticsearch-8.x-v1 16.77% <0.00%> (-0.01%) ⬇️
elasticsearch-8.x-v2 1.69% <0.00%> (-0.01%) ⬇️
elasticsearch-9.x-v2 1.69% <0.00%> (-0.01%) ⬇️
grpc_v1 10.23% <0.00%> (-0.01%) ⬇️
grpc_v2 1.69% <0.00%> (-0.01%) ⬇️
kafka-3.x-v1 9.68% <0.00%> (-0.01%) ⬇️
kafka-3.x-v2 1.69% <0.00%> (-0.01%) ⬇️
memory_v2 1.69% <0.00%> (-0.01%) ⬇️
opensearch-1.x-v1 16.67% <0.00%> (-0.01%) ⬇️
opensearch-2.x-v1 16.67% <0.00%> (-0.01%) ⬇️
opensearch-2.x-v2 1.69% <0.00%> (-0.01%) ⬇️
opensearch-3.x-v2 1.69% <0.00%> (-0.01%) ⬇️
query 1.69% <0.00%> (-0.01%) ⬇️
tailsampling-processor 0.46% <0.00%> (-0.01%) ⬇️
unittests 95.60% <95.94%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

timeNow: time.Now,
}
_, err = parser.parseMetricsQueryParams(request)
require.Error(t, err)
Member

Use errorContains to be specific

Contributor Author

Done

Member

There is a function require.ErrorContains(); you don't need to have two separate checks.
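
For reference, a minimal sketch of the single-call form being suggested, applied to the snippet above (the expected error substring here is a placeholder, not taken from the PR):

	// require.ErrorContains asserts that err is non-nil AND that its message
	// contains the given substring, replacing the separate require.Error check.
	_, err = parser.parseMetricsQueryParams(request)
	require.ErrorContains(t, err, "malformed") // "malformed" is an illustrative substring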

}

// NewQueryBuilder creates a new QueryBuilder instance.
func NewQueryBuilder(client es.Client, cfg config.Configuration, logger *zap.Logger) *QueryBuilder {
// Create SpanReader parameters for reusing tag query logic
Member

Query builder sounds like something that builds the query, not something that executes it. Why does it need reader?

Contributor Author

I wanted to reuse

func (s *SpanReader) buildTagQuery(k string, v string) elastic.Query {
	objectTagListLen := len(objectTagFieldList)
	queries := make([]elastic.Query, len(nestedTagFieldList)+objectTagListLen)
	kd := s.dotReplacer.ReplaceDot(k)
	for i := range objectTagFieldList {
		queries[i] = s.buildObjectQuery(objectTagFieldList[i], kd, v)
	}
	for i := range nestedTagFieldList {
		queries[i+objectTagListLen] = s.buildNestedQuery(nestedTagFieldList[i], k, v)
	}
	// but configuration can change over time
	return elastic.NewBoolQuery().Should(queries...)
}

It defines a global aggregation of spans based on tags.

Also, this approach was suggested by @pipiland2612 in the meta issue.

Member

I wanted to reuse

No, this is a very bad way to organize the code and it creates cross-module dependencies. If we need this shared functionality, let's do some refactoring first that moves it to a shared component that both span storage and metrics storage can reuse.

Member

Not to mention how you would even pass the ES SpanReader, since the storage extension instantiates those two implementations completely independently.

Contributor Author

Moving to a shared location makes sense. Should I put it under jaeger/internal/storage/elasticsearch? Maybe a new file under query/?
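
A minimal sketch of what such a shared helper could look like, assuming it lives in a new, hypothetical package (names, location, and the olivere/elastic import are illustrative; the SpanReader's dot-replacement handling is omitted for brevity):

// Hypothetical shared package, e.g. internal/storage/elasticsearch/tagquery
package tagquery

import (
	"fmt"

	"github.com/olivere/elastic/v7"
)

// BuildTagQuery matches a tag key/value across object-style and nested tag fields,
// so that both the span reader and the metrics query builder can reuse the logic
// without metrics storage depending on the span reader.
func BuildTagQuery(key, value string, objectFields, nestedFields []string) elastic.Query {
	queries := make([]elastic.Query, 0, len(objectFields)+len(nestedFields))
	for _, field := range objectFields {
		// e.g. match on "tag.environment" = value
		queries = append(queries, elastic.NewMatchQuery(fmt.Sprintf("%s.%s", field, key), value))
	}
	for _, field := range nestedFields {
		// e.g. nested query on "tags" with key/value sub-fields
		nested := elastic.NewBoolQuery().Must(
			elastic.NewMatchQuery(field+".key", key),
			elastic.NewMatchQuery(field+".value", value),
		)
		queries = append(queries, elastic.NewNestedQuery(field, nested))
	}
	return elastic.NewBoolQuery().Should(queries...)
}

Both the SpanReader and the metrics QueryBuilder could then call this helper, instead of the metrics store constructing or borrowing a SpanReader.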

"http.method": "GET",
},
wantPromQlQuery: `histogram_quantile(0.95, sum(rate(duration_bucket{service_name =~ "emailservice", ` +
`span_kind =~ "SPAN_KIND_SERVER", http_method="GET"}[10m])) by (service_name,le))`,
Member

Why do we have tildes in different places for some of the labels?

Contributor Author

It's because I've used an exact match for the tag labels in PromQL.
However, the existing code uses a regex match for the service name and span kind, which is why those labels get a tilde (=~) in the query.
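
To illustrate the point, a small sketch (not the PR's actual code) of how the selector might be assembled, keeping the pre-existing regex matchers for service_name/span_kind and adding exact matchers for user-supplied tags:

package promqlsketch

import (
	"fmt"
	"sort"
	"strings"
)

// buildLabelMatchers keeps the existing =~ (regex) matchers for service_name and
// span_kind, and appends = (exact) matchers for each tag, which is why the tilde
// appears on some labels but not on others.
func buildLabelMatchers(serviceName, spanKind string, tags map[string]string) string {
	matchers := []string{
		fmt.Sprintf(`service_name =~ %q`, serviceName),
		fmt.Sprintf(`span_kind =~ %q`, spanKind),
	}
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for stable query strings
	for _, k := range keys {
		matchers = append(matchers, fmt.Sprintf(`%s=%q`, k, tags[k]))
	}
	return "{" + strings.Join(matchers, ", ") + "}"
}

For example, buildLabelMatchers("emailservice", "SPAN_KIND_SERVER", map[string]string{"http_method": "GET"}) yields the same selector as the test expectation quoted above.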

Signed-off-by: Harshil Gupta <harshilgupta1808@gmail.com>
Signed-off-by: Harshil Gupta <harshilgupta1808@gmail.com>
Signed-off-by: Harshil Gupta <harshilgupta1808@gmail.com>
@wololowarrior force-pushed the support-metrics-filtering-elasticsearch branch from f10bcb5 to 5fe9866 on September 8, 2025 12:53

github-actions bot commented Sep 8, 2025

Metrics Comparison Summary

Total changes across all snapshots: 53

Detailed changes per snapshot

summary_metrics_snapshot_cassandra

📊 Metrics Diff Summary

Total Changes: 53

  • 🆕 Added: 53 metrics
  • ❌ Removed: 0 metrics
  • 🔄 Modified: 0 metrics

🆕 Added Metrics

  • http_server_request_body_size_bytes (18 variants)
View diff sample
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="+Inf",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="0",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="10",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="100",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="1000",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="10000",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="25",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
...
  • http_server_request_duration_seconds (17 variants)
View diff sample
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="+Inf",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.005",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.01",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.025",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.05",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.075",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_request_duration_seconds{http_request_method="GET",http_response_status_code="503",le="0.1",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
...
  • http_server_response_body_size_bytes (18 variants)
View diff sample
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="+Inf",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="0",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="10",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="100",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="1000",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="10000",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
+http_server_response_body_size_bytes{http_request_method="GET",http_response_status_code="503",le="25",network_protocol_name="http",network_protocol_version="1.1",otel_scope_name="go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp",otel_scope_schema_url="",otel_scope_version="0.62.0",server_address="localhost",server_port="13133",url_scheme="http"}
...


@wololowarrior changed the title from "Support metrics using attributes filtering prometheus and elastic search" to "[SPM] Filter metrics on page by tag" on Sep 9, 2025
Signed-off-by: Harshil Gupta <harshilgupta1808@gmail.com>
@wololowarrior force-pushed the support-metrics-filtering-elasticsearch branch from 58bc77a to db45bb3 on September 9, 2025 16:07
@wololowarrior marked this pull request as ready for review September 11, 2025 09:25
@wololowarrior requested a review from a team as a code owner September 11, 2025 09:25