
Frontend Traces Linking To Backend Traces #705

Open
tadkarshirish opened this issue Oct 11, 2024 · 2 comments
Labels
bug

Comments

@tadkarshirish

tadkarshirish commented Oct 11, 2024

## Description

We have enabled frontend observability for our Angular application using the Grafana Faro Web SDK and Faro Web Tracing CDN. The frontend traces are being sent to Grafana Tempo via Grafana Alloy, and they are visible in Tempo.

For our Java backend services, we are using the OpenTelemetry Operator with auto-instrumentation. Both the frontend and backend traces are being generated successfully but are appearing separately in Grafana Tempo.

According to the documentation, trace propagation should automatically link the frontend and backend traces, but this is not happening in Grafana Tempo. We have confirmed that the traceparent header is present on the backend API request and matches the frontend trace ID; despite this, the frontend and backend traces appear as separate, unlinked traces.

## Steps to reproduce

Versions:

- Grafana Faro Web SDK: 1.10.2
- Faro Web Tracing CDN: 1.10.2
- OpenTelemetry Operator Helm Chart: v0.94.0
- Java Auto-instrumentation Agent: 1.32.1

Frontend Setup:
We integrated the Grafana Faro Web SDK and Faro Web Tracing CDN into the index.html of our Angular application. The app initiates a GET API call to a backend service for data.

The Angular app sends traces correctly, and they are visible in Grafana Tempo. Below is the index.html setup used to initialize Faro:

```html
<script>
  window.addEventListener('environmentLoaded', (event) => {
    const { app_name, app_namespace, app_env } = event.detail;

    window.faro = window.GrafanaFaroWebSdk.initializeFaro({
      url: 'https://tempo.com/collect',
      apiKey: 'your-api-key',
      app: {
        name: app_name,
        namespace: app_namespace,
        environment: app_env,
        version: '1.0.0',
      },
      instrumentations: [...window.GrafanaFaroWebSdk.getWebInstrumentations()],
    });

    window.addTracing();
  });

  window.addTracing = () => {
    if (window.GrafanaFaroWebSdk && window.GrafanaFaroWebTracing) {
      const tracingInstrumentation = new window.GrafanaFaroWebTracing.TracingInstrumentation();
      window.faro.instrumentations.add(tracingInstrumentation);

      const { trace, context } = window.faro.api.getOTEL();
      const tracer = trace.getTracer('default');
      const span = tracer.startSpan('initialization');

      context.with(trace.setSpan(context.active(), span), () => {
        span.setAttribute('page_url', window.location.pathname);
        span.end();
      });
    }
  };
</script>
```
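
One detail worth noting for cross-origin setups: the tracing instrumentation only attaches the traceparent header to requests whose URLs it has been told to propagate to, so a backend on a different origin needs to be allow-listed explicitly. A minimal sketch, assuming a hypothetical backend origin of https://api.example.com (adjust the pattern to the real host):

```html
<script>
  // Sketch only: tell Faro's fetch/XHR instrumentation which cross-origin
  // URLs should receive the traceparent header. The regex is an assumption.
  const tracingInstrumentation = new window.GrafanaFaroWebTracing.TracingInstrumentation({
    instrumentationOptions: {
      propagateTraceHeaderCorsUrls: [new RegExp('https://api\\.example\\.com/.*')],
    },
  });
  window.faro.instrumentations.add(tracingInstrumentation);
</script>
```

In this report the header does reach the backend, so the CORS allow-list is evidently not the blocker here, but it is a common first thing to rule out in setups like this.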

Observations:
When a frontend API call is made, we can see the traceparent header in the request sent to the backend service, and it matches the frontend trace:

```
traceparent: 00-74jfk49nslkc0485b0859djsc5-3ldskskxcc-01
```

The trace ID sent in the request headers to the backend is the same one the frontend generated, but the backend service generates a new trace ID for the request, so the traces do not link in Grafana Tempo.
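
To double-check this observation, the trace ID the frontend is working with can be read from Faro's OTel handle (the same getOTEL() API used in the init script above) and compared against the traceparent header shown in the browser's network tab. A minimal sketch:

```html
<script>
  // Sketch: log the frontend's current trace ID for comparison with the
  // traceparent header on the outgoing request. Outside a context.with()
  // callback there may be no active span, hence the guard.
  const { trace, context } = window.faro.api.getOTEL();
  const activeSpan = trace.getSpan(context.active());
  console.log('frontend trace id:', activeSpan ? activeSpan.spanContext().traceId : '(no active span)');
</script>
```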

Backend Setup:
We are using OpenTelemetry Operator with the following configuration for auto-instrumenting our Java backend services:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
spec:
  exporter:
    endpoint: "http://centralcollector.tracing.svc.cluster.local:4317"
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: always_on
  java:
    image: "imagepath/tracing/autoinstrumentation/java:1.32.1"
    resources:
      limits:
        cpu: 500m
        memory: 450Mi
      requests:
        cpu: 150m
        memory: 230Mi
  env:
    - name: OTEL_METRICS_EXPORTER
      value: none
    - name: OTEL_LOG_LEVEL
      value: debug
```
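
For reference, spec.propagators in this manifest maps to the OTEL_PROPAGATORS environment variable of the injected Java agent. If only W3C trace context should be honored (so the frontend's traceparent is never shadowed by b3 headers), the list can be narrowed, as in this hedged sketch; whether it applies depends on which propagator the service actually used at runtime:

```yaml
# Sketch only: restrict the injected agent to W3C trace context + baggage so
# the frontend's traceparent header is what continues the trace.
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: java-instrumentation
spec:
  propagators:
    - tracecontext
    - baggage
```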

## Expected behavior
We expect Grafana Tempo to show the complete trace flow, from the frontend request in the Angular application to the backend response in the Java service. The traces from the frontend and backend should be automatically linked using trace propagation.

@tadkarshirish added the bug label on Oct 11, 2024
@undefinedhuman

Hi there,
maybe this helps: for me, the problem was that my backend did not attach the "server-timing" header to the response:

Grafana Cloud Docs

I solved this via the recommended approach of integrating the middleware in my backend: https://grafana.com/docs/grafana-cloud/monitor-applications/frontend-observability/apm-integration/#nodejs-example

Statement from the docs:

> After the server sends a server-timing header, the RUM instrumentation automatically picks it up. From the user session view, you can navigate directly to the trace that generated the response.
> In this example, the initial navigation links to the request loading the page. Click the Services action to the right of the row to continue the investigation in the Application Observability space.
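
For reference, the approach in the linked docs amounts to echoing the active trace context back in a server-timing response header. A minimal Express-style sketch, assuming the service is already instrumented with the OpenTelemetry Node SDK (this is not the exact middleware from the docs):

```javascript
const express = require('express');
const { trace } = require('@opentelemetry/api');

const app = express();

// Sketch: attach the current trace context to every response as a
// server-timing header so the Faro RUM instrumentation can link the
// user session to the backend trace.
app.use((req, res, next) => {
  const activeSpan = trace.getActiveSpan();
  if (activeSpan) {
    const { traceId, spanId } = activeSpan.spanContext();
    res.setHeader('server-timing', `traceparent;desc="00-${traceId}-${spanId}-01"`);
  }
  next();
});
```

For cross-origin requests the header must also be listed in Access-Control-Expose-Headers, otherwise the browser-side SDK cannot read it.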

@tadkarshirish
Author

> maybe this helps: for me, the problem was that my backend did not attach the "server-timing" header to the response: [...]

Hi,

Thank you for your response.

After investigating further, we identified that the issue was a trace-propagation mismatch. In our setup both the frontend and backend are auto-instrumented, but the frontend was using the W3C tracecontext format (generating the traceparent header), while the backend was using the b3 propagation format.

To resolve this, we updated the configuration at the OpenTelemetry Collector level to align the traceparent format with b3 propagation. With this change, the traces now link as expected.

For reference, here is the collector manifest with the updated attributes processor:

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: centralcollector
  labels:
    {{- include "opentelemetry-operator.labels" . | nindent 4 }}
    {{- with .Values.CollectorServiceLabels }}
      {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  mode: deployment
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            cors:
              allowed_origins:
                - "*"
            endpoint: 0.0.0.0:4318

    processors:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15

      attributes:
        actions:
          - key: "traceparent"
            action: "insert"
            value: "00-${x-b3-traceid}-${x-b3-spanid}-01"
          - key: "x-b3-traceid"
            action: "delete"
          - key: "x-b3-spanid"
            action: "delete"

    exporters:
      logging:
        loglevel: debug
      otlphttp:
        endpoint: {{ .Values.endpoint }}
        tls:
          insecure: false

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, attributes]  # Order matters
          exporters: [logging, otlphttp]
```
Thank you for your help.
