
opentelemetry-tracing: OpenTelemetry Tracing QuickStart

The opentelemetry-tracing quickstart demonstrates the use of the OpenTelemetry tracing specification in WildFly.

What is it?

OpenTelemetry is a set of APIs, SDKs, tooling, and integrations designed for the creation and management of telemetry data such as traces, metrics, and logs. OpenTelemetry support in WildFly is currently limited to traces. WildFly provides out-of-the-box tracing of Jakarta REST calls, as well as container-managed Jakarta REST Client invocations. Additionally, applications can have a Tracer instance injected in order to create and manage custom `Span`s as required. These traces are exported to an OpenTelemetry Collector instance listening on the same host.

Architecture

In this quickstart, we have a collection of CDI beans and REST endpoints that expose functionalities of the OpenTelemetry support in WildFly.

Use of the WILDFLY_HOME and QUICKSTART_HOME Variables

In the following instructions, replace WILDFLY_HOME with the actual path to your WildFly installation. The installation path is described in detail here: Use of WILDFLY_HOME and JBOSS_HOME Variables.

When you see the replaceable variable QUICKSTART_HOME, replace it with the path to the root directory of all of the quickstarts.

Prerequisites

To complete this guide, you will need:

  • less than 15 minutes

  • JDK 11+ installed with JAVA_HOME configured appropriately

  • Apache Maven 3.5.3+

  • Docker Compose, or alternatively Podman Compose
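If you want to confirm the tool versions before you begin, the following commands (assuming the tools are on your PATH) print them:

$ java -version
$ mvn -version
$ docker-compose --version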


Steps

Start the WildFly Standalone Server

  1. Open a terminal and navigate to the root of the WildFly directory.

  2. Start the WildFly server with the default profile by typing the following command.

    $ WILDFLY_HOME/bin/standalone.sh 
    Note
    For Windows, use the WILDFLY_HOME\bin\standalone.bat script.

Configure the Server

You enable OpenTelemetry by running JBoss CLI commands. For your convenience, this quickstart batches the commands into a configure-opentelemetry.cli script provided in the root directory of this quickstart.

  1. Review the configure-opentelemetry.cli file in the root of this quickstart directory. This script adds the configuration that enables OpenTelemetry for the quickstart components. Comments in the script describe the purpose of each block of commands, and a sketch of typical commands appears after these steps.

  2. Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:

    $ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=configure-opentelemetry.cli
    Note
    For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.

    You should see the following result when you run the script:

    The batch executed successfully
    process-state: reload-required
  3. Reload the server configuration for the changes to take effect:

    $ WILDFLY_HOME/bin/jboss-cli.sh --connect --commands=reload
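For reference, a script like this typically contains JBoss CLI commands along the following lines. This is an illustrative sketch only, not the quickstart's actual script; the attribute name and endpoint value are assumptions, and the authoritative commands are in configure-opentelemetry.cli:

# Illustrative sketch; see configure-opentelemetry.cli for the real commands.
# Enable the OpenTelemetry extension and subsystem.
/extension=org.wildfly.extension.opentelemetry:add()
/subsystem=opentelemetry:add()
# Point the trace exporter at the Collector's OTLP endpoint (assumed value).
/subsystem=opentelemetry:write-attribute(name=endpoint, value=http://localhost:4317)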

Starting the OpenTelemetry Collector

By default, WildFly publishes traces every 10 seconds, so you will soon start seeing errors in the server log about a refused connection.

This is because WildFly is now configured to publish traces to a collector that is not yet running, so the next step is to start one. The simplest way is to use Docker Compose to start an instance of the OpenTelemetry Collector.

The Docker Compose configuration file is docker-compose.yaml:

version: "3"
volumes:
  shared-volume:
    # - logs:/var/log
services:
  otel-collector:
    image: otel/opentelemetry-collector:0.89.0
    command: [--config=/etc/otel-collector-config.yaml]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml:Z
    ports:
      - 1888:1888 # pprof extension
      - 13133:13133 # health_check extension
      - 4317:4317 # OTLP gRPC receiver
      - 4318:4318 # OTLP http receiver
      - 55679:55679 # zpages extension

The Collector server configuration file is otel-collector-config.yaml:

extensions:
  health_check:
  pprof:
    endpoint: 0.0.0.0:1777
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  logging:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [logging]

  extensions: [health_check, pprof, zpages]

We can now bring up the collector instance:

$ docker-compose up

The service should be available almost immediately, which you can verify by looking for the log entry "Everything is ready. Begin running and processing data."
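Because the Collector configuration above enables the health_check extension, you should also be able to verify that the Collector is up with a plain HTTP request; the extension listens on port 13133 by default, and a healthy Collector should answer with an HTTP 200 response:

$ curl http://localhost:13133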

Note

You may use Podman as an alternative to Docker if you prefer; in that case, the command is podman-compose up.

Note

If your environment does not support Docker or Podman, please refer to the OpenTelemetry Collector documentation for alternative ways to install and run the Collector. Make sure you use the same Collector version as the one in the docker-compose.yaml above; other versions may not work with this configuration.

Note

Part of the value of OpenTelemetry is its vendor-agnostic approach to exporting its various supported signals. As such, this demo will only log the incoming traces, leaving the relaying of those signals to a downstream aggregation platform as an exercise for the reader.

Now we can start adding our custom spans from our application.

Creating traces

Implicit tracing of REST resources

The OpenTelemetry support in WildFly provides an implicit tracing of all Jakarta REST resources. That means that for all applications, WildFly will automatically:

  • extract the Span context from the incoming Jakarta REST request

  • start a new Span on incoming Jakarta REST request and close it when the request is completed

  • inject Span context to any outgoing Jakarta REST request

  • start a Span for any outgoing Jakarta REST request and finish the Span when the request is completed
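For example, once the application is deployed (see the steps below), you can exercise the context-extraction behavior by sending a request that carries a W3C traceparent header; the server span recorded for that request should then be reported under the supplied trace ID. The header value here is the sample from the W3C Trace Context specification:

$ curl -H 'traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01' \
    http://localhost:8080/opentelemetry-tracing/implicit-trace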

Explicit tracing

The OpenTelemetry API also supports explicit tracing should your application require it:

package org.wildfly.quickstarts.opentelemetry;

import jakarta.enterprise.context.RequestScoped;
import jakarta.inject.Inject;

import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

@RequestScoped
public class ExplicitlyTracedBean {

    // A server-configured Tracer is made available for CDI injection.
    @Inject
    private Tracer tracer;

    public String getHello() {
        // Start a span and make it current so child spans are parented to it.
        // Closing the Scope (try-with-resources) restores the previous context.
        Span prepareHelloSpan = tracer.spanBuilder("prepare-hello").startSpan();
        try (Scope prepareScope = prepareHelloSpan.makeCurrent()) {
            String hello = "hello";

            // This span becomes a child of prepare-hello, the current span.
            Span processHelloSpan = tracer.spanBuilder("process-hello").startSpan();
            try (Scope processScope = processHelloSpan.makeCurrent()) {
                hello = hello.toUpperCase();
            } finally {
                // Always end spans, or they will not be exported.
                processHelloSpan.end();
            }
            return hello;
        } finally {
            prepareHelloSpan.end();
        }
    }
}
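When you later invoke the explicit-trace endpoint (see below), the Collector log should show the prepare-hello and process-hello spans in addition to the implicit REST server span, with process-hello reported as a child of prepare-hello.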

Build and Deploy the Quickstart

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type the following command to build the quickstart.

    $ mvn clean package
  4. Type the following command to deploy the quickstart.

    $ mvn wildfly:deploy

This deploys the opentelemetry-tracing/target/opentelemetry-tracing.war to the running instance of the server.

You should see a message in the server log indicating that the archive deployed successfully.

Access the quickstart application

You can access the application in your browser at http://localhost:8080/opentelemetry-tracing/implicit-trace or http://localhost:8080/opentelemetry-tracing/explicit-trace. You can also access it from the command line:

$ curl http://localhost:8080/opentelemetry-tracing/implicit-trace
$ curl http://localhost:8080/opentelemetry-tracing/explicit-trace

Either endpoint should return a simple document:

hello

View the traces

You can view the traces by looking at the Collector’s log. You should see something like this:

otel-collector_1  | 2023-12-13T21:05:28.002Z    info    TracesExporter  {"kind": "exporter", "data_type": "traces", "name": "logging", "resource spans": 1, "spans": 1}
otel-collector_1  | 2023-12-13T21:05:28.002Z    info    ResourceSpans #0
otel-collector_1  | Resource SchemaURL: https://opentelemetry.io/schemas/1.20.0
otel-collector_1  | Resource attributes:
otel-collector_1  |      -> service.name: Str(opentelemetry-tracing.war)
otel-collector_1  |      -> telemetry.sdk.language: Str(java)
otel-collector_1  |      -> telemetry.sdk.name: Str(opentelemetry)
otel-collector_1  |      -> telemetry.sdk.version: Str(1.29.0)
otel-collector_1  | ScopeSpans #0
otel-collector_1  | ScopeSpans SchemaURL:
otel-collector_1  | InstrumentationScope io.smallrye.opentelemetry 2.6.0
otel-collector_1  | Span #0
otel-collector_1  |     Trace ID       : c761e8fadec36d222adac36dcff1f4b1
otel-collector_1  |     Parent ID      :
otel-collector_1  |     ID             : 08f93dd25f75b5cd
otel-collector_1  |     Name           : GET /opentelemetry-tracing/implicit-trace
otel-collector_1  |     Kind           : Server
otel-collector_1  |     Start time     : 2023-12-13 21:05:20.560054393 +0000 UTC
otel-collector_1  |     End time       : 2023-12-13 21:05:20.621635685 +0000 UTC
otel-collector_1  |     Status code    : Unset
otel-collector_1  |     Status message :
otel-collector_1  | Attributes:
otel-collector_1  |      -> net.host.port: Int(8080)
otel-collector_1  |      -> http.scheme: Str(http)
otel-collector_1  |      -> http.method: Str(GET)
otel-collector_1  |      -> http.status_code: Int(200)
otel-collector_1  |      -> net.transport: Str(ip_tcp)
otel-collector_1  |      -> user_agent.original: Str(curl/8.2.1)
otel-collector_1  |      -> net.host.name: Str(localhost)
otel-collector_1  |      -> http.route: Str(/opentelemetry-tracing/implicit-trace)
otel-collector_1  |      -> http.target: Str(/opentelemetry-tracing/implicit-trace)
otel-collector_1  |      -> net.sock.host.addr: Str(127.0.0.1)
otel-collector_1  |     {"kind": "exporter", "data_type": "traces", "name": "logging"}

Run the Integration Tests

This quickstart includes integration tests, which are located under the src/test/ directory. The integration tests verify that the quickstart runs correctly when deployed on the server.

Follow these steps to run the integration tests.

  1. Make sure WildFly server is started.

  2. Make sure the quickstart is deployed.

  3. Type the following command to run the verify goal with the integration-testing profile activated.

    $ mvn verify -Pintegration-testing 

Undeploy the Quickstart

When you are finished testing the quickstart, follow these steps to undeploy the archive.

  1. Make sure WildFly server is started.

  2. Open a terminal and navigate to the root directory of this quickstart.

  3. Type this command to undeploy the archive:

    $ mvn wildfly:undeploy

Restore the WildFly Standalone Server Configuration

You can restore the original server configuration using either of the following methods.

Restore the WildFly Standalone Server Configuration by Running the JBoss CLI Script

  1. Start the WildFly server as described above.

  2. Open a new terminal, navigate to the root directory of this quickstart, and run the following command, replacing WILDFLY_HOME with the path to your server:

    $ WILDFLY_HOME/bin/jboss-cli.sh --connect --file=restore-configuration.cli
    Note
    For Windows, use the WILDFLY_HOME\bin\jboss-cli.bat script.

Restore the WildFly Standalone Server Configuration Manually

When you have completed testing the quickstart, you can restore the original server configuration by manually restoring the backup copy of the configuration file.

  1. If it is running, stop the WildFly server.

  2. Replace the WILDFLY_HOME/standalone/configuration/standalone.xml file with the backup copy of the file.

Building and running the quickstart application with provisioned WildFly server

Instead of using a standard WildFly server distribution, you can alternatively provision a WildFly server to deploy and run the quickstart. The functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>provisioned-server</id>
            <activation>
                <activeByDefault>true</activeByDefault>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                            </discover-provisioning-info>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

When built, the provisioned WildFly server can be found in the target/server directory. Its usage is similar to a standard server distribution, with the simplification that there is never a need to specify which server configuration to start.
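As an example of that similarity, once built you can also start the provisioned server directly, just as you would a standard distribution (on Windows, use the corresponding .bat script):

$ ./target/server/bin/standalone.sh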

Follow these steps to run the quickstart using the provisioned server.

Procedure
  1. Make sure the server is provisioned.

    $ mvn clean package
  2. Start the WildFly provisioned server, using the WildFly Maven Plugin start goal.

    $ mvn wildfly:start 
  3. Type the following command to run the integration tests.

    $ mvn verify -Pintegration-testing 
  4. Shut down the WildFly provisioned server.

    $ mvn wildfly:shutdown

Building and Running the quickstart application in a bootable JAR

You can use the WildFly Maven Plugin to build a WildFly bootable JAR to run this quickstart.

The quickstart pom.xml file contains a Maven profile named bootable-jar, which activates the bootable JAR packaging when provisioning WildFly, through the <bootable-jar>true</bootable-jar> configuration element:

      <profile>
          <id>bootable-jar</id>
          <activation>
              <activeByDefault>true</activeByDefault>
          </activation>
          <build>
              <plugins>
                  <plugin>
                      <groupId>org.wildfly.plugins</groupId>
                      <artifactId>wildfly-maven-plugin</artifactId>
                      <configuration>
                          <discover-provisioning-info>
                              <version>${version.server}</version>
                          </discover-provisioning-info>
                          <bootable-jar>true</bootable-jar>
                          <add-ons>...</add-ons>
                      </configuration>
                      <executions>
                          <execution>
                              <goals>
                                  <goal>package</goal>
                              </goals>
                          </execution>
                      </executions>
                  </plugin>
                  ...
              </plugins>
          </build>
      </profile>

The bootable-jar profile is active by default. When built, the WildFly bootable JAR file is named opentelemetry-tracing-bootable.jar and may be found in the target directory.

Procedure
  1. Ensure the bootable jar is built.

    $ mvn clean package
  2. Start the WildFly bootable jar using the WildFly Maven Plugin start-jar goal.

    $ mvn wildfly:start-jar
    Note

    You may also start the bootable jar without Maven, using the java command.

    $ java -jar target/opentelemetry-tracing-bootable.jar
  3. Run the integration tests using the verify goal, with the integration-testing profile activated.

    $ mvn verify -Pintegration-testing
  4. Shut down the WildFly bootable jar using the WildFly Maven Plugin shutdown goal.

    $ mvn wildfly:shutdown

Building and running the quickstart application with OpenShift

Build the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

On OpenShift, the S2I build with Apache Maven uses an openshift Maven profile to provision a WildFly server, then deploy and run the quickstart in the OpenShift environment.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the OpenShift environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with WildFly for OpenShift and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to WildFly for OpenShift or WildFly for OpenShift Online using Helm Charts.

Prerequisites

  • You must be logged in to OpenShift and have an oc client installed to connect to your cluster

  • Helm must be installed to deploy the backend on OpenShift.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Install OpenTelemetry Collector on OpenShift

The functionality of this quickstart depends on a running instance of the OpenTelemetry Collector.

To deploy and configure the OpenTelemetry Collector, you will need to apply a set of configurations to your OpenShift cluster, to configure the OpenTelemetry Collector as well as any external routes needed:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
data:
  collector.yml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
        verbosity: detailed
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
        - name: otelcol
          args:
            - --config=/conf/collector.yml
          image: otel/opentelemetry-collector:0.89.0
          volumeMounts:
            - mountPath: /conf
              name: collector-config
      volumes:
        - configMap:
            items:
              - key: collector.yml
                path: collector.yml
            name: collector-config
          name: collector-config
---
apiVersion: v1
kind: Service
metadata:
  name: opentelemetrycollector
spec:
  ports:
    - name: otlp-grpc
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: otlp-http
      port: 4318
      protocol: TCP
      targetPort: 4318
  selector:
    app.kubernetes.io/name: opentelemetrycollector
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-grpc
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-grpc
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-http
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-http
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None

To make things simpler, these resources are collected in charts/opentelemetry-collector-openshift.yaml. To apply them, run the following command in your terminal:

$ oc apply -f charts/opentelemetry-collector-openshift.yaml
Note

When done with the quickstart, the oc delete -f charts/opentelemetry-collector-openshift.yaml command may be used to revert the applied changes.

Deploy the WildFly Source-to-Image (S2I) Quickstart to OpenShift with Helm Charts

Log in to your OpenShift instance using the oc login command. The backend will be built and deployed on OpenShift with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following command:

$ helm install opentelemetry-tracing -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s 
NAME: opentelemetry-tracing
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

oc get deployment opentelemetry-tracing

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: opentelemetry-tracing
deploy:
  replicas: 1
  env:
    - name: OTEL_COLLECTOR_HOST
      value: "opentelemetrycollector"

This will create a new deployment on OpenShift and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

Get the URL of the route to the deployment.

$ oc get route opentelemetry-tracing -o jsonpath="{.spec.host}"

Access the application in your web browser using the displayed URL.
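You can also check an endpoint from the command line. Assuming the application is served from the root context on OpenShift, a request like the following should return hello:

$ curl https://$(oc get route opentelemetry-tracing --template='{{ .spec.host }}')/implicit-trace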

Run the Integration Tests with OpenShift

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on OpenShift.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on OpenShift before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=https://$(oc get route opentelemetry-tracing --template='{{ .spec.host }}') 
Note

The tests use SSL to connect to the quickstart running on OpenShift, so the certificates must be trusted by the machine from which the tests are run.

Undeploy the WildFly Source-to-Image (S2I) Quickstart from OpenShift with Helm Charts

$ helm uninstall opentelemetry-tracing

Building and running the quickstart application with Kubernetes

Build the WildFly Quickstart to Kubernetes with Helm Charts

For Kubernetes, the build with Apache Maven uses the same openshift Maven profile to provision a WildFly server suitable for running on Kubernetes.

The server provisioning functionality is provided by the WildFly Maven Plugin, and you may find its configuration in the quickstart pom.xml:

        <profile>
            <id>openshift</id>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.wildfly.plugins</groupId>
                        <artifactId>wildfly-maven-plugin</artifactId>
                        <configuration>
                            <discover-provisioning-info>
                                <version>${version.server}</version>
                                <context>cloud</context>
                            </discover-provisioning-info>
                            <add-ons>...</add-ons>
                        </configuration>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>package</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    ...
                </plugins>
            </build>
        </profile>

You may note that, unlike the provisioned-server profile, it uses the cloud context, which enables a configuration tuned for the Kubernetes environment.

The plugin uses WildFly Glow to discover the feature packs and layers required to run the application, and provisions a server containing those layers.

If you get an error or the server is missing some functionality which cannot be auto-discovered, you can download the WildFly Glow CLI and run the following command to see more information about what add-ons are available:

wildfly-glow show-add-ons

Getting Started with Kubernetes and Helm Charts

This section contains the basic instructions to build and deploy this quickstart to Kubernetes using Helm Charts.

Install Kubernetes

In this example we are using Minikube as our Kubernetes provider. See the Minikube Getting Started guide for how to install it. After installing it, we start it with 4GB of memory.

minikube start --memory='4gb'

The above command should work if you have Docker installed on your machine. If you are using Podman instead of Docker, you will also need to pass in --driver=podman, as covered in the Minikube documentation.

Once Minikube has started, we need to enable its registry since that is where we will push the image needed to deploy the quickstart, and where we will tell the Helm charts to download it from.

minikube addons enable registry

In order to push images to the registry, we need to make it accessible from outside Kubernetes. How to do this depends on your operating system. All of the examples below expose it at localhost:5000.

# On Mac:
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000"

# On Linux:
kubectl port-forward --namespace kube-system service/registry 5000:80 &

# On Windows:
kubectl port-forward --namespace kube-system service/registry 5000:80
docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:host.docker.internal:5000"

Prerequisites

  • Helm must be installed to deploy the backend on Kubernetes.

Once you have installed Helm, you need to add the repository that provides Helm Charts for WildFly.

$ helm repo add wildfly https://docs.wildfly.org/wildfly-charts/
"wildfly" has been added to your repositories
$ helm search repo wildfly
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
wildfly/wildfly         ...             ...            Build and Deploy WildFly applications on OpenShift
wildfly/wildfly-common  ...             ...            A library chart for WildFly-based applications

Install OpenTelemetry Collector on Kubernetes

The functionality of this quickstart depends on a running instance of the OpenTelemetry Collector.

To deploy and configure the OpenTelemetry Collector, you will need to apply a set of configurations to your Kubernetes cluster, to configure the OpenTelemetry Collector as well as any external routes needed:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
data:
  collector.yml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
        verbosity: detailed
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
        - name: otelcol
          args:
            - --config=/conf/collector.yml
          image: otel/opentelemetry-collector:0.89.0
          volumeMounts:
            - mountPath: /conf
              name: collector-config
      volumes:
        - configMap:
            items:
              - key: collector.yml
                path: collector.yml
            name: collector-config
          name: collector-config
---
apiVersion: v1
kind: Service
metadata:
  name: opentelemetrycollector
spec:
  ports:
    - name: otlp-grpc
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: otlp-http
      port: 4318
      protocol: TCP
      targetPort: 4318
  selector:
    app.kubernetes.io/name: opentelemetrycollector
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-grpc
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-grpc
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: otelcol-otlp-http
  labels:
    app.kubernetes.io/name: microprofile
spec:
  port:
    targetPort: otlp-http
  to:
    kind: Service
    name: opentelemetrycollector
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None

To make things simpler, these resources are collected in charts/opentelemetry-collector-kubernetes.yaml. To apply them, run the following command in your terminal:

$ kubectl apply -f charts/opentelemetry-collector-kubernetes.yaml
Note

When done with the quickstart, the kubectl delete -f charts/opentelemetry-collector-kubernetes.yaml command may be used to revert the applied changes.

Deploy the WildFly Source-to-Image (S2I) Quickstart to Kubernetes with Helm Charts

The backend will be built and deployed on Kubernetes with a Helm Chart for WildFly.

Navigate to the root directory of this quickstart and run the following commands:

mvn -Popenshift package wildfly:image

This will use the openshift Maven profile we saw earlier to build the application, and create a Docker image containing the WildFly server with the application deployed. The name of the image will be opentelemetry-tracing.

Next we need to tag the image and make it available to Kubernetes. You could push it to a registry like quay.io; in this case we tag it as localhost:5000/opentelemetry-tracing:latest and push it to the internal registry of our Kubernetes instance:

# Tag the image
docker tag opentelemetry-tracing localhost:5000/opentelemetry-tracing:latest
# Push the image to the registry
docker push localhost:5000/opentelemetry-tracing:latest

In the below call to helm install which deploys our application to Kubernetes, we are passing in some extra arguments to tweak the Helm build:

  • --set build.enabled=false - This turns off the S2I build for the Helm chart since Kubernetes, unlike OpenShift, does not have S2I. Instead, we are providing the image to use.

  • --set deploy.route.enabled=false - This disables route creation normally performed by the Helm chart. On Kubernetes we will use port-forwards instead to access our application, since routes are an OpenShift specific concept and thus not available on Kubernetes.

  • --set image.name="localhost:5000/opentelemetry-tracing" - This tells the Helm chart to use the image we built, tagged and pushed to Kubernetes' internal registry above.

$ helm install opentelemetry-tracing -f charts/helm.yaml wildfly/wildfly --wait --timeout=10m0s --set build.enabled=false --set deploy.route.enabled=false --set image.name="localhost:5000/opentelemetry-tracing"
NAME: opentelemetry-tracing
...
STATUS: deployed
REVISION: 1

This command will return once the application has successfully deployed. In case of a timeout, you can check the status of the application with the following command in another terminal:

kubectl get deployment opentelemetry-tracing

The Helm Chart for this quickstart contains all the information to build an image from the source code using S2I on Java 17:

build:
  uri: https://github.com/wildfly/quickstart.git
  ref: main
  contextDir: opentelemetry-tracing
deploy:
  replicas: 1
  env:
    - name: OTEL_COLLECTOR_HOST
      value: "opentelemetrycollector"

This will create a new deployment on Kubernetes and deploy the application.

If you want to see all the configuration elements to customize your deployment you can use the following command:

$ helm show readme wildfly/wildfly

To be able to connect to our application running in Kubernetes from outside, we need to set up a port-forward to the opentelemetry-tracing service created for us by the Helm chart.

This service will run on port 8080, and we set up the port forward to also run on port 8080:

kubectl port-forward service/opentelemetry-tracing 8080:8080

The server can now be accessed via http://localhost:8080 from outside Kubernetes. Note that the command to create the port-forward will not return, so it is easiest to run this in a separate terminal.
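With the port-forward in place, and assuming this image serves the application from the root context, you should be able to reach the endpoints, for example:

$ curl http://localhost:8080/implicit-trace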

Run the Integration Tests with Kubernetes

The integration tests included with this quickstart, which verify that the quickstart runs correctly, may also be run with the quickstart running on Kubernetes.

Note

The integration tests expect a deployed application, so make sure you have deployed the quickstart on Kubernetes before you begin.

Run the integration tests using the following command to run the verify goal with the integration-testing profile activated and the proper URL:

$ mvn verify -Pintegration-testing -Dserver.host=http://localhost:8080 

Undeploy the WildFly Source-to-Image (S2I) Quickstart from Kubernetes with Helm Charts

$ helm uninstall opentelemetry-tracing

To stop the port forward you created earlier, press Ctrl+C in the terminal where the kubectl port-forward command is running.

Conclusion

OpenTelemetry Tracing provides the mechanisms for your application to participate in distributed tracing with minimal effort on the application side. Jakarta REST resources are traced by default, and the specification also allows you to create individual spans directly via CDI injection of an io.opentelemetry.api.trace.Tracer.