This receiver allows you to use Smart Agent monitors in the OpenTelemetry Collector.
Monitors collect metrics from the host system and services. They are configured under the `monitors` list in the agent configuration. For application-specific monitors, you can define discovery rules in your monitor configuration. A separate monitor instance is created for each discovered instance of an application that matches a discovery rule. See Auto Discovery for more information.
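As an illustrative sketch, a discovery rule in a Smart Agent monitor configuration might look like the following (the `collectd/redis` monitor type and the rule shown are placeholder examples):

```yaml
monitors:
  - type: collectd/redis
    # Create a monitor instance for each discovered container whose
    # image name matches "redis" and that exposes port 6379.
    discoveryRule: container_image =~ "redis" && port == 6379
```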
Many of the monitors are built around collectd, an open source third-party metric collection daemon, and use it to collect metrics. Other monitors do not use collectd; however, both types are configured in the same way.
For a list of supported monitors and their configurations, see Monitor Config.
The agent is primarily intended to monitor services and applications running on the same host as the agent, in keeping with the collectd model. The main issue with monitoring services on other hosts is that the host dimension that collectd sets on all metrics is currently set to the hostname of the machine the agent runs on. This gives everything a consistent host dimension, so metrics can be matched to a specific machine during metric analysis.
See the migration guide for more information about migrating from the Smart Agent to the Splunk Distribution of the OpenTelemetry Collector.
Beta: All Smart Agent monitors are supported by Splunk. Configuration and behavior may change without notice.
For each Smart Agent monitor you want to add to the Collector, add a `smartagent` receiver configuration block. Once configured in the Collector, each `smartagent` receiver acts as a drop-in replacement for its corresponding Smart Agent monitor.
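As a sketch of the mapping (the `collectd/redis` monitor and the host and port values are placeholders), a Smart Agent monitor entry translates to a receiver block like this:

```yaml
# Smart Agent (agent.yaml):
# monitors:
#   - type: collectd/redis
#     host: myredishost
#     port: 6379

# Collector equivalent:
receivers:
  smartagent/redis:
    type: collectd/redis
    host: myredishost
    port: 6379
```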
- Put any Smart Agent or collectd configuration into the global Smart Agent Extension section of your Collector configuration.
- Instead of using `discoveryRule`, use the Collector's Receiver Creator and Observer extensions.
- To replace or modify metrics, use Collector processors.
- If you have a monitor that sends events (for example, `kubernetes-events`, `nagios`, `processlist`, and some `telegraf` monitors like `telegraf/exec`), add it to a `logs` pipeline that uses a SignalFx exporter. It's recommended, and in the case of the `processlist` monitor required, to put a Resource Detection processor in the same pipeline, which adds host information and other useful dimensions to the events. An example is provided below.
- If you have a monitor that updates dimension properties or tags (for example, `ecs-metadata`, `heroku-metadata`, `kubernetes-cluster`, `openshift-cluster`, `postgresql`, or `sql`), put the name of your SignalFx exporter in the `dimensionClients` field of its `smartagent` receiver configuration block. If you don't specify any exporters in this array field, the receiver attempts to use the Collector pipeline to which it's connected. If the next element of the pipeline isn't compatible with updating dimensions, and if you configured a single SignalFx exporter, the receiver uses that SignalFx exporter. If you don't require dimension updates, you can specify the empty array `[]` to disable it.
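As a sketch of replacing a `discoveryRule` with the Receiver Creator and an observer (the observer choice, the rule, and the backtick endpoint expressions below are illustrative assumptions, not a definitive configuration):

```yaml
extensions:
  docker_observer:
receivers:
  receiver_creator:
    watch_observers: [docker_observer]
    receivers:
      smartagent/postgresql:
        # Instantiate the receiver for each discovered container
        # endpoint matching this rule.
        rule: type == "container" && port == 5432
        config:
          type: postgresql
          host: '`endpoint`'
          port: '`port`'
service:
  extensions: [docker_observer]
```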
Example:
```yaml
receivers:
  smartagent/postgresql:
    type: postgresql
    host: mypostgresinstance
    port: 5432
    dimensionClients:
      - signalfx # references the SignalFx exporter configured below
  smartagent/processlist:
    type: processlist
  smartagent/kafka:
    type: collectd/kafka
    host: mykafkabroker
    port: 7099
    clusterName: mykafkacluster
    intervalSeconds: 5
  otlp: # referenced by the traces pipeline below
    protocols:
      grpc:
processors:
  resourcedetection:
    detectors:
      - system
exporters:
  signalfx:
    access_token: "${SIGNALFX_ACCESS_TOKEN}"
    realm: us1
  sapm:
    access_token: "${SIGNALFX_ACCESS_TOKEN}"
    endpoint: https://ingest.us1.signalfx.com/v2/trace
service:
  pipelines:
    metrics:
      receivers:
        - smartagent/postgresql
        - smartagent/kafka
      processors:
        - resourcedetection
      exporters:
        - signalfx
    logs:
      receivers:
        - smartagent/processlist
      processors:
        - resourcedetection
      exporters:
        - signalfx
    traces:
      receivers:
        - otlp
      processors:
        - resourcedetection
      exporters:
        - sapm
```