chore(NA): bump version to 7.17.27 #202808
Merged
checks-reporter / X-Pack Chrome Functional tests / Group 12
succeeded
Dec 4, 2024 in 26m 42s
node scripts/functional_tests --bail --kibana-install-dir /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana-build-xpack --include-tag ciGroup12
[truncated]
pendent plugin setup complete - Starting ManifestTask
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
│ proc [kibana] log [18:33:42.124] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
│ proc [kibana] log [18:33:42.171] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.27-snapshot-template] for index patterns [.kibana-event-log-7.17.27-snapshot-*]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.27-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.27-snapshot-template], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.27-snapshot-000001]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.27-snapshot-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
│ proc [kibana] log [18:33:43.089] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
│ proc [kibana] log [18:33:43.107] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
│ proc [kibana] log [18:33:43.874] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.27_001/qb4FxBalSzG-KpTmBb5bRg] update_mapping [_doc]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.12.04-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
│ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.12.04-000001], backing indices [], and aliases []
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.12.04-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.12.04-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.12.04-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [ilm-history-ilm-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.12.04-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [ilm-history-ilm-policy]
│ proc [kibana] log [18:33:49.612] [info][status] Kibana is now available (was degraded)
│ info Only running suites which are compatible with ES version 7.17.27
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup12' ]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
│ info Only running suites which are compatible with ES version 7.17.27
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup12' ]
│ info Starting tests
│ warn debug logs are being captured, only error logs will be written to the console
│
└-: task_manager
└-> "before all" hook: beforeTestSuite.trigger in "task_manager"
└-: health
└-> "before all" hook: beforeTestSuite.trigger for "should return basic configuration of task manager"
└-> should return basic configuration of task manager
└-> "before each" hook: global before each for "should return basic configuration of task manager"
└- ✓ pass (18ms)
└-> should return the task manager workload
└-> "before each" hook: global before each for "should return the task manager workload"
└- ✓ pass (5.1s)
└-> should return a breakdown of idleTasks in the task manager workload
└-> "before each" hook: global before each for "should return a breakdown of idleTasks in the task manager workload"
└- ✓ pass (11ms)
└-> should return an estimation of task manager capacity
└-> "before each" hook: global before each for "should return an estimation of task manager capacity"
└- ✓ pass (11ms)
└-> should return the task manager runtime stats
└-> "before each" hook: global before each for "should return the task manager runtime stats"
└- ✓ pass (43ms)
└-> "after all" hook: afterTestSuite.trigger for "should return the task manager runtime stats"
└-: scheduling and running tasks
└-> "before all" hook: beforeTestSuite.trigger for "should support middleware"
└-> should support middleware
└-> "before each" hook: global before each for "should support middleware"
└-> "before each" hook for "should support middleware"
└-> "before each" hook for "should support middleware"
└- ✓ pass (2.1s)
└-> should remove non-recurring tasks after they complete
└-> "before each" hook: global before each for "should remove non-recurring tasks after they complete"
└-> "before each" hook for "should remove non-recurring tasks after they complete"
└-> "before each" hook for "should remove non-recurring tasks after they complete"
└- ✓ pass (2.1s)
└-> should use a given ID as the task document ID
└-> "before each" hook: global before each for "should use a given ID as the task document ID"
└-> "before each" hook for "should use a given ID as the task document ID"
└-> "before each" hook for "should use a given ID as the task document ID"
└- ✓ pass (12ms)
└-> should allow a task with a given ID to be scheduled multiple times
└-> "before each" hook: global before each for "should allow a task with a given ID to be scheduled multiple times"
└-> "before each" hook for "should allow a task with a given ID to be scheduled multiple times"
└-> "before each" hook for "should allow a task with a given ID to be scheduled multiple times"
└- ✓ pass (23ms)
└-> should reschedule if task errors
└-> "before each" hook: global before each for "should reschedule if task errors"
└-> "before each" hook for "should reschedule if task errors"
└-> "before each" hook for "should reschedule if task errors"
└- ✓ pass (571ms)
└-> should schedule the retry of recurring tasks to run at the next schedule when they time out
└-> "before each" hook: global before each for "should schedule the retry of recurring tasks to run at the next schedule when they time out"
└-> "before each" hook for "should schedule the retry of recurring tasks to run at the next schedule when they time out"
└-> "before each" hook for "should schedule the retry of recurring tasks to run at the next schedule when they time out"
└- ✓ pass (2.6s)
└-> should reschedule if task returns runAt
└-> "before each" hook: global before each for "should reschedule if task returns runAt"
└-> "before each" hook for "should reschedule if task returns runAt"
└-> "before each" hook for "should reschedule if task returns runAt"
└- ✓ pass (3.1s)
└-> should reschedule if task has an interval
└-> "before each" hook: global before each for "should reschedule if task has an interval"
└-> "before each" hook for "should reschedule if task has an interval"
└-> "before each" hook for "should reschedule if task has an interval"
└- ✓ pass (2.1s)
└-> should support the deprecated interval field
└-> "before each" hook: global before each for "should support the deprecated interval field"
└-> "before each" hook for "should support the deprecated interval field"
└-> "before each" hook for "should support the deprecated interval field"
└- ✓ pass (2.1s)
└-> should return a task run result when asked to run a task now
└-> "before each" hook: global before each for "should return a task run result when asked to run a task now"
└-> "before each" hook for "should return a task run result when asked to run a task now"
└-> "before each" hook for "should return a task run result when asked to run a task now"
└- ✓ pass (3.3s)
└-> should prioritize tasks which are called using runNow
└-> "before each" hook: global before each for "should prioritize tasks which are called using runNow"
└-> "before each" hook for "should prioritize tasks which are called using runNow"
└-> "before each" hook for "should prioritize tasks which are called using runNow"
└- ✓ pass (19.1s)
└-> should only run as many instances of a task as its maxConcurrency will allow
└-> "before each" hook: global before each for "should only run as many instances of a task as its maxConcurrency will allow"
└-> "before each" hook for "should only run as many instances of a task as its maxConcurrency will allow"
└-> "before each" hook for "should only run as many instances of a task as its maxConcurrency will allow"
└- ✓ pass (7.5s)
└-> should return a task run error result when RunNow is called at a time that would cause the task to exceed its maxConcurrency
└-> "before each" hook: global before each for "should return a task run error result when RunNow is called at a time that would cause the task to exceed its maxConcurrency"
└-> "before each" hook for "should return a task run error result when RunNow is called at a time that would cause the task to exceed its maxConcurrency"
└-> "before each" hook for "should return a task run error result when RunNow is called at a time that would cause the task to exceed its maxConcurrency"
└- ✓ pass (5.7s)
└-> should return a task run error result when running a task now fails
└-> "before each" hook: global before each for "should return a task run error result when running a task now fails"
└-> "before each" hook for "should return a task run error result when running a task now fails"
└-> "before each" hook for "should return a task run error result when running a task now fails"
└- ✓ pass (3.0s)
└-> should increment attempts when task fails on markAsRunning
└-> "before each" hook: global before each for "should increment attempts when task fails on markAsRunning"
└-> "before each" hook for "should increment attempts when task fails on markAsRunning"
└-> "before each" hook for "should increment attempts when task fails on markAsRunning"
└- ✓ pass (10.1s)
└-> should return a task run error result when trying to run a non-existent task
└-> "before each" hook: global before each for "should return a task run error result when trying to run a non-existent task"
└-> "before each" hook for "should return a task run error result when trying to run a non-existent task"
└-> "before each" hook for "should return a task run error result when trying to run a non-existent task"
└- ✓ pass (17ms)
└-> should return a task run error result when trying to run a task now which is already running
└-> "before each" hook: global before each for "should return a task run error result when trying to run a task now which is already running"
└-> "before each" hook for "should return a task run error result when trying to run a task now which is already running"
└-> "before each" hook for "should return a task run error result when trying to run a task now which is already running"
└- ✓ pass (5.3s)
└-> should allow a failed task to be rerun using runNow
└-> "before each" hook: global before each for "should allow a failed task to be rerun using runNow"
└-> "before each" hook for "should allow a failed task to be rerun using runNow"
└-> "before each" hook for "should allow a failed task to be rerun using runNow"
└- ✓ pass (4.9s)
└-> should run tasks in parallel, allowing for long running tasks along side faster tasks
└-> "before each" hook: global before each for "should run tasks in parallel, allowing for long running tasks along side faster tasks"
└-> "before each" hook for "should run tasks in parallel, allowing for long running tasks along side faster tasks"
└-> "before each" hook for "should run tasks in parallel, allowing for long running tasks along side faster tasks"
└- ✓ pass (9.3s)
└-> should mark non-recurring task as failed if task is still running but maxAttempts has been reached
└-> "before each" hook: global before each for "should mark non-recurring task as failed if task is still running but maxAttempts has been reached"
└-> "before each" hook for "should mark non-recurring task as failed if task is still running but maxAttempts has been reached"
└-> "before each" hook for "should mark non-recurring task as failed if task is still running but maxAttempts has been reached"
└- ✓ pass (20.0s)
└-> should continue claiming recurring task even if maxAttempts has been reached
└-> "before each" hook: global before each for "should continue claiming recurring task even if maxAttempts has been reached"
└-> "before each" hook for "should continue claiming recurring task even if maxAttempts has been reached"
└-> "before each" hook for "should continue claiming recurring task even if maxAttempts has been reached"
└- ✓ pass (14.9s)
└-> "after all" hook for "should continue claiming recurring task even if maxAttempts has been reached"
└-> "after all" hook: afterTestSuite.trigger for "should continue claiming recurring task even if maxAttempts has been reached"
└-: removed task types
└-> "before all" hook: beforeTestSuite.trigger for "should successfully schedule registered tasks and mark unregistered tasks as unrecognized"
└-> "before all" hook for "should successfully schedule registered tasks and mark unregistered tasks as unrecognized"
└-> should successfully schedule registered tasks and mark unregistered tasks as unrecognized
└-> "before each" hook: global before each for "should successfully schedule registered tasks and mark unregistered tasks as unrecognized"
└- ✓ pass (15.9s)
└-> "after all" hook for "should successfully schedule registered tasks and mark unregistered tasks as unrecognized"
└-> "after all" hook: afterTestSuite.trigger for "should successfully schedule registered tasks and mark unregistered tasks as unrecognized"
└-> "after all" hook: afterTestSuite.trigger in "task_manager"
│
│27 passing (2.0m)
│
│ proc [kibana] log [18:36:26.615] [info][plugins-system][standard] Stopping all plugins.
│ proc [kibana] log [18:36:26.616] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
│ proc [kibana] log [18:36:26.619] [info][eventLog][plugins] event logged: {"@timestamp":"2024-12-04T18:36:26.618Z","event":{"provider":"eventLog","action":"stopping"},"message":"eventLog stopping","ecs":{"version":"1.8.0"},"kibana":{"server_uuid":"5b2de169-2785-441b-ae8c-186a1936b17d","version":"7.17.27"}}
│ info [kibana] exited with null after 183.0 seconds
│ info [es] stopping node ftr
│ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
│ info [o.e.n.Node] [ftr] stopping ...
│ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
│ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
│ info [o.e.n.Node] [ftr] stopped
│ info [o.e.n.Node] [ftr] closing ...
│ info [o.e.n.Node] [ftr] closed
│ info [es] stopped
│ info [es] no debug files found, assuming es did not write any
│ info [es] cleanup complete
--- [5/5] Running x-pack/test/saved_object_tagging/api_integration/tagging_api/config.ts
info Installing from snapshot
│ info version: 7.17.27
│ info install path: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr
│ info license: trial
│ info Downloading snapshot manifest from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.27/archives/20241204-170352_2dc764de/manifest.json
│ info verifying cache of https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.27/archives/20241204-170352_2dc764de/elasticsearch-7.17.27-SNAPSHOT-linux-x86_64.tar.gz
│ info etags match, reusing cache from 2024-12-04T18:12:34.180Z
│ info extracting /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/cache/elasticsearch-7.17.27-SNAPSHOT-linux-x86_64.tar.gz
│ info extracted to /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr
│ info created /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/ES_TMPDIR
│ info setting secure setting bootstrap.password to changeme
info [es] starting node ftr on port 9220
info Starting
│ERROR Dec 04, 2024 6:36:36 PM sun.util.locale.provider.LocaleProviderAdapter <clinit>
│ WARNING: COMPAT locale provider will be removed in a future release
│
│ info [o.e.n.Node] [ftr] version[7.17.27-SNAPSHOT], pid[7070], build[default/tar/2dc764dee61bf435614bcb4d6825752187f10d99/2024-12-04T16:58:54.909862393Z], OS[Linux/5.15.0-1071-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/22.0.2/22.0.2+9-70]
│ info [o.e.n.Node] [ftr] JVM home [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/jdk], using bundled JDK [true]
│ info [o.e.n.Node] [ftr] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/ES_TMPDIR, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:+UnlockDiagnosticVMOptions, -XX:G1NumCollectionsKeepPinned=10000000, -Xms1536m, -Xmx1536m, -XX:MaxDirectMemorySize=805306368, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr, -Des.path.conf=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
│ info [o.e.n.Node] [ftr] version [7.17.27-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
│ info [o.e.p.PluginsService] [ftr] loaded module [aggs-matrix-stats]
│ info [o.e.p.PluginsService] [ftr] loaded module [analysis-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [constant-keyword]
│ info [o.e.p.PluginsService] [ftr] loaded module [frozen-indices]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-geoip]
│ info [o.e.p.PluginsService] [ftr] loaded module [ingest-user-agent]
│ info [o.e.p.PluginsService] [ftr] loaded module [kibana]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-expression]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-mustache]
│ info [o.e.p.PluginsService] [ftr] loaded module [lang-painless]
│ info [o.e.p.PluginsService] [ftr] loaded module [legacy-geo]
│ info [o.e.p.PluginsService] [ftr] loaded module [mapper-extras]
│ info [o.e.p.PluginsService] [ftr] loaded module [mapper-version]
│ info [o.e.p.PluginsService] [ftr] loaded module [parent-join]
│ info [o.e.p.PluginsService] [ftr] loaded module [percolator]
│ info [o.e.p.PluginsService] [ftr] loaded module [rank-eval]
│ info [o.e.p.PluginsService] [ftr] loaded module [reindex]
│ info [o.e.p.PluginsService] [ftr] loaded module [repositories-metering-api]
│ info [o.e.p.PluginsService] [ftr] loaded module [repository-encrypted]
│ info [o.e.p.PluginsService] [ftr] loaded module [repository-url]
│ info [o.e.p.PluginsService] [ftr] loaded module [runtime-fields-common]
│ info [o.e.p.PluginsService] [ftr] loaded module [search-business-rules]
│ info [o.e.p.PluginsService] [ftr] loaded module [searchable-snapshots]
│ info [o.e.p.PluginsService] [ftr] loaded module [snapshot-repo-test-kit]
│ info [o.e.p.PluginsService] [ftr] loaded module [spatial]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-delayed-aggs]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-die-with-dignity]
│ info [o.e.p.PluginsService] [ftr] loaded module [test-error-query]
│ info [o.e.p.PluginsService] [ftr] loaded module [transform]
│ info [o.e.p.PluginsService] [ftr] loaded module [transport-netty4]
│ info [o.e.p.PluginsService] [ftr] loaded module [unsigned-long]
│ info [o.e.p.PluginsService] [ftr] loaded module [vector-tile]
│ info [o.e.p.PluginsService] [ftr] loaded module [vectors]
│ info [o.e.p.PluginsService] [ftr] loaded module [wildcard]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-aggregate-metric]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-analytics]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async-search]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-autoscaling]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ccr]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-core]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-data-streams]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-deprecation]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-enrich]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-eql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-fleet]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-graph]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-identity-provider]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ilm]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-logstash]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ml]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-monitoring]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-rollup]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-security]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-shutdown]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-sql]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-stack]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-text-structure]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-voting-only-node]
│ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-watcher]
│ info [o.e.p.PluginsService] [ftr] no plugins loaded
│ info [o.e.e.NodeEnvironment] [ftr] using [1] data paths, mounts [[/opt/local-ssd (/dev/nvme0n1)]], net usable_space [340.4gb], net total_space [368gb], types [ext4]
│ info [o.e.e.NodeEnvironment] [ftr] heap size [1.5gb], compressed ordinary object pointers [true]
│ info [o.e.n.Node] [ftr] node name [ftr], node ID [TP0_XBjhQqiaOf9O72Sv5Q], cluster name [job-kibana-default-ciGroup12-cluster-ftr], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
│ info [o.e.x.m.p.l.CppLogMessageHandler] [ftr] [controller/7239] [Main.cc@122] controller (64 bit): Version 7.17.27-SNAPSHOT (Build 46b5118e94a5bc) Copyright (c) 2024 Elasticsearch BV
│ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
│ info [o.e.x.s.a.s.FileRolesStore] [ftr] parsed [0] roles from file [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/config/roles.yml]
│ info [o.e.i.g.ConfigDatabases] [ftr] initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/config/ingest-geoip] for changes
│ info [o.e.i.g.DatabaseNodeService] [ftr] initialized database registry, using geoip-databases directory [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup12-cluster-ftr/ES_TMPDIR/geoip-databases/TP0_XBjhQqiaOf9O72Sv5Q]
│ info [o.e.t.NettyAllocator] [ftr] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
│ info [o.e.i.r.RecoverySettings] [ftr] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
│ info [o.e.d.DiscoveryModule] [ftr] using discovery type [single-node] and seed hosts providers [settings]
│ info [o.e.g.DanglingIndicesState] [ftr] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
│ info [o.e.n.Node] [ftr] initialized
│ info [o.e.n.Node] [ftr] starting ...
│ info [o.e.x.s.c.f.PersistentCache] [ftr] persistent cache index loaded
│ info [o.e.x.d.l.DeprecationIndexingComponent] [ftr] deprecation component started
│ info [o.e.t.TransportService] [ftr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-alerts-7] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-es] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-kibana] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-logstash] with version [7]
│ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-beats] with version [7]
│ info [o.e.c.c.Coordinator] [ftr] setting initial configuration to VotingConfiguration{TP0_XBjhQqiaOf9O72Sv5Q}
│ info [o.e.c.s.MasterService] [ftr] elected-as-master ([1] nodes joined)[{ftr}{TP0_XBjhQqiaOf9O72Sv5Q}{4uDLKQYMRVq9OLovsn8AIg}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{ftr}{TP0_XBjhQqiaOf9O72Sv5Q}{4uDLKQYMRVq9OLovsn8AIg}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
│ info [o.e.c.c.CoordinationState] [ftr] cluster UUID set to [R_vfB3ABTz2VVYxO5PO34w]
│ info [o.e.c.s.ClusterApplierService] [ftr] master node changed {previous [], current [{ftr}{TP0_XBjhQqiaOf9O72Sv5Q}{4uDLKQYMRVq9OLovsn8AIg}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
│ info [o.e.h.AbstractHttpServerTransport] [ftr] publish_address {127.0.0.1:9220}, bound_addresses {[::1]:9220}, {127.0.0.1:9220}
│ info [o.e.n.Node] [ftr] started
│ info [o.e.g.GatewayService] [ftr] recovered [0] indices into cluster_state
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-state] for index patterns [.ml-state*]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-stats] for index patterns [.ml-stats-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [data-streams-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-settings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [ilm-history] for index patterns [ilm-history-5*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.slm-history] for index patterns [.slm-history-5*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-settings]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [logs] for index patterns [logs-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [metrics] for index patterns [metrics-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [synthetics] for index patterns [synthetics-*-*]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ml-size-based-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [logs]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [metrics]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [synthetics]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [7-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [90-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [180-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [30-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [365-days-default]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [watch-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ilm-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [slm-history-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.deprecation-indexing-ilm-policy]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
│ info [o.e.l.LicenseService] [ftr] license [3696bfff-600b-4897-89d7-9554bc78916b] mode [trial] - valid
│ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
│ info [o.e.x.s.s.SecurityStatusChangeListener] [ftr] Active license is now [TRIAL]; Security is enabled
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [system_indices_superuser]
│ info starting [kibana] > /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana-build-xpack/bin/kibana --logging.json=false --server.port=5620 --elasticsearch.hosts=http://localhost:9220 --elasticsearch.username=kibana_system --elasticsearch.password=changeme --data.search.aggs.shardDelay.enabled=true --security.showInsecureClusterWarning=false --telemetry.banner=false --telemetry.sendUsageTo=staging --server.maxPayload=1679958 --plugin-path=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/test/common/fixtures/plugins/newsfeed --newsfeed.service.urlRoot=http://localhost:5620 --newsfeed.service.pathTemplate=/api/_newsfeed-FTS-external-service-simulators/kibana/v{VERSION}.json --logging.appenders.deprecation.type=console --logging.appenders.deprecation.layout.type=json --logging.loggers[0].name=elasticsearch.deprecation --logging.loggers[0].level=all --logging.loggers[0].appenders[0]=deprecation --status.allowAnonymous=true --server.uuid=5b2de169-2785-441b-ae8c-186a1936b17d --xpack.maps.showMapsInspectorAdapter=true --xpack.maps.preserveDrawingBuffer=true --xpack.security.encryptionKey="wuGNaIhoMpk5sO4UBxgr3NyW1sFcLgIf" --xpack.encryptedSavedObjects.encryptionKey="DkdXazszSCYexXqz4YktBGHCRkV6hyNK" --xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled=true --savedObjects.maxImportPayloadBytes=10485760 --xpack.siem.enabled=true --map.proxyElasticMapsServiceInMaps=true --xpack.security.session.idleTimeout=3600000 --telemetry.optIn=true --xpack.fleet.enabled=true --xpack.fleet.agents.pollingRequestTimeout=5000 --xpack.data_enhanced.search.sessions.enabled=true --xpack.data_enhanced.search.sessions.notTouchedTimeout=15s --xpack.data_enhanced.search.sessions.trackingInterval=5s --xpack.data_enhanced.search.sessions.cleanupInterval=5s --xpack.ruleRegistry.write.enabled=true --server.xsrf.disableProtection=true
│ proc [kibana] Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/7.17/production.html#openssl-legacy-provider
│ proc [kibana] log [18:37:04.777] [info][plugins-service] Plugin "metricsEntities" is disabled.
│ proc [kibana] log [18:37:04.859] [info][server][Preboot][http] http server running at http://localhost:5620
│ proc [kibana] log [18:37:04.903] [warning][config][deprecation] Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format.
│ proc [kibana] log [18:37:04.903] [warning][config][deprecation] Configuring "xpack.fleet.enabled" is deprecated and will be removed in 8.0.0.
│ proc [kibana] log [18:37:04.904] [warning][config][deprecation] You no longer need to configure "xpack.fleet.agents.pollingRequestTimeout".
│ proc [kibana] log [18:37:04.904] [warning][config][deprecation] map.proxyElasticMapsServiceInMaps is deprecated and is no longer used
│ proc [kibana] log [18:37:04.905] [warning][config][deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
│ proc [kibana] log [18:37:04.905] [warning][config][deprecation] Setting "security.showInsecureClusterWarning" has been replaced by "xpack.security.showInsecureClusterWarning"
│ proc [kibana] log [18:37:04.906] [warning][config][deprecation] Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout.
│ proc [kibana] log [18:37:04.906] [warning][config][deprecation] Setting "xpack.siem.enabled" has been replaced by "xpack.securitySolution.enabled"
│ proc [kibana] log [18:37:05.033] [info][plugins-system][standard] Setting up [115] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,globalSearchBar,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
│ proc [kibana] log [18:37:05.050] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 5b2de169-2785-441b-ae8c-186a1936b17d
│ proc [kibana] log [18:37:05.154] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
│ proc [kibana] log [18:37:05.175] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
│ proc [kibana] log [18:37:05.198] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
│ proc [kibana] log [18:37:05.213] [info][encryptedSavedObjects][plugins] Hashed 'xpack.encryptedSavedObjects.encryptionKey' for this instance: nnkvE7kjGgidcjXzmLYBbIh4THhRWI1/7fUjAEaJWug=
│ proc [kibana] log [18:37:05.248] [info][plugins][ruleRegistry] Installing common resources shared between all indices
│ proc [kibana] log [18:37:05.708] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
│ proc [kibana] log [18:37:05.947] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
│ proc [kibana] log [18:37:05.947] [info][savedobjects-service] Starting saved objects migrations
│ proc [kibana] log [18:37:06.007] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 12ms.
│ proc [kibana] log [18:37:06.012] [info][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 36ms.
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_task_manager_7.17.27_001] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.17.27_001]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_7.17.27_001] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_7.17.27_001]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_7.17.27_001][0]]]).
│ proc [kibana] log [18:37:06.434] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 427ms.
│ proc [kibana] log [18:37:06.470] [info][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 458ms.
│ proc [kibana] log [18:37:06.559] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 125ms.
│ proc [kibana] log [18:37:06.560] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 565ms
│ proc [kibana] log [18:37:06.588] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 118ms.
│ proc [kibana] log [18:37:06.589] [info][savedobjects-service] [.kibana] Migration completed after 613ms
│ proc [kibana] log [18:37:06.596] [info][plugins-system][standard] Starting [115] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,globalSearchBar,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
│ proc [kibana] log [18:37:07.796] [info][monitoring][monitoring][plugins] config sourced from: production cluster
│ proc [kibana] log [18:37:08.997] [info][server][Kibana][http] http server running at http://localhost:5620
│ proc [kibana] log [18:37:09.068] [info][status] Kibana is now degraded
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.alerts-ilm-policy]
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.27_001/k_KdJ8keRgm_jWehlQhviQ] update_mapping [_doc]
│ proc [kibana] log [18:37:09.268] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.27_001/MyPkeO2kSXecDn-vttrwxQ] update_mapping [_doc]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-ecs-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-technical-mappings]
│ proc [kibana] log [18:37:09.698] [info][plugins][ruleRegistry] Installed common resources shared between all indices
│ proc [kibana] log [18:37:09.699] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
│ proc [kibana] log [18:37:09.700] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
│ proc [kibana] log [18:37:09.700] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
│ proc [kibana] log [18:37:09.700] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-custom-link]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana_security_session_index_template_1] for index patterns [.kibana_security_session_1]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.uptime.alerts-mappings]
│ proc [kibana] log [18:37:09.944] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.apm.alerts-mappings]
│ proc [kibana] log [18:37:09.983] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
│ proc [kibana] log [18:37:10.050] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-custom-link][0], [.apm-agent-configuration][0]]]).
│ proc [kibana] log [18:37:10.099] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.27_001/MyPkeO2kSXecDn-vttrwxQ] update_mapping [_doc]
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.27_001/MyPkeO2kSXecDn-vttrwxQ] update_mapping [_doc]
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
│ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.27-snapshot-template] for index patterns [.kibana-event-log-7.17.27-snapshot-*]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.27-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.27-snapshot-template], shards [1]/[1]
│ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.27-snapshot-000001]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.27-snapshot-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.27-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
│ proc [kibana] log [18:37:11.349] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
│ proc [kibana] log [18:37:11.718] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
│ proc [kibana] log [18:37:11.735] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
│ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
│ proc [kibana] log [18:37:12.305] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
│ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.27_001/MyPkeO2kSXecDn-vttrwxQ] update_mapping [_doc]
│ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.12.04-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
│ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.12.04-000001], backing indices [], and aliases []
│ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.12.04-000001][0]]]).
│ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.12.04-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
│ proc [kibana] log [18:37:16.452] [info][status] Kibana is now available (was degraded)
│ info Only running suites which are compatible with ES version 7.17.27
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup12' ]
│ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
│ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
│ info Only running suites which are compatible with ES version 7.17.27
│ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup12' ]
│ info Starting tests
│ warn debug logs are being captured, only error logs will be written to the console
│
└-: saved objects tagging API
└-> "before all" hook: beforeTestSuite.trigger in "saved objects tagging API"
└-: DELETE /api/saved_objects_tagging/tags/{id}
└-> "before all" hook: beforeTestSuite.trigger for "should delete the tag"
└-> should delete the tag
└-> "before each" hook: global before each for "should delete the tag"
└-> "before each" hook for "should delete the tag"
Trace: {
tag: {
id: 'tag-1',
name: 'tag-1',
description: 'Tag 1',
color: '#FF00FF'
}
}
at Context.<anonymous> (/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/x-pack/test/saved_object_tagging/api_integration/tagging_api/apis/delete.ts:32:15)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.apply (/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:78:16)
Trace: {}
at Context.<anonymous> (/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/x-pack/test/saved_object_tagging/api_integration/tagging_api/apis/delete.ts:37:15)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at Object.apply (/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1733335550727157004/elastic/kibana-pull-request/kibana/node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:78:16)
└- ✓ pass (669ms)
└-> "after each" hook for "should delete the tag"
└-> should remove references to the deleted tag
└-> "before each" hook: global before each for "should remove references to the deleted tag"
└-> "before each" hook for "should remove references to the deleted tag"
└- ✓ pass (737ms)
└-> "after each" hook for "should remove references to the deleted tag"
└-> "after all" hook: afterTestSuite.trigger for "should remove references to the deleted tag"
└-: POST /api/saved_objects_tagging/tags/create
└-> "before all" hook: beforeTestSuite.trigger for "should create the tag when validation succeed"
└-> should create the tag when validation succeed
└-> "before each" hook: global before each for "should create the tag when validation succeed"
└-> "before each" hook for "should create the tag when validation succeed"
└- ✓ pass (702ms)
└-> "after each" hook for "should create the tag when validation succeed"
└-> should return an error with details when validation failed
└-> "before each" hook: global before each for "should return an error with details when validation failed"
└-> "before each" hook for "should return an error with details when validation failed"
└- ✓ pass (35ms)
└-> "after each" hook for "should return an error with details when validation failed"
└-> "after all" hook: afterTestSuite.trigger for "should return an error with details when validation failed"
└-: POST /api/saved_objects_tagging/tags/{id}
└-> "before all" hook: beforeTestSuite.trigger for "should update the tag when validation succeed"
└-> should update the tag when validation succeed
└-> "before each" hook: global before each for "should update the tag when validation succeed"
└-> "before each" hook for "should update the tag when validation succeed"
└- ✓ pass (713ms)
└-> "after each" hook for "should update the tag when validation succeed"
└-> should return a 404 when trying to update a non existing tag
└-> "before each" hook: global before each for "should return a 404 when trying to update a non existing tag"
└-> "before each" hook for "should return a 404 when trying to update a non existing tag"
└- ✓ pass (42ms)
└-> "after each" hook for "should return a 404 when trying to update a non existing tag"
└-> should return an error with details when validation failed
└-> "before each" hook: global before each for "should return an error with details when validation failed"
└-> "before each" hook for "should return an error with details when validation failed"
└- ✓ pass (23ms)
└-> "after each" hook for "should return an error with details when validation failed"
└-> "after all" hook: afterTestSuite.trigger for "should return an error with details when validation failed"
└-: POST /api/saved_objects_tagging/assignments/update_by_tags
└-> "before all" hook: beforeTestSuite.trigger for "allows to update tag assignments"
└-> allows to update tag assignments
└-> "before each" hook: global before each for "allows to update tag assignments"
└-> "before each" hook for "allows to update tag assignments"
└- ✓ pass (785ms)
└-> "after each" hook for "allows to update tag assignments"
└-> returns an error when trying to assign to non-taggable types
└-> "before each" hook: global before each for "returns an error when trying to assign to non-taggable types"
└-> "before each" hook for "returns an error when trying to assign to non-taggable types"
└- ✓ pass (40ms)
└-> "after each" hook for "returns an error when trying to assign to non-taggable types"
└-> returns an error when both `assign` and `unassign` are unspecified
└-> "before each" hook: global before each for "returns an error when both `assign` and `unassign` are unspecified"
└-> "before each" hook for "returns an error when both `assign` and `unassign` are unspecified"
└- ✓ pass (28ms)
└-> "after each" hook for "returns an error when both `assign` and `unassign` are unspecified"
└-> "after all" hook: afterTestSuite.trigger for "returns an error when both `assign` and `unassign` are unspecified"
└-: saved_object_tagging usage collector data
└-> "before all" hook: beforeTestSuite.trigger for "collects the expected data"
└-> collects the expected data
└-> "before each" hook: global before each for "collects the expected data"
└-> "before each" hook for "collects the expected data"
│ proc [kibana] {"ecs":{"version":"1.12.0"},"@timestamp":"2024-12-04T18:37:32.571+00:00","message":"Elasticsearch deprecation: 299 Elasticsearch-7.17.27-SNAPSHOT-2dc764dee61bf435614bcb4d6825752187f10d99 \"this request accesses system indices: [.security-7, .tasks], but in a future major version, direct access to system indices will be prevented by default\"\nOrigin:kibana\nQuery:\n200 - 2.0B\nGET /*/_mapping?filter_path=*.mappings._meta.beat%2C*.mappings._meta.package.name%2C*.mappings._meta.managed_by%2C*.mappings.properties.ecs.properties.version.type%2C*.mappings.properties.data_stream.properties.type.value%2C*.mappings.properties.data_stream.properties.dataset.value","log":{"level":"DEBUG","logger":"elasticsearch.deprecation"},"process":{"pid":7279}}
└- ✓ pass (827ms)
└-> "after each" hook for "collects the expected data"
└-> "after all" hook: afterTestSuite.trigger for "collects the expected data"
└-> "after all" hook: afterTestSuite.trigger in "saved objects tagging API"
│
│11 passing (15.1s)
│
│ proc [kibana] log [18:37:33.340] [info][plugins-system][standard] Stopping all plugins.
│ proc [kibana] log [18:37:33.341] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
│ info [kibana] exited with null after 40.9 seconds
│ info [es] stopping node ftr
│ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
│ info [o.e.n.Node] [ftr] stopping ...
│ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
│ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
│ info [o.e.n.Node] [ftr] stopped
│ info [o.e.n.Node] [ftr] closing ...
│ info [o.e.n.Node] [ftr] closed
│ info [es] stopped
│ info [es] no debug files found, assuming es did not write any
│ info [es] cleanup complete