
[chore] [deltatocumulative]: linear histograms #36486

Open · wants to merge 3 commits into main
Conversation

@sh0rez (Member) commented Nov 21, 2024

Description

Finishes work started in #35048

That PR introduced the less complex processor architecture only partially, using it for Sums alone.
Back then I was not sure of the best way to apply it to multiple datatypes, as generics seemed to introduce a lot of complexity regardless of usage.

I have since done a lot of performance analysis, and due to the way Go implements generics (see gcshapes), we do not really gain anything at runtime from using them, given that method calls are still dynamic.

This implementation uses regular Go interfaces and a good old type switch in the hot path (ConsumeMetrics), which in my opinion lowers mental complexity quite a lot.
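The interface-plus-type-switch idea can be sketched with hypothetical, simplified types (the real processor switches on `pmetric` datapoint types such as `pmetric.NumberDataPoint`; `Number`, `Histogram`, and `aggregate` below are illustrative names only):

```go
package main

import "fmt"

// Hypothetical stand-ins for pmetric datapoint types.
type Number struct{ Value float64 }
type Histogram struct{ Counts []uint64 }

// aggregate merges an incoming delta datapoint into accumulated state
// using a plain type switch instead of generics: one dynamic dispatch
// per datapoint, no gcshape indirection.
func aggregate(state, dp any) error {
	switch dp := dp.(type) {
	case Number:
		s := state.(*Number)
		s.Value += dp.Value
	case Histogram:
		s := state.(*Histogram)
		for i, c := range dp.Counts {
			s.Counts[i] += c
		}
	default:
		return fmt.Errorf("unsupported datapoint type %T", dp)
	}
	return nil
}

func main() {
	state := &Number{Value: 1}
	if err := aggregate(state, Number{Value: 2}); err != nil {
		panic(err)
	}
	fmt.Println(state.Value)
}
```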

The value of the new architecture is backed up by the following benchmark:

```
goos: linux
goarch: arm64
pkg: github.com/open-telemetry/opentelemetry-collector-contrib/processor/deltatocumulativeprocessor
                 │ sums.nested │             sums.linear             │
                 │   sec/op    │   sec/op     vs base                │
Processor/sums-8   56.35µ ± 1%   39.99µ ± 1%  -29.04% (p=0.000 n=10)

                 │  sums.nested  │             sums.linear              │
                 │     B/op      │     B/op      vs base                │
Processor/sums-8   11.520Ki ± 0%   3.683Ki ± 0%  -68.03% (p=0.000 n=10)

                 │ sums.nested │            sums.linear             │
                 │  allocs/op  │ allocs/op   vs base                │
Processor/sums-8    365.0 ± 0%   260.0 ± 0%  -28.77% (p=0.000 n=10)
```

Testing

This is a refactor, existing tests pass unaltered.

Documentation

not needed

Expands the linear architecture to cover exponential and fixed-width histograms.
Comment on lines +107 to +117
```go
switch dp := any(dp).(type) {
case pmetric.NumberDataPoint:
	state := any(state).(pmetric.NumberDataPoint)
	data.Number{NumberDataPoint: state}.Add(data.Number{NumberDataPoint: dp})
case pmetric.HistogramDataPoint:
	state := any(state).(pmetric.HistogramDataPoint)
	data.Histogram{HistogramDataPoint: state}.Add(data.Histogram{HistogramDataPoint: dp})
case pmetric.ExponentialHistogramDataPoint:
	state := any(state).(pmetric.ExponentialHistogramDataPoint)
	data.ExpHistogram{DataPoint: state}.Add(data.ExpHistogram{DataPoint: dp})
}
```
@sh0rez (Member, Author):
This refactor effectively eliminates the need for the data package, as we no longer rely on type characteristics.

I'll refactor datapoint addition in a future PR, making this part more clear, maybe like this:

```go
var add data.Aggregator = new(data.Add)

switch into := any(dp).(type) {
case pmetric.NumberDataPoint:
	add.Numbers(into, dp)
case pmetric.HistogramDataPoint:
	add.Histograms(into, dp)
}
```

@ArthurSens (Member) left a comment:

I did a first pass only through the benchmark. I totally understand my comments are nitpicks, I'm just sharing my personal preference when it comes to code style.

From our conversations I understand you prefer a more "declarative" style, but to me it makes the code much harder to read, since I expect things to run in the order they are written. Scrolling up and down several times until I finally understand what the code does makes it less readable, in my opinion.

Again, not a blocker!

Comment on lines +67 to +73
```go
var (
	_ Any = Sum{}
	_ Any = Gauge{}
	_ Any = ExpHistogram{}
	_ Any = Histogram{}
	_ Any = Summary{}
)
```
Just curious, why do we need to do this?
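For context, this is Go's standard compile-time interface-satisfaction check: assigning a value to the blank identifier at a declared interface type makes the compiler verify the implementation without any runtime cost. A minimal self-contained sketch (`Any`, `Sum`, and `Ident` here are illustrative names, not the PR's actual definitions):

```go
package main

import "fmt"

// Any stands in for the interface being asserted against in the diff.
type Any interface{ Ident() string }

type Sum struct{}

func (Sum) Ident() string { return "sum" }

// If Sum ever stops implementing Any, this line fails to compile,
// instead of surfacing later as a missed case in a type switch.
var _ Any = Sum{}

func main() {
	var s Sum
	fmt.Println(s.Ident())
}
```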

Comment on lines +160 to +171
```go
func next[
	T interface{ DataPoints() Ps },
	Ps interface {
		At(int) P
		Len() int
	},
	P interface {
		Timestamp() pcommon.Timestamp
		SetStartTimestamp(pcommon.Timestamp)
		SetTimestamp(pcommon.Timestamp)
	},
](sel func(pmetric.Metric) T) func(m pmetric.Metric) {
```
I do not understand any of this 😭. What are we trying to accomplish here? What is next supposed to do in the benchmark?

I don't mind code duplication if it makes the code more readable 😬

Comment on lines +153 to +155
```go
if err := sdktest.Test(tel(b.N), st.tel.reader); err != nil {
	b.Fatal(err)
}
```
Is this a benchmark or a test? I'm unsure if I'm missing something, but it seems you're trying to do both...?

Is it an option to split them to make the code easier to understand?

Trying to accomplish everything at once also makes the code more fragile, since a single future mistake will break everything at once.

```go
ts := pcommon.NewTimestampFromTime(now.Add(time.Minute))

cases := []Case{{
	name: "sums",
```

Is there a reason to split sums, histograms, and exponential histograms into separate benchmarks? Are those metric types expected to be split by separate deltatocumulative processors in real-world scenarios?

Member:
Your PR description only shows results for sums, so I'm not sure if this was an intentional split or you just forgot to benchmark the rest

Comment on lines +39 to +65
```go
run := func(b *testing.B, proc consumer.Metrics, cs Case) {
	md := pmetric.NewMetrics()
	ms := md.ResourceMetrics().AppendEmpty().ScopeMetrics().AppendEmpty().Metrics()
	for i := range metrics {
		m := ms.AppendEmpty()
		m.SetName(strconv.Itoa(i))
		cs.fill(m)
	}

	b.ReportAllocs()
	b.ResetTimer()
	b.StopTimer()

	ctx := context.Background()
	for range b.N {
		for i := range ms.Len() {
			cs.next(ms.At(i))
		}
		req := pmetric.NewMetrics()
		md.CopyTo(req)

		b.StartTimer()
		err := proc.ConsumeMetrics(ctx, req)
		b.StopTimer()
		require.NoError(b, err)
	}
}
```
Is there any special reason to transform this into a function? We're not reusing the code anywhere, so why not just put this inside the b.Run loop?

Comment on lines +33 to +37
```go
type Case struct {
	name string
	fill func(m pmetric.Metric)
	next func(m pmetric.Metric)
}
```
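For reference, the table-driven benchmark pattern being discussed can be sketched in miniature. The `fill`/`next` functions below operate on a plain `int` instead of `pmetric.Metric`, purely for illustration; `testing.Benchmark` lets the sketch run outside a `_test.go` file:

```go
package main

import (
	"fmt"
	"testing"
)

// Case mirrors the struct in the diff, with hypothetical fill/next funcs.
type Case struct {
	name string
	fill func(m *int)
	next func(m *int)
}

func main() {
	cases := []Case{{
		name: "sums",
		fill: func(m *int) { *m = 1 }, // build the initial payload
		next: func(m *int) { *m++ },   // advance it between iterations
	}}
	for _, cs := range cases {
		r := testing.Benchmark(func(b *testing.B) {
			var m int
			cs.fill(&m)
			for i := 0; i < b.N; i++ {
				cs.next(&m)
			}
		})
		fmt.Println(cs.name, r.N > 0)
	}
}
```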
If we remove the abstraction of run, we could also move this closer to where it's used.
