Make peak memory consumption easier to understand

Authored by mwolff on May 23 2017, 1:43 PM.

Description

Make peak memory consumption easier to understand

This changes the meaning of the peak metric, but frequent feedback
was that the value we used to show was confusing.

Previously, we tracked peak memory consumption individually per
backtrace. Only at the end did we aggregate the total, at which
point the temporal resolution was lost. The new peak example shows
how this goes wrong: it used to show a "peak" of 275B in
allocate_something, while the expected "peak" is actually only
125B.
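
To make the failure mode concrete, here is a minimal sketch of
such a timeline. The exact event order of the peak example is an
assumption here; only the 275B and 125B figures come from this
commit:

```cpp
// Hypothetical timeline (assumed, not the actual test case) that
// reproduces the 275B-vs-125B mismatch of the old scheme.
#include <algorithm>
#include <cstdio>

int main() {
    long long a = 0, aPeak = 0; // backtrace 1 in allocate_something
    long long b = 0, bPeak = 0; // backtrace 2 in allocate_something
    long long other = 0;        // an unrelated allocation site

    a += 150; aPeak = std::max(aPeak, a); // t1: total live = 150B
    a -= 150;                             // t2: total live = 0B
    b += 125; bPeak = std::max(bPeak, b); // t3: total live = 125B
    other += 200;                         // t4: total live = 325B,
                                          //     the global peak

    // Old scheme: sum the per-backtrace peaks at the end.
    std::printf("old 'peak': %lldB\n", aPeak + bPeak); // 275B
    // But at the actual global peak (t4), allocate_something
    // only held B's allocation:
    std::printf("contribution at peak: %lldB\n", b);   // 125B
    return 0;
}
```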

The new code repurposes the peak metric to mean "contribution to
peak memory consumption". This requires us to update the peak value
for all allocations whenever we hit a new peak, which is quite
costly and greatly slows down overall processing. One of my test
files took ~1.2s in total to be processed before this patch. Now,
it takes ~2.2s in total - so processing is about 83% slower with
this patch.
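
A minimal sketch of the repurposed metric, assuming a simple map
from a backtrace id to its allocation info (heaptrack's actual
accounting structures differ): whenever the running total reaches a
new peak, every backtrace's current consumption is snapshotted as
its contribution to that peak.

```cpp
#include <unordered_map>

// One entry per backtrace; 'peak' now means "contribution to the
// global peak", not this backtrace's individual high-water mark.
struct AllocationInfo {
    long long current = 0; // bytes currently live for this backtrace
    long long peak = 0;    // bytes live at the moment of the global peak
};

struct Accumulator {
    std::unordered_map<int, AllocationInfo> traces; // keyed by backtrace id
    long long totalCurrent = 0;
    long long totalPeak = 0;

    void handleAllocation(int traceId, long long bytes) {
        traces[traceId].current += bytes;
        totalCurrent += bytes;
        if (totalCurrent > totalPeak) {
            totalPeak = totalCurrent;
            // New global peak: snapshot every backtrace's current
            // consumption as its contribution to the peak. This
            // rescan of all traces is the costly part noted above.
            for (auto& entry : traces)
                entry.second.peak = entry.second.current;
        }
    }

    void handleDeallocation(int traceId, long long bytes) {
        traces[traceId].current -= bytes;
        totalCurrent -= bytes;
    }
};
```

Applied to the timeline sketched earlier, the snapshot taken at the
final peak records 125B as allocate_something's contribution; the
rescan of all traces on every new peak is where the slowdown
described above comes from.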

Nevertheless, I think it's worth it as it makes the cost metric
easier to understand and thus hopefully more useful for users of
heaptrack.

Details

Committed
mwolff, May 23 2017, 2:11 PM
Parents
R45:4bf10d84391e: Always use KFormat::MetricBinaryDialect