Adds debug configuration for stylus latency tracking

Authored by poke1024 on Oct 8 2017, 2:35 PM.



Adds a hidden switch, trackTabletEventLatency, that enables console output of the maximum latency of tablet move events within a time window (currently 2.5 s). The latency measured is the time from event generation in the OS to the beginning of the actual processing in Krita.

In my tests this gives some solid numbers for quantifying the jerkiness (uneven spacing of lines) seen on some Mac installations when drawing, and it would be useful for measuring the benefit of optimizations elsewhere in the code. For example, reducing the maximum frame rate to 20 fps (D8193) lowers the tablet move event latency on my test machine from about 20 ms to about 5 ms.
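The core idea can be sketched roughly as follows (a minimal, hypothetical illustration, not the actual Krita class: latency is "now" minus the OS-provided event timestamp, and the tracker reports the maximum over a fixed window, 2.5 s in the patch):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch of the latency-tracking idea. Timestamps are assumed
// to be milliseconds on the same clock as the OS event timestamps.
class LatencyTrackerSketch {
public:
    explicit LatencyTrackerSketch(int64_t windowMs = 2500)
        : m_windowMs(windowMs) {}

    // Feed one event; returns the window's maximum latency when the
    // window closes, or -1 while the window is still open.
    int64_t push(int64_t eventTimestampMs, int64_t nowMs) {
        const int64_t latency = nowMs - eventTimestampMs;
        if (m_windowStart < 0) m_windowStart = nowMs;
        m_max = std::max(m_max, latency);
        if (nowMs - m_windowStart >= m_windowMs) {
            const int64_t result = m_max;
            m_windowStart = nowMs;  // start a fresh window
            m_max = 0;
            return result;
        }
        return -1;
    }

private:
    int64_t m_windowMs;
    int64_t m_windowStart = -1;
    int64_t m_max = 0;
};
```

The real implementation would print the returned maximum to the console instead of returning it.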

Diff Detail

R37 Krita
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.
poke1024 created this revision.Oct 8 2017, 2:35 PM
rempt added a subscriber: rempt.Oct 9 2017, 9:23 AM

Um, I'm getting some weird output:

tablet latency: 234822501 ms

That can't be right, can it?

Hi, @poke1024!

Here is the patch that fixes the time source on Linux:

diff --git a/libs/ui/input/wintab/qxcbconnection.cpp b/libs/ui/input/wintab/qxcbconnection.cpp
index d597e62..395d0ad 100644
--- a/libs/ui/input/wintab/qxcbconnection.cpp
+++ b/libs/ui/input/wintab/qxcbconnection.cpp
@@ -582,7 +582,7 @@ void QWindowSystemInterface::handleTabletEvent(QWindow *w, const QPointF &local,
                                                qreal tangentialPressure, qreal rotation, int z, qint64 uid,
                                                Qt::KeyboardModifiers modifiers)
 {
-    qint64 timestamp = g_eventTimer.elapsed();
+    qint64 timestamp = g_eventTimer.msecsSinceReference() + g_eventTimer.elapsed();

     QWindowSystemInterfacePrivate::TabletEvent *e =
         new QWindowSystemInterfacePrivate::TabletEvent(w, timestamp, local, global, device, pointerType, buttons, pressure,
@@ -682,7 +682,7 @@ void processTabletEvent(QWindowSystemInterfacePrivate::TabletEvent *e)
 void QWindowSystemInterface::handleTabletEnterProximityEvent(int device, int pointerType, qint64 uid)
 {
-    qint64 timestamp = g_eventTimer.elapsed();
+    qint64 timestamp = g_eventTimer.msecsSinceReference() + g_eventTimer.elapsed();

     QTabletEvent ev(QEvent::TabletEnterProximity, QPointF(), QPointF(),
                     device, pointerType, 0, 0, 0,
@@ -694,7 +694,7 @@ void QWindowSystemInterface::handleTabletLeaveProximityEvent(int device, int poi
 void QWindowSystemInterface::handleTabletLeaveProximityEvent(int device, int pointerType, qint64 uid)
 {
-    qint64 timestamp = g_eventTimer.elapsed();
+    qint64 timestamp = g_eventTimer.msecsSinceReference() + g_eventTimer.elapsed();

     QTabletEvent ev(QEvent::TabletLeaveProximity, QPointF(), QPointF(),
                     device, pointerType, 0, 0, 0,

Please also move the LatencyTracker into a separate file and add a bit of documentation, so we could reuse it in other places. Otherwise, the patch looks fine.


  1. Do you have an idea why it shows somewhat weird numbers when just starting a stroke?
tablet latency:  42  ms
tablet latency:  59  ms
tablet latency:  59  ms
tablet latency:  59  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms
tablet latency:  1  ms

Such a delay happens only at the beginning of the stroke. After the start itself, the delay is negligible.

  2. It might be a good idea just to use boost::accumulators::rolling_mean/rolling_variance for measuring this property, but that doesn't show single-event delays. Is it possible to combine all three metrics?
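Combining all three metrics could look roughly like this (an illustrative, hand-rolled window rather than Boost's accumulators; Boost's rolling_mean/rolling_variance cover the mean and variance, while the per-event maximum would be tracked alongside):

```cpp
#include <algorithm>
#include <cmath>
#include <deque>

// Illustrative only: a fixed-size rolling window reporting max, mean and
// variance together. Not the actual patch code.
class RollingStats {
public:
    explicit RollingStats(size_t windowSize) : m_size(windowSize) {}

    void push(double v) {
        m_values.push_back(v);
        if (m_values.size() > m_size) m_values.pop_front();
    }
    double max() const {
        return *std::max_element(m_values.begin(), m_values.end());
    }
    double mean() const {
        double s = 0;
        for (double v : m_values) s += v;
        return s / m_values.size();
    }
    double variance() const {
        const double m = mean();
        double s = 0;
        for (double v : m_values) s += (v - m) * (v - m);
        return s / m_values.size();  // population variance over the window
    }

private:
    size_t m_size;
    std::deque<double> m_values;
};
```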

The value stays at around 220 ms (Surface Pro) and 440 ms (Wacom) with WinTab.

With Windows pointer input (surface pro) it goes down like this:

[10236] tablet latency:  298  ms
[10236] tablet latency:  390  ms
[10236] tablet latency:  390  ms
[10236] tablet latency:  103  ms
[10236] tablet latency:  -29  ms
[10236] tablet latency:  -151  ms
[10236] tablet latency:  -286  ms
[10236] tablet latency:  -405  ms
[10236] tablet latency:  -537  ms
[10236] tablet latency:  -664  ms
[10236] tablet latency:  -785  ms
[10236] tablet latency:  -918  ms
[10236] tablet latency:  -1049  ms
[10236] tablet latency:  -1177  ms
[10236] tablet latency:  -1298  ms
[10236] tablet latency:  -1424  ms
[10236] tablet latency:  -1562  ms
[10236] tablet latency:  -1682  ms
[10236] tablet latency:  -1813  ms
[10236] tablet latency:  -1940  ms
[10236] tablet latency:  -2064  ms
[10236] tablet latency:  -2198  ms
[10236] tablet latency:  -2325  ms

I used the OS-provided timestamp when implementing pointer input messages, so this might be the reason... (Though I think the WinTab code uses the timestamp from the event packets too...)


nullptr please


Please consider using dbgTablet instead


Please use a smart pointer instead of a raw pointer here.

dkazakov added inline comments.Oct 10 2017, 7:09 AM

I don't like nullptrs, btw :)

poke1024 updated this revision to Diff 20864.Oct 16 2017, 5:51 PM

Still a work in progress; moved the classes to a separate file, now uses a shared pointer, adds a rolling mean.

Boost's accumulators only support rolling mean and sum, not rolling variance (their focus seems to be on non-rolling statistics, as far as my admittedly shallow reading goes). Still, I thought it was not worth pulling in their rolling mean implementation, as it's trivial; we could add rolling variance on our own.

By the way, on OS X the max numbers seem to vary between 10 and 60 ms depending on the load of the system (60 ms being when you play one YouTube video in the background on a 2017 MacBook Pro).

poke1024 marked 2 inline comments as done.Oct 16 2017, 5:56 PM

Dmitry's patch for WinTab Linux timestamps is now included. @alvinhochun I'm not sure what Windows uses as the native event timestamp; I should probably disable the code on Windows as long as I don't know, in order to not print out wrong numbers. Qt's docs are not really helpful (they say the input event timestamp "will normally be in milliseconds since some arbitrary point in time, such as the time when the system was started").
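Since Qt only guarantees milliseconds "since some arbitrary point in time", one way to make the numbers usable regardless of the epoch is to estimate the constant offset between the event clock and the local clock from the first event, and subtract it afterwards. This is a hypothetical sketch, not the patch's approach, and it assumes the first event's own latency is negligible:

```cpp
#include <cstdint>

// Hypothetical epoch calibration: treat the first event as having ~zero
// latency, use it to estimate the constant clock offset, and correct
// later measurements by that offset. Drift between the clocks (as seen
// in the WinInk log above) would still show up over time.
class EpochCalibrator {
public:
    int64_t latencyMs(int64_t eventTimestampMs, int64_t nowMs) {
        if (!m_calibrated) {
            m_offset = nowMs - eventTimestampMs;
            m_calibrated = true;
        }
        return (nowMs - eventTimestampMs) - m_offset;
    }

private:
    bool m_calibrated = false;
    int64_t m_offset = 0;
};
```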

Correction: boost does have rolling_variance after all, I just didn't find it in the docs earlier; blast :(

poke1024 updated this revision to Diff 20875.Oct 16 2017, 7:55 PM

Now uses boost to calculate rolling mean and variance.

Boost has a two-year-old bug in their rolling mean implementation. Seriously, boost?

poke1024 updated this revision to Diff 20886.Oct 17 2017, 4:12 AM

more reusability

dkazakov accepted this revision.Oct 23 2017, 12:43 PM

The patch looks perfectly fine. On Linux the latency's mean is 0. I've also tested it on Windows, and the results are the following:

  1. When using WinTab, the timestamp seems to be relative to the stroke start, and it drifts to higher numbers.
  2. When using WinInk, there is a constant mean (about 200 ms) and zero variance. It looks like either the events are coming with a 200 ms delay, or there is a small offset between the time sources.

Both variants show some weird effects at the stroke start: the variance jumps to 500-700 ms and then gradually goes down. I don't know what causes it.

Otherwise the patch is fine and, I guess, you can push it as it is. Anyway, it is just a debugging function. When someone needs it on Windows, he'll just fix it :)

This revision is now accepted and ready to land.Oct 23 2017, 12:43 PM
This revision was automatically updated to reflect the committed changes.