Check and actually measure the performance of different implementations of the Mercator projection
- Mercator
- Developed into a series
- Other approximations.
So I benchmarked the Mercator projection on my desktop PC. AbstractProjection::geoCoordinates() uses MarbleMath::gd() while AbstractProjection::screenCoordinates() uses its inverse MarbleMath::gdInv().
Since screenCoordinates() accounts for the vast majority of calls, we focus on gdInv():
100 million random gdInv() calls take about 3.5 seconds.
Currently the MacLaurin power series has 17 coefficients. Reducing that to 8 coefficients brings this down to about 1.7 seconds.
However, reducing the number of coefficients leads to significantly less accurate results: for Hamburg this change would
mean a roughly 50 m offset from the real position (and even worse north of Hamburg).
Even with 13 coefficients we'd still see a similar offset for Oslo.
We are currently using the MacLaurin series by default, which gives us a significant performance boost over the analytical approach.
Apparently on a desktop device we can make almost 30 million gdInv() calls per second, so a single call requires about 35 nanosecs.
Hence on an embedded device, assuming it is roughly 3-10 times slower, a single call would take about 100-350 nanosecs.
Assuming about 1500 geometries with 50 nodes on average (75,000 nodes per frame), we spend about 2.5 msecs per frame on projection calculation on the desktop, and up to 7.5-26 msecs on an embedded device.
For 10 fps that amounts to roughly 25 msecs per second on the desktop, and a substantial share of the 100 msec frame budget on embedded devices.
A solution optimized over the current default would not necessarily need to resemble Mercator accurately.
However, even a rough Mercator approximation for gdInv() would need a fully matching implementation of gd(), so that
approxGdInv( approxGd( x ) ) == x (this is not the case if we just reduce the number of coefficients in the MacLaurin series).
One possible solution might be to use a static or dynamic Lookup Table.