Handle output logical size changes
Needs Revision · Public

Authored by romangg on Dec 30 2019, 11:14 PM.

Details

Reviewers
davidedmundson
Group Reviewers
KWin
Summary

The logical size of an output can be explicitly set through the output
management protocol. In that case, calculate a view geometry that is centered
on the output.
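
A minimal sketch of the centering idea, with hypothetical helper names (the patch itself does this inside the backend): the logical size is scaled to fit the mode size while preserving its aspect ratio, then centered, leaving letterbox bars on the remainder.

```cpp
// Illustrative only, not the actual patch code: fit the logical size
// into the output's mode size, preserving aspect ratio, and center it.
#include <QPoint>
#include <QRect>
#include <QSize>
#include <algorithm>

QRect centeredViewGeometry(const QSize &modeSize, const QSize &logicalSize)
{
    const qreal factor = std::min(
        qreal(modeSize.width()) / logicalSize.width(),
        qreal(modeSize.height()) / logicalSize.height());

    const QSize view(qRound(logicalSize.width() * factor),
                     qRound(logicalSize.height() * factor));

    // Center the view on the output; the remainder stays unused.
    return QRect(QPoint((modeSize.width() - view.width()) / 2,
                        (modeSize.height() - view.height()) / 2),
                 view);
}
```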

Test Plan

Tested with two outputs having different aspect ratios.

Diff Detail

Repository
R108 KWin
Branch
logical-size
Lint
Lint OK
Unit
No Unit Test Coverage
Build Status
Buildable 20430
Build 20448: arc lint + arc unit
romangg created this revision. · Dec 30 2019, 11:14 PM
Restricted Application added a project: KWin. · Dec 30 2019, 11:14 PM
Restricted Application added a subscriber: kwin.
romangg requested review of this revision. · Dec 30 2019, 11:14 PM
davidedmundson requested changes to this revision. · Jan 1 2020, 4:25 PM
davidedmundson added a subscriber: davidedmundson.

Good start; the general approach is OK, but it needs a little tweak.


I don't like OutputDeviceInterface::setLogicalSize.

Currently we have a pixel size and a scale, which together give us the logical size.
By adding an explicit setLogicalSize we get a three-way binding loop that is very confusing.

Having an explicitly set logical size alongside the scale would right now break the glViewport, and it is semantically confusing what the compositor view size actually is.


I think all of this can be remedied if we change the output device method to be
OutputDevice::setVirtualMode(QSize),

which acts as a clip rect for where we render the plane, as you have here; geometry() can then use that instead of pixelSize.

Then scale, positioning, etc. will all just work intuitively without extra semantics, and egl_gbm_backend.cpp wouldn't need changing.
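
Roughly, the suggestion amounts to something like the following sketch; the member names are hypothetical and this is not the actual KWayland implementation:

```cpp
// Illustrative only: an explicitly set virtual mode simply replaces the
// pixel size in the logical-geometry calculation, so scale and position
// keep their usual meaning.
QRect OutputDeviceInterface::geometry() const
{
    const QSize pixels = m_virtualMode.isValid() ? m_virtualMode  // clip rect, if set
                                                 : pixelSize();   // native mode size
    return QRect(globalPosition(), pixels / scale());
}
```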

This revision now requires changes to proceed. · Jan 1 2020, 4:25 PM
romangg added a comment. (Edited) · Jan 1 2020, 6:46 PM

> Good start; the general approach is OK, but it needs a little tweak.
>
> I don't like OutputDeviceInterface::setLogicalSize.
>
> Currently we have a pixel size and a scale, which together give us the logical size.
> By adding an explicit setLogicalSize we get a three-way binding loop that is very confusing.
>
> Having an explicitly set logical size alongside the scale would right now break the glViewport, and it is semantically confusing what the compositor view size actually is.

What do you mean by "compositor view size"? The size of the view geometry introduced here?

If the logical size is set explicitly, the scale and mode size become irrelevant to the area the output covers in compositor space. I don't find that too confusing per se. But maybe the name should be changed to setLogicalSizeOverride or something like that?


> I think all of this can be remedied if we change the output device method to be
> OutputDevice::setVirtualMode(QSize),
>
> which acts as a clip rect for where we render the plane, as you have here; geometry() can then use that instead of pixelSize.
>
> Then scale, positioning, etc. will all just work intuitively without extra semantics, and egl_gbm_backend.cpp wouldn't need changing.

Interesting approach. I assume you would then also like to change the protocol request from a logical_size setter to a virtual_mode one, right?

When using this to replicate one output (the source) onto another (the target), though, I believe the calculation becomes quite involved. The client needs to factor in:

  • Current logical size of source output
  • Current mode of target output
  • Current scale of target output
  • Current transform of target output

For example, let's assume we have two outputs:

  1. Ultrabook 13'' source output: 1920x1200, scale 1.2, rotation landscape
  2. TV 50'' target output: 4K, scale 1, rotation portrait

For the setVirtualMode argument we need to find a value such that what is displayed on the source output is also displayed on the target output, while respecting the target's rotation and scale. But there is another problem: the scale of the target depends on the source's logical size and the target's resolution, and must then be changed as well.

The logical size of the source is 1600x1000. The current mode of the target is 3840x2160, but with the rotation the logical size of the source actually needs to be compared against 2160x3840. To fit, the virtual mode would be 2160x1350 (2160 / 1600 × 1000 = 1350). The scaling factor is 1.35. To have sharp images we need an integer scale factor of 2 on one of the outputs. Output (1) has this, so we do not need to set it on (2).

Now user changes the scale factor of output (1) to 1.

The logical size of the source is now 1920x1200. The virtual mode would still be 2160x1350 (2160 / 1920 × 1200 = 1350). The scaling factor is 1.125. For sharp images an integer scale factor of 2 is still needed, so either output (1) or (2) needs to be set to it. Since a change of scale on (1) would alter the overall depiction, we need to change it on (2). This would need to be done either via a scale override independent of what is set in OutputDevice, or we send this new scale factor via the protocol to KScreen. Should it then ignore this scale factor change?

Now user changes rotation of output (2) to landscape.

Now the client needs to fit the logical size 1920x1200 into 3840x2160. The virtual mode is now 3456x2160 (2160 / 1200 × 1920 = 3456). The scaling factor is 1.8. As before, the integer scale factor of output (2) needs to be set to 2.
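
To make the fitting arithmetic above concrete, here is a small sketch; the helper name and signature are made up for illustration and are not part of the patch:

```cpp
// Illustrative only: fit the source's logical size into the target's
// mode, preserving aspect ratio and honoring a 90°/270° target rotation.
#include <QSize>
#include <algorithm>

QSize virtualModeFor(const QSize &sourceLogical, const QSize &targetMode,
                     bool targetRotated90)
{
    // A rotated target effectively swaps its mode dimensions.
    const QSize effective = targetRotated90 ? targetMode.transposed()
                                            : targetMode;

    const qreal factor = std::min(
        qreal(effective.width()) / sourceLogical.width(),
        qreal(effective.height()) / sourceLogical.height());

    return QSize(qRound(sourceLogical.width() * factor),
                 qRound(sourceLogical.height() * factor));
}

// The three steps above:
// virtualModeFor({1600, 1000}, {3840, 2160}, true)  -> 2160x1350
// virtualModeFor({1920, 1200}, {3840, 2160}, true)  -> 2160x1350
// virtualModeFor({1920, 1200}, {3840, 2160}, false) -> 3456x2160
```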


Let's compare with setting the logical size:

Client sends logical size of 1600x1000 for output (2).

Now user changes the scale factor of output (1) to 1.

Client sends logical size of 1920x1200 for output (2).

Now user changes rotation of output (2) to landscape.

Client sends logical size of 1200x1920 for output (2).
The client does not need to do anything additional, because the logical size override of (2) depends only on the current logical size of (1).
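
As a sketch, the client-side computation then collapses to almost nothing. The helper is hypothetical, and the transpose convention is an assumption read off the three steps above rather than taken from the patch:

```cpp
// Illustrative only: the override follows the source's current logical
// size, transposed when the target's rotation calls for it (convention
// inferred from the steps above, not from the patch).
QSize logicalSizeOverrideFor(const QSize &sourceLogical, bool transpose)
{
    return transpose ? sourceLogical.transposed() : sourceLogical;
}

// The three steps above:
// logicalSizeOverrideFor({1600, 1000}, false) -> 1600x1000
// logicalSizeOverrideFor({1920, 1200}, false) -> 1920x1200
// logicalSizeOverrideFor({1920, 1200}, true)  -> 1200x1920
```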


In the output replication use case this API is simpler, because semantically it makes more sense to directly set which area should be replicated rather than what the end result on the output hardware must look like for that to happen. On a conceptual level, Weston and Mutter do something similar by differentiating between two separate structs: outputs and heads.

Note that the steps I outlined above are missing from the current patches (setting the wl_output scale factor accordingly when a logical size is set, and updating the logical size on replicas when the source output's resolution and/or scale changes).

davidedmundson added a comment.

Forget two outputs; this spec falls apart in simple scenarios:

  • I have a 1920x1080 monitor, and I want it cropped to 1000x1000 physical pixels that represent 1000x1000 logical pixels.
  • I have a 1920x1080 monitor, and I want it cropped to 1000x1000 physical pixels that represent 500x500 logical pixels.

We can't do either.

The main thing this spec is trying to add (as opposed to just using scale) is the ability to crop, yet the spec only supports the one case where you crop in one direction and scale to fit, even though the code supports something more versatile.

plugins/platforms/drm/egl_gbm_backend.cpp:366

By passing just a logical size, the scale value is effectively broken, which is a problem as we rely on it here.

romangg added a comment.

> Forget two outputs; this spec falls apart in simple scenarios:
>
> • I have a 1920x1080 monitor, and I want it cropped to 1000x1000 physical pixels that represent 1000x1000 logical pixels.
> • I have a 1920x1080 monitor, and I want it cropped to 1000x1000 physical pixels that represent 500x500 logical pixels.
>
> We can't do either.

And I don't want to do either.

> The main thing this spec is trying to add (as opposed to just using scale) is the ability to crop, yet the spec only supports the one case where you crop in one direction and scale to fit, even though the code supports something more versatile.

What is theoretically possible is irrelevant for what I want to do with the spec in the next release. I don't need a generic spec that supports all kinds of output crops, but a very limited one to replicate one output onto another, with the picture stretched and the aspect ratio preserved. I assume we won't need anything else in the future. If I'm wrong and we do, we can replace the protocol with a v2.

Either explain better why it is necessary to be more versatile and what exactly you think needs to be done for that (replacing the setLogicalSize function with setVirtualMode? having both?), or live with the less versatile approach in the meantime.

plugins/platforms/drm/egl_gbm_backend.cpp:366

Yes, I forgot to replace the two usages of scale here.