Without generating mipmaps, the output of images down-scaled via glViewport is blurry when the scale factor is not an integer.
Details
- Reviewers: zzag
- Group Reviewers: KWin
- Maniphest Tasks: T10481: Improve scaled buffer rendering
Tested on an Intel setup where the image looks fine afterwards. On an AMD setup I get a crash in Mesa (experimental Mesa ACO branch).
Diff Detail
- Repository: R108 KWin
- Branch: mipmapScaling
- Lint: Lint OK
- Unit: No Unit Test Coverage
- Build Status: Buildable 15998, Build 16016: arc lint + arc unit
Thanks, I was under the impression we needed to enable smooth filtering, but to me the output always looked the same.
plugins/scenes/opengl/scene_opengl.cpp
Line 1383: That's a bit lazy. Always using GL_LINEAR is fine AFAIK, but always generating mipmaps has a cost. We should be able to do some sort of check like if (windowPixmap->surface()->scale() != output->scale()). We also probably only want to generate mipmaps if the contents have changed; GLTexture has a dirty() flag which signifies this.
One needs to set a mipmapping minification filter, like GL_LINEAR_MIPMAP_LINEAR, in order to actually use mipmaps.
However, do we actually have to worry about noise level? I don't think so. GL_LINEAR should be more than enough.
plugins/scenes/opengl/scene_opengl.cpp
Lines 1495–1497: I don't understand this part. Window contents are rendered and after that mipmaps are generated. I think it should be vice versa.
plugins/scenes/opengl/scene_opengl.cpp
Line 1383: One would need to loop through all surface-entered outputs and check whether the logical size and the mode size have no common divisor. I can't say if that would be worth it or if we should just create mipmaps all the time. After all, they are generated on the GPU, and that is very fast.
This makes sense. But can you explain to me how the dirty() flag actually works? I see that it gets set at some point but I don't see where it gets unset again.
Lines 1495–1497: Why do you think it should be vice versa? Mipmaps are scaled-down copies of a texture such that arbitrary down-scaling can be done quickly and without artifacts. That means the original must have been rendered already so it can be duplicated. Maybe in the end it doesn't make a difference because it's a GL command on the GPU.
Hmm, maybe it's indeed enough to just set GL_LINEAR without generating the mipmaps, since it looks fine here like that. I have to test on other devices.
plugins/scenes/opengl/scene_opengl.cpp
Lines 1495–1497: You need to propagate updates from the base layer up to the maximum layer before mapping the texture. If this code were using mipmaps, you'd render a bit outdated contents.
Let's do a version with just the GL_LINEAR patch, as that's the only part running in your testing anyway.
Replaced by D23986 with only the filter change, without trying to generate mipmaps. Keeping this diff separate for reference.