WIP: Add screen recorder interface
Abandoned · Public

Authored by davidedmundson on Jan 31 2017, 1:12 PM.

Details

Reviewers
None
Group Reviewers
Plasma
Summary

Works as follows:

  • Connect with a DBus interface similar to Screenshot.
  • Instead of an FD to a save location, clients pass a socket to their own very, very minimal wayland server (with a compositor and shmpool).
  • kwin then connects to that as a client with a wl_surface of the rendered area, with each frame as a buffer.

This patch contains the relevant effect and a small test app. A rough sketch of the client-side handshake follows below.
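
Hypothetical sketch of that handshake from the client's side: the Qt/D-Bus calls are real, but the service path, interface and method names are placeholders, not necessarily what the patch uses.

    // Client-side handshake sketch; the D-Bus names below are placeholders.
    #include <QCoreApplication>
    #include <QDBusInterface>
    #include <QDBusUnixFileDescriptor>
    #include <QVariant>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        QCoreApplication app(argc, argv);

        // One end stays with us: we run the very minimal wayland "server"
        // on it (a wl_display with compositor and shm globals). The other
        // end goes to kwin, which connects as an ordinary wayland client
        // and attaches each rendered frame to a wl_surface as a buffer.
        int fds[2];
        socketpair(AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0, fds);

        QDBusInterface iface(QStringLiteral("org.kde.KWin"),
                             QStringLiteral("/Screencast"),              // placeholder
                             QStringLiteral("org.kde.kwin.Screencast")); // placeholder
        iface.call(QStringLiteral("startRecording"),                     // placeholder
                   QVariant::fromValue(QDBusUnixFileDescriptor(fds[1])));

        return app.exec();
    }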

Design Rationale:

  • We need the recorder to have a shared memory pool, and only it knows when it's finished rendering into it. The recorder being the "compositor" provides that.
  • Kwin acting as the "client" could easily cover a VNC-like case in future, albeit with some shuffling.
  • Using an existing wayland iface means this could become a standard.

Obviously it's not entirely finished, see TODOs in comments, and we probably want to include the region selection like Screenshot has.
want to include the region selection like Screenshot.

I was writing something to dump frames from kwin anyway; it was a good excuse to work on something useful.

Test Plan

Ran included test app
Dragged window about

Diff Detail

Repository
R108 KWin
Branch
screencast
Lint
No Linters Available
Unit
No Unit Test Coverage
davidedmundson retitled this revision from to WIP: Add screen recorder interface.
davidedmundson updated this object.
davidedmundson edited the test plan for this revision.
davidedmundson added a reviewer: Plasma.
Restricted Application added a project: KWin. Jan 31 2017, 1:12 PM
Restricted Application added subscribers: KWin, kwin, plasma-devel.
romangg added a subscriber: romangg. Edited Jan 31 2017, 2:35 PM

Could this also be made possible without using the effects pipeline (and GL)? In T4426 we want to explicitly bypass it for fullscreen apps. Since one of the more prominent examples of such apps is gaming, and Twitch is a thing, we could only do one or the other whenever someone wants to stream/record their game.

Sorry David, but that architecture is a no-go from my point of view. It's absolutely essential that screen recording does not slow down the compositor, and reading the texture back does exactly that, while it is not needed at all. We have the buffer available in the drm platform; we just need to forward it.

The approach to go with is D1231.

davidedmundson abandoned this revision. Jan 31 2017, 5:59 PM

Not entirely. I had planned (except that I needed this to run on X for the thing I was doing locally) to pass EGL window surfaces instead. That would still involve a copy, but one between textures on the card, not a read back into main memory, which is the slow part. Sharing a GBM won't work on all platforms, and you'll never be able to do partial screen capture on the kwin side.
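
Something along these lines, i.e. a copy that never leaves the card (a sketch; the FBO names are made up for illustration):

    // GPU-side copy between two FBO-attached textures; the pixels
    // never hit system memory
    glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, sharedSurfaceFbo);
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);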

But if something's in progress, cool.

> Sharing a GBM won't work on all platforms

Yes, it won't work on NVIDIA. Just like Wayland in general won't work on NVIDIA. For the nested platforms: fine with me if recording doesn't work there. Fbdev is probably something to consider dropping, and hwcomposer is also fd-based, so we can pass that around as well.

> and you'll never be able to do partial screen capture on the kwin side.

Aye, totally fine on my side. I'd rather "capture" too much data and let another process do the heavy work of cutting it down.

The patch set that is in the works currently lacks a solution for multiple screens. That's kind of the last blocker to integrating it. Unfortunately we are also lacking ideas on how to make it work with multiple screens ;-)

fredrik added a subscriber: fredrik. Feb 8 2017, 3:11 PM

I know this revision has been abandoned, but I wanted to comment on a few things for future reference.

effects/screencast/screencast.cpp
Line 105:

I strongly suggest that you use the damage information from kwin and only download the parts of the framebuffer that have actually changed. This can make a massive difference in performance.

You can keep a screen sized QImage around and keep it in sync by updating areas as they change.
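
A rough sketch of what I mean, assuming the effect can get the frame damage as a QRegion (ScreencastEffect, updateImage() and m_image are made-up names, not from the patch):

    #include <QImage>
    #include <QRegion>
    #include <cstring>
    #include <epoxy/gl.h>

    // m_image is a screen-sized QImage kept alive across frames; only
    // the damaged rectangles are read back and copied into it.
    void ScreencastEffect::updateImage(const QRegion &damage)
    {
        const int screenHeight = m_image.height();
        for (const QRect &r : damage.rects()) {
            QImage scratch(r.size(), QImage::Format_RGBA8888);
            // GL's origin is bottom-left, QImage's is top-left
            glReadPixels(r.x(), screenHeight - r.y() - r.height(),
                         r.width(), r.height(),
                         GL_RGBA, GL_UNSIGNED_BYTE, scratch.bits());
            // copy into the persistent image, flipping the rows back
            for (int row = 0; row < r.height(); ++row) {
                std::memcpy(m_image.scanLine(r.y() + row) + r.x() * 4,
                            scratch.constScanLine(r.height() - 1 - row),
                            size_t(r.width()) * 4);
            }
        }
    }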

Line 107:

Why do you use an intermediate texture instead of reading directly from the framebuffer in the desktop GL case?

You create the texture and blit the framebuffer into it in the GLES case, even though you don't use it.

Line 109:

There is no need to use a RenderTarget here. Use glCopyTexSubImage2D() instead, which copies from the framebuffer to the currently bound texture.
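
That is, roughly (sketch, made-up names):

    // copies from the currently bound read framebuffer into the
    // currently bound texture; no render target needed
    glBindTexture(GL_TEXTURE_2D, texture);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,
                        0, 0,          // destination offset in the texture
                        0, 0,          // lower-left corner of the source rect
                        width, height);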

Line 114:

Don't read back data directly to client memory. Instead create a GL_PIXEL_PACK_BUFFER and download the image into it. That way glReadPixels()/glGetTexImage() schedules the copy, but doesn't block and wait for it to finish. Wait at least one frame before you access the contents of the buffer.
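
Sketched out (made-up names), the pattern looks like this:

    // schedule an asynchronous readback into a pixel pack buffer
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4,
                 nullptr, GL_STREAM_READ);
    // with a PBO bound, the data argument is an offset into the buffer;
    // this call returns without blocking
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

    // ...at least one frame later:
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    const void *data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                        width * height * 4, GL_MAP_READ_BIT);
    // ...consume 'data', then:
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);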

Line 116:

Never read back data as GL_RGB. Three component formats are not supported in hardware, so this involves at least a partial software fallback.

The only format/type combination a GLES implementation is required to support is also GL_RGBA/GL_UNSIGNED_BYTE.

I also note that you immediately convert the image to a four-component format below.

Line 118:

If you need the image to be in QImage::Format_ARGB32, you should read back the data as GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV. This is the GL equivalent of QImage::Format_ARGB32.
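
For example (sketch):

    QImage img(width, height, QImage::Format_ARGB32);
    glReadPixels(0, 0, width, height,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, img.bits());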

Line 121:

Use GL_MESA_pack_invert and GL_ANGLE_pack_reverse_row_order so the image is downloaded in the correct orientation.
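
Something like this, assuming a helper along the lines of kwin's hasGLExtension(); the enum values come straight from the two extension specs:

    #ifndef GL_PACK_INVERT_MESA
    #define GL_PACK_INVERT_MESA 0x8758
    #endif
    #ifndef GL_PACK_REVERSE_ROW_ORDER_ANGLE
    #define GL_PACK_REVERSE_ROW_ORDER_ANGLE 0x93A4
    #endif

    if (hasGLExtension(QByteArrayLiteral("GL_MESA_pack_invert"))) {
        // desktop GL on Mesa
        glPixelStorei(GL_PACK_INVERT_MESA, GL_TRUE);
    } else if (hasGLExtension(QByteArrayLiteral("GL_ANGLE_pack_reverse_row_order"))) {
        // GLES
        glPixelStorei(GL_PACK_REVERSE_ROW_ORDER_ANGLE, GL_TRUE);
    }
    // ...glReadPixels()/glGetTexImage()...
    // remember to reset the pack state afterwards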