Right-click on touch
Closed, Invalid (Public)

Description

At the moment, there is no consistent way to right-click when using a touchscreen device with Plasma. Ideally we would have 100% of our user interfaces accessible without needing to right-click, but that's a long way off for Plasma in particular, which currently makes heavy use of context menus (e.g. for widget configuration, app pinning/unpinning, etc.). And it's nice to be able to right-click when using a convertible laptop, which continues to use the typical desktop UX, just with a finger supplementing the mouse.

Let's brainstorm how we can support right-click with touch.

ngraham created this task. Apr 10 2019, 4:13 PM
ngraham edited projects, added Plasma, VDG; removed KWin.
ngraham edited projects, added KWin; removed Plasma.
ngraham edited projects, added Plasma; removed KWin.
ngraham added a subscriber: VDG.
mglb added a subscriber: mglb. Apr 11 2019, 8:15 AM

What do you think about changing the task to be about all touch actions (and what actions we want to have)? Asking overly precise questions will lead to "ok, let's use A for X" and later, in another task, "no, we can't use A for this, X uses it".

I proposed tap-and-hold in another task, but it is also a good solution for the grab-and-move action (window, file icon, dock), already used in Plasma for widgets. It can also be used for selecting things, including multiple things: in order to move something, it is first selected, and the movement starts when a finger moves more than some distance (see window movement in KWin). There should probably be an additional way to move selectable items (e.g. I selected 12 things, and only now decided to move them without deselecting the one I'll hold my finger on), or an element would need to be deselected only when the finger didn't move (so, after releasing the finger).
As for right-click/context menu... Swipe down/up from an element? It would be great to make it visually work like Android's status bar: you're pulling a menu from the swipe's starting point, and unless you move past some threshold (~2-3 cm, or an average thumb width) it hides after releasing the finger, otherwise it fully opens. It won't open accidentally, but you'll probably discover it when swiping randomly. It can conflict with swipes from the screen edge, but probably "only" the Plasma panel would be affected (the KWin titlebar's menu is available under a button). To solve this we would need to add context menu alternatives to the panel first.

abetts added a subscriber: abetts. Apr 11 2019, 1:56 PM

two-finger tap?

GB_2 added a subscriber: GB_2. Apr 11 2019, 2:01 PM

Why don't we do it like Windows: https://youtu.be/08hW7aWvTKk?t=131 ?

two-finger tap?

The problem is that the applets and UI places that were designed for a mouse typically have a small hit area. This task seems to be specifically about backporting touch friendliness to parts designed for a mouse. I'm not sure there'd be room for two fingers in many of those places.

davidedmundson added a comment. Edited Apr 11 2019, 2:20 PM

Writing out GB_2's "do it like windows" in words:

  • press + hold + staying still = shows a context menu
  • press + hold + move = initiates drag

Unfortunately for us, that doesn't map nicely onto the MouseArea::pressAndHold signal.
It does map onto the newer input handlers.

the newer input handlers

Can you clarify what this means?
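
Assuming "the newer input handlers" refers to Qt Quick's pointer handlers (TapHandler, DragHandler, available from the QtQuick 2.12 import), a minimal sketch of that Windows-style mapping could look like the following; the item and menu entries are made up:

```qml
import QtQuick 2.12
import QtQuick.Controls 2.12

Rectangle {
    id: widgetDelegate
    width: 200; height: 48

    // press + hold + staying still = context menu
    TapHandler {
        acceptedDevices: PointerDevice.TouchScreen
        onLongPressed: contextMenu.popup()
    }

    // press + hold + move = drag; once the point moves past the drag
    // threshold the TapHandler cancels, so the long press never fires
    DragHandler {
        target: widgetDelegate
    }

    Menu {
        id: contextMenu
        MenuItem { text: "Configure…" }
        MenuItem { text: "Remove" }
    }
}
```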

mglb added a comment. Apr 11 2019, 4:47 PM
In T10783#181580, @mglb wrote:

As for right click/context menu... Swipe down/up from an element?

I take it back - swipe is used for scrolling

In T10783#181629, @GB_2 wrote:

Why don't we do it like Windows: https://youtu.be/08hW7aWvTKk?t=131 ?

In T10783#181632, @davidedmundson wrote:

Writing out GB_2's "do it like windows" in words:

  • press + hold + staying still = shows a context menu
  • press + hold + move = initiates drag

Isn't it a bit confusing? Describing it more simply:

  • tap, hold = move
  • tap, hold a bit longer = context menu

Use case:
I have to organize those files. [tap and hold first file] Let's move this [context menu appears] one to... ?

I think that without any feedback (vibration or visual), you would need to learn how long to hold in order to trigger movement: not so long that the menu shows, but not so short that scrolling (tap + move) is initiated.

GB_2 added a comment. Apr 11 2019, 4:58 PM
In T10783#181634, @mglb wrote:
In T10783#181580, @mglb wrote:

As for right click/context menu... Swipe down/up from an element?

I take it back - swipe is used for scrolling

In T10783#181629, @GB_2 wrote:

Why don't we do it like Windows: https://youtu.be/08hW7aWvTKk?t=131 ?

In T10783#181632, @davidedmundson wrote:

Writing out GB_2's "do it like windows" in words:

  • press + hold + staying still = shows a context menu
  • press + hold + move = initiates drag

Isn't it a bit confusing? Describing it more simply:

  • tap, hold = move
  • tap, hold a bit longer = context menu

    Use case: I have to organize those files. [tap and hold first file] Let's move this [context menu appears] one to... ?

    I think that without any feedback (vibration or visual), you would need to learn how long to hold in order to trigger movement: not so long that the menu shows, but not so short that scrolling (tap + move) is initiated.

If you watch the video, you can see that if you click and hold, it shows a visual indicator (a square). If you release without moving the item, it opens the context menu. If you move the item before releasing, it moves the item. Simple.
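
In Qt Quick terms, that release-to-open behaviour with a visual hold indicator could be approximated with a TapHandler whose timeHeld property drives the indicator (here the menu opens once the hold threshold is reached rather than on release); a rough sketch with made-up sizes and menu entries:

```qml
import QtQuick 2.12
import QtQuick.Controls 2.12

Item {
    id: pinnedIcon
    width: 64; height: 64

    TapHandler {
        id: holdHandler
        acceptedDevices: PointerDevice.TouchScreen
        onLongPressed: contextMenu.popup()
    }

    // Hold feedback, roughly like the square shown in the video: fades in
    // as the hold time approaches the long-press threshold (~800 ms).
    Rectangle {
        anchors.fill: parent
        color: "transparent"
        border.width: 2
        visible: holdHandler.pressed
        opacity: Math.min(holdHandler.timeHeld / 0.8, 1)
    }

    Menu {
        id: contextMenu
        MenuItem { text: "Unpin" }
    }
}
```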

GB_2 added a comment. Apr 11 2019, 4:58 PM

If it didn't work well, Windows wouldn't have it.

On trackpads we already have two point touch. If we want to introduce a different/special case for touch screens, we should also discuss whether it should be a special case or whether, for example, long press should be right click on trackpads as well.

russh added a subscriber: russh. Jun 17 2019, 9:27 AM

On trackpads we already have two point touch. If we want to introduce a different/special case for touch screens, we should also discuss whether it should be a special case or whether, for example, long press should be right click on trackpads as well.

Trackpads are a different use-case, and users may spend time with their fingers resting on them and will not expect a right-click to be generated. I wouldn't expect the same conventions on a trackpad and a touchscreen.

On trackpads we already have two point touch. If we want to introduce a different/special case for touch screens, we should also discuss whether it should be a special case or whether, for example, long press should be right click on trackpads as well.

Trackpads are a different use-case, and users may spend time with their fingers resting on them and will not expect a right-click to be generated. I wouldn't expect the same conventions on a trackpad and a touchscreen.

I'm trying to make an argument based on the habits we create for our users. Touchscreens and trackpads are indeed usually different (not always), but my point is that some users have come to expect a two-finger press on a pressure-sensitive surface to mean right-click.

Unless I'm misunderstanding, I don't think it will have the problem of random misclicks while resting, because to long-press you would already have been short-pressing, and with that also generating clicks. The way I usually rest my fingers, I'm not pressing hard enough to register a click.

I strongly agree with the idea of consistency between touchpad and touchscreen when possible. An example of that could also be a click with two fingers, which would open the context menu with both input methods.

The problem with two-finger tap/click on a touchscreen is that it's not predictable which part of the screen will get clicked. This isn't a problem with a touchpad because there's an always-visible cursor to show you what will get clicked, but on a touchscreen this isn't the case. I think we need to consider touchscreens as separate input devices with their own quirks. People at this point are quite familiar with how touchscreens work and what gestures they generally recognize (e.g. tap, single-finger scroll, and pinch), probably more so than laptop touchpads.

The question is how you do a two-finger tap.
If both fingers tap at the same time, this will cause problems with small targets.
Or you make the two-finger tap over a time period (tap the first finger, then tap a second finger): the first finger is the target and the second finger is the signal for right-click.

I agree that a two-finger tap is not optimal in many cases; what I was thinking of was rather using both a two-finger tap and press-and-hold on both the touchscreen and the touchpad, thus providing consistency between them.

It doesn't make sense to be consistent between things that aren't the same, though. The interaction patterns differ between touchpad and touchscreen. It's easy to think they're similar enough to enforce consistency because both involve fingers, but the differences in how the fingers are used are big enough that I think they need to be considered separate input devices with their own optimal interaction methods.

Why not both? Removing consistency for no reason doesn't make any sense to me.

Because there's nothing to be consistent with. You wouldn't put a doorknob on your window in the name of consistency on the grounds that they're both portals to the outside. That wouldn't make any sense; the doorknob is optimized for a door, and a different type of control is optimized for a window. It's the same with a touchscreen vs touchpad.

I think the two inputs are similar enough to achieve pretty good consistency as a whole, with only a few exceptions. Right-click is only one possible gesture for touchscreen/touchpad input, so let me approach this in a more holistic way, using an example set of possible gestures (a mix of Windows', iPadOS's, and my own):

| gesture | touchpad action | touchscreen action | consistent? |
|---|---|---|---|
| one-finger swipe | move mouse | scroll | no |
| one-finger tap | click | click | yes |
| one-finger tap and hold | context menu | context menu | yes |
| two-finger swipe | scroll | move window | no |
| two-finger tap | context menu | context menu | yes |
| two-finger pinch | zoom | zoom | yes |
| three-finger swipe | switch virtual desktop | switch virtual desktop | yes |
| three-finger tap | application launcher | application launcher | yes |
| five-finger pinch | close app | close app | yes |

You might disagree with the gestures I chose, but you can see my general point: only the one-finger and two-finger swipes would be inconsistent (for, yes, they really are different input methods), but they get to be consistent in most other gestures. And this would help the user: if pinching activates zoom on a touchpad, the user will expect the same on a touchscreen as well. The only inconsistent gestures are so common that users already know about them, so that's not a problem.

antoni added a subscriber: antoni. Jun 21 2019, 11:42 AM

What about a one-finger double tap and a one-finger triple tap?

What about a one-finger double tap and a one-finger triple tap?

That would mean that every time there's a tap, the system would have to wait some milliseconds to know whether it's a tap or a double tap, making the system feel less responsive.

sbergeron added a subscriber: sbergeron. Edited Jun 21 2019, 12:32 PM

A touchpad is:

  1. smaller than a touchscreen
  2. not visually mapped to the content (no hover semantics, no guide markers or content to help orient a touch relative to the viewport)

What this ends up causing is from the first point, we can't possibly map an absolute (x, y) on the touchpad to an absolute (x, y) on a viewport because the accuracy is too low (imagine having to tap the right part of a touchpad that corresponds to where a link is on the screen), and from the second, even if we did, we don't have a way of immediately helping the user orient the touchpad compared to the screen, as drawing tablets can do with a cursor appearing when you hover a pen. This means the most accurate way we can use a touchpad is to have it do relative movements and actions on a stateful on-screen object, a cursor. This changes the touchscreen paradigm of "user action -> on screen action" to "user action -> cursor manipulation -> on screen action". A cursor means we explicitly know where an action will be directed at all times. This actually opens up some other actions that wouldn't make sense on a touchscreen.

Take two finger tap, for instance. For a touchpad with relative movements, we know exactly where the user wants the action to target (the tip of the cursor), so any action can include more touches. On a touchscreen, a two-finger tap is ambiguous as to where the user wants the action to occur. Did they mean for a right click below the first or second finger?

Trying to keep the two input methods consistent I think needs to take into account the fundamentally different way we have to interact with each device to have a precise, non-frustrating, input experience. As counter intuitive as it may sound, I don't think aiming for consistency between them as a primary goal is necessarily going to make the user experience more intuitive.

(P.S. sorry for the wall of text)

Trying to keep the two input methods consistent I think needs to take into account the fundamentally different way we have to interact with each device to have a precise, non-frustrating, input experience.

I agree that it should be taken into account, and the interaction should never feel frustrating. But I think we can easily achieve both without losing anything. Looking at the table I wrote above, which is very consistent, do you think it would create frustration for the user?

Trying to keep the two input methods consistent I think needs to take into account the fundamentally different way we have to interact with each device to have a precise, non-frustrating, input experience.

I agree that it should be taken into account, and the interaction should never feel frustrating. But I think we can easily achieve both without losing anything. Looking at the table I wrote above, which is very consistent, do you think it would create frustration for the user?

I agree for the most part; I don't think the 2- and 3-finger taps would work particularly well, though. If we consider that the actions that can currently occur below a cursor are left, middle, and right click, then how do we engage a middle click with a touchpad, versus how do we do it with a touchscreen? And if we use a two-finger tap for the context menu on a touchscreen, how do we know what point on the viewport should be the target of the action?

A touchpad is:

  1. smaller than a touchscreen
  2. not visually mapped to the content (no hover semantics, no guide markers or content to help orient a touch relative to the viewport)

    What this ends up causing is from the first point, we can't possibly map an absolute (x, y) on the touchpad to an absolute (x, y) on a viewport because the accuracy is too low (imagine having to tap the right part of a touchpad that corresponds to where a link is on the screen),

If we draw a rectangle around the pointer this is somewhat possible, but touchscreens have this same limitation, of course to different degrees - try tapping links on a website without zooming in.

and from the second, even if we did, we don't have a way of immediately helping the user orient the touchpad compared to the screen, as drawing tablets can do with a cursor appearing when you hover a pen. This means the most accurate way we can use a touchpad is to have it do relative movements and actions on a stateful on-screen object, a cursor. This changes the touchscreen paradigm of "user action -> on screen action" to "user action -> cursor manipulation -> on screen action". A cursor means we explicitly know where an action will be directed at all times. This actually opens up some other actions that wouldn't make sense on a touchscreen.

It's also an approximation with touchscreens; we can calculate the probability of which element/location between the two points was meant, with a bias towards the one that was tapped first. And there will be mistakes, as there always are. I'm not seeing the big problem. And I want to note that touchscreen interfaces should be usable by users of all experience levels, for example those with impaired vision; maybe we can take a step back and think about this in a more holistic sense and not in the sense of an 'ideal rational able-bodied user who makes no mistakes'. Let's not be so rigid with our designs.

Take two finger tap, for instance. For a touchpad with relative movements, we know exactly where the user wants the action to target (the tip of the cursor), so any action can include more touches. On a touchscreen, a two-finger tap is ambiguous as to where the user wants the action to occur. Did they mean for a right click below the first or second finger?

It's always about approximations; there's nothing wrong with that. And users also make mistakes, misclick, etc.; it's our task to design around that, for example by making actions reversible or non-destructive.

Trying to keep the two input methods consistent I think needs to take into account the fundamentally different way we have to interact with each device to have a precise, non-frustrating, input experience. As counter intuitive as it may sound, I don't think aiming for consistency between them as a primary goal is necessarily going to make the user experience more intuitive.

(P.S. sorry for the wall of text)

I agree with you that it shouldn't be the primary goal and that we should be flexible.

I have a more concrete question for those who are against two-point right-click on touchscreens: what should a two-point touch do on touchscreens?

On most touch devices I'm familiar with, it either does nothing or is the start of a pinch gesture. This seems sensible enough to me.

The screen area covered by two fingers is too big to use it for anything that may interact with any given control, many of which are smaller than the area of the fingers. Imagine if your cursor on a desktop PC was like a 150×100 rectangle lol

lavender added a comment. Edited Jun 26 2019, 12:33 PM

On most touch devices I'm familiar with, it either does nothing or is the start of a pinch gesture. This seems sensible enough to me.

The screen area covered by two fingers is too big to use it for anything that may interact with any given control, many of which are smaller than the area of the fingers. Imagine if your cursor on a desktop PC was like a 150×100 rectangle lol

That's basically the size of the finger. It's all about approximations. Touch interfaces are bigger for a reason.

Well, it seems like we're not going to convince one another, so I'll bow out and let others have a turn in the discussion.

What is the argument against it?

Re-read my previous messages on the subject if you haven't found my argument against your proposal yet. :)

The problem with two-finger tap/click on a touchscreen is that it's not predictable which part of the screen will get clicked. This isn't a problem with a touchpad because there's an always-visible cursor to show you what will get clicked, but on a touchscreen this isn't the case. I think we need to consider touchscreens as separate input devices with their own quirks. People at this point are quite familiar with how touchscreens work and what gestures they generally recognize (e.g. tap, single-finger scroll, and pinch), probably more so than laptop touchpads.

This was the argument I found and was responding to. I think talking about predictability in this aspect is too vague because it's always an approximation.

Consider this experiment: someone hooks up their touchscreen device to a VR device; if we have consistency in gestures, it would mean that a touchscreen can 'gracefully degrade' into a touchpad while minimizing the number of gestures the user needs to relearn.

russh added a comment. Jun 26 2019, 3:56 PM

Surely we should look at what successful touchscreen implementations look like on other desktop environments and build on their experience, rather than reinventing a touchscreen UX. We seem to be focusing on right-click/invoking a context menu;

Is a context menu invoked by a double-tap, a press-and-hold, or a left-press & right-tap? Which of these is already implemented in other DEs / other platforms, and more importantly, what's implementable in Plasma Desktop/Qt? Which gives the user greater accuracy? Otherwise, if all these gestures are possible, let the user choose via some customisation.

Just something I want to add so that it is documented: in KWin we quite regularly get the request to emulate the right mouse click. This is not possible (and/or advisable). Obviously such a feature would belong in the display server, meaning on X11 it must be in the X server (no control from our side) and on Wayland it would be KWin.

So let's explain why this wouldn't work on Wayland. On Wayland we have dedicated interfaces for touch and for pointer (mouse, touchpad) input. Touch does not influence pointer movement and does not emulate left clicks as was the case on X11. If an application announced that it doesn't support touch, we could emulate mouse events from touch events. But in practice this is not the case: we mostly interact with Qt, Gtk and XWayland, which all announce touch support. For the compositor it is impossible to know whether the application truly supports touch or not; there is just no protocol to announce "hey, I can do touch" or "hey, I cannot do touch". So KWin would need to handle all applications the same way, and if KWin started to emulate right-click events it would break applications which already support touch.

For example, at $work we use KWin as the compositor for a GUI application which was designed for touch screens. We don't have a context menu, but we handle the long-press event. The moment KWin started to emulate right-click events, our application would break. And it would not be the only one; every application starting to make proper use of touch would break. It is not a future-proof approach to add such support to KWin.

For the record, introducing a new protocol would not help either. Legacy applications would not gain support for it, and applications using touch would be broken if it became an opt-out approach, which is kind of a punch in the face for those who did the work.

Given that, the only sensible approach is to fix all applications. This would also work on X11 and not only on Wayland.
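
For QtQuick-based UIs, the per-application fix can be as small as routing a touch long-press to the same context menu that mouse right-click already opens. A sketch, with invented item and menu names:

```qml
import QtQuick 2.12
import QtQuick.Controls 2.12

Item {
    id: cell
    width: 48; height: 48

    function openContextMenu() { contextMenu.popup() }

    // Mouse path: a right-click opens the menu, as before.
    TapHandler {
        acceptedButtons: Qt.RightButton
        onTapped: cell.openContextMenu()
    }

    // Touch path: a long press opens the same menu, so no right-click
    // emulation is needed anywhere below the application.
    TapHandler {
        acceptedDevices: PointerDevice.TouchScreen
        onLongPressed: cell.openContextMenu()
    }

    Menu {
        id: contextMenu
        MenuItem { text: "Flag" }
        MenuItem { text: "Clear" }
    }
}
```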

I made a patch which adds the ability to emulate right-click for Qt widgets which do not accept touch events, but the Qt devs say "such a feature should live in the platform itself (not in Qt), e.g. at the driver level or similar". So we have a strange situation where the libinput (driver) devs have no plans to add right-click emulation to libinput because it is the wrong layer of the stack, and the KWin devs think so too, but the Qt devs believe that it should be done somewhere on that layer.

russh added a comment. Aug 21 2019, 1:53 PM

Can I request that some of the big cheeses (@ngraham, @graesslin, etc.) have a discussion with the Qt devs on this? It needs to be agreed where this belongs. I'd agree with @AndreyYashkin and put it in Qt... but no one cares what I think!

Copying a comment I made on the Qt report after explaining the problems of doing it in the platform.

Brainstorming a 3rd option (which is at an even higher level than the proposed patch):
Maybe we could change QWidgetWindow to emit ContextMenuEvents on touch press-and-hold?
Then there's no fake input anywhere, and it probably covers 90% of the cases.

That wouldn't magically cover any QtQuick cases, but context menus there are limited and adding a pressAndHold handler is relatively easy. Thoughts?
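
For the QtQuick side, a sketch of the "add a pressAndHold handler" route in existing MouseArea-based code, with placeholder menu contents:

```qml
import QtQuick 2.12
import QtQuick.Controls 2.12

MouseArea {
    anchors.fill: parent
    acceptedButtons: Qt.LeftButton | Qt.RightButton

    // Existing mouse path: right-click opens the menu.
    onClicked: if (mouse.button === Qt.RightButton) contextMenu.popup()

    // Touch path: a touch is delivered to MouseArea as a synthesized
    // left-button press, so a long press arrives here as pressAndHold.
    onPressAndHold: contextMenu.popup()

    Menu {
        id: contextMenu
        MenuItem { text: "Configure…" }
    }
}
```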

AndreyYashkin added a comment. Edited Aug 21 2019, 6:04 PM

No, we need actual right-click events. They are too important for many apps. For instance, how will we mark the cells in KMines without a right-click event? I backported my patch to 5.12 in order to replace the native Qt version under Kubuntu, and now I can successfully mark and unmark the cells.

By the way, it seems that Plasma and its widgets are not affected by my patch.

IIRC the objection was that making press-and-hold always mean right-click at the toolkit level could interfere with apps that want to re-implement the press-and-hold behavior for some other behavior. For example a lot of Android apps use press-and-hold to mean "make this list/view editable".

Personally I think a consistent behavior is more important, but I understand that it is a concern.

IIRC the objection was that making press-and-hold always mean right-click at the toolkit level could interfere with apps that want to re-implement the press-and-hold behavior for some other behavior. For example a lot of Android apps use press-and-hold to mean "make this list/view editable".

Personally I think a consistent behavior is more important, but I understand that it is a concern.

Qt synthesizes fake left-button press/move/release mouse events for widgets that do not accept (= do not handle) touch events. My patch allows it to synthesize right-button events too, but widgets which process touch events themselves will not feel any difference, because they don't get fake mouse events.

To be clear, I don't have a problem with your patch - and happily support it.

However, Tor is right that fake events have caused and do cause headaches. If it doesn't get in, then we need to think about what simpler alternatives might work.


but widgets which process touch events themselves will not feel any difference, because they don't get fake mouse events.

MouseArea.pressAndHold runs based on the synthesised left press event; I don't see anything in your code which would prevent a conflict there.

Same for any widget code that uses a current QTapAndHoldGesture


For instance, how will we mark the cells in KMines

We would have to fix kmines by fixing kmines.

MouseArea.pressAndHold runs based on the synthesised left press event; I don't see anything in your code which would prevent a conflict there.
Same for any widget code that uses a current QTapAndHoldGesture

There should be no conflict. I don't send left-click events until I am sure that it is not the beginning of a right click (while in FakeMouseState::CanSimulateRightButton). I copied the behavior of right-click synthesis under Windows, which I find to be very successful, and MouseArea.pressAndHold shall work exactly as under Windows. If there are no mistakes in my code, then the only difference will be the source of the right-click events (Qt::MouseEventSynthesizedByQt instead of Qt::MouseEventSynthesizedBySystem). I will check later to make sure.

ikidd added a subscriber: ikidd. Oct 27 2019, 6:44 PM

It seems that the Qt devs have finally shown interest in this problem. See QTBUG-84179.

Abroas added a subscriber: Abroas.
ngraham closed this task as Invalid. May 16 2022, 7:36 PM

Not super relevant as we just need to do the hard work of manually supporting right-click for apps that still need it and don't have a touch-optimized UI.