In From a living document to input handling, I wrote about how we handle touch and on-screen keyboard events in the LibreOffice Android app. A next step in this TDF-funded project is to provide more UI elements that are specific to touch devices: selection handling is one of them.
Here are the problems we had to solve to get this working:
Long push is not an event that LibreOffice core would recognize on its own.
If you use the mouse and have a selection in Writer, it’s only possible to extend the end of it. If you use the keyboard, then it’s possible to shrink the end of it, but there is still no way to adjust the start. On touch devices, it’s natural to have selection handles at both the start and the end of the selection, and to be able to adjust both of them, in both directions.
Additionally, when the user drags a selection handle, the expected behavior is that the position of the selection and of the handle are never the same: the handle is placed below the selection position, and when you drag the handle, the new selection position is above the handle… ;-)
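That asymmetry means the drag position has to be translated back up before it is handed to core. A toy sketch of the coordinate adjustment (the handle height constant and the function names are made up for illustration, this is not the real app code):

```cpp
#include <cassert>

struct Point { long x; long y; };

// Illustrative handle height in twips, not a real LibreOffice constant.
constexpr long HANDLE_HEIGHT = 240;

// The handle is drawn below the selection position it controls.
Point handlePositionFor(Point selectionPos)
{
    return { selectionPos.x, selectionPos.y + HANDLE_HEIGHT };
}

// When the user drags the handle, the new logical selection position is
// above the finger: undo the shift before sending the position to core.
Point selectionPositionFor(Point draggedHandlePos)
{
    return { draggedHandlePos.x, draggedHandlePos.y - HANDLE_HEIGHT };
}
```

The two functions are inverses, so dragging a handle without moving it leaves the selection where it was.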
Long push is reasonable to map to a double mouse click, as in both cases, e.g. in Writer, the user expects a select-word action. But for the adjustment of selections, we really had to define a new API (lok::Document::setTextSelection()) to allow setting the start or end of the selection to a new logical position (in document coordinates, not paragraph / character indexes).
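The shape of the call is simple: the client says which end of the selection it wants to move (or that it wants to reset the selection) and passes document coordinates in twips. Here is a minimal self-contained sketch of that contract; the enum and the document class below are stand-ins I made up to illustrate the idea, not the real LibreOfficeKit headers:

```cpp
#include <cassert>

// Stand-ins for the selection-type values (illustrative only).
enum SetTextSelectionType
{
    SETTEXTSELECTION_START, // move the start of the selection
    SETTEXTSELECTION_END,   // move the end of the selection
    SETTEXTSELECTION_RESET  // drop the selection entirely
};

struct Point { long x; long y; };

// Toy document model: a selection is a pair of logical positions in twips.
struct MockDocument
{
    Point selStart{0, 0};
    Point selEnd{0, 0};
    bool hasSelection = false;

    // Sketch of what a setTextSelection(type, x, y) call does: the
    // coordinates are document coordinates, not character indexes.
    void setTextSelection(SetTextSelectionType type, long x, long y)
    {
        switch (type)
        {
            case SETTEXTSELECTION_START:
                selStart = { x, y };
                hasSelection = true;
                break;
            case SETTEXTSELECTION_END:
                selEnd = { x, y };
                hasSelection = true;
                break;
            case SETTEXTSELECTION_RESET:
                hasSelection = false;
                break;
        }
    }
};
```

Because start and end are addressed independently, the app can let the user drag either handle in either direction, which the old mouse/keyboard paths could not express.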
If you are interested in how this looks, here is a demo:
Another direction we’re working towards is to have the same features in the other applications as well: Impress and Calc. Perhaps not so surprisingly, we hit problems in these applications similar to the ones we had to solve in Writer. The typical problems are:
LibreOffice assumes a given portion of the document is visible (visual area), but the Android view is independent from what LO thinks is visible. Example: LO thinks a table is not visible, so it doesn’t send the selection events for the text inside the table, even if it’s in fact visible on the Android app.
Instead of calling Invalidate() and waiting for a timer to invoke Paint(), some places perform a direct Paint(), so the tile invalidation notification triggered by Invalidate() is missing → missing content on Android.
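The fix pattern is to route repaints through the invalidation path, so that tiled-rendering clients are notified too. A toy sketch of why the shortcut breaks the Android app (all names here are made up, this is not LibreOffice code):

```cpp
#include <cassert>
#include <vector>

struct Rect { long x, y, w, h; };

struct ToyWidget
{
    std::vector<Rect> notifiedTiles; // what a tiled-rendering client receives
    int paintCount = 0;

    // The correct path: mark the area dirty and notify tiled clients.
    // (In reality the Paint() happens later, driven by a timer.)
    void Invalidate(const Rect& area)
    {
        notifiedTiles.push_back(area); // tile invalidation notification
        Paint(area);
    }

    // The buggy shortcut: painting directly skips the notification, so
    // the Android app never re-requests the affected tiles.
    void DirectPaint(const Rect& area) { Paint(area); }

private:
    void Paint(const Rect&) { ++paintCount; }
};
```

On the desktop both paths end in the same pixels on screen, which is why the shortcut went unnoticed; only a client that renders from tiles sees the difference.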
We render each tile into a VirtualDevice, a kind of off-screen rendering, and at some places LO assumed that certain content, such as the text of the actively edited shape, is not interesting there, because it would not be interesting "during printing".
LO’s mouse events are in pixels, which are then translated to mm100 (hundredths of a millimeter) or twips in core. So counting in pixels is the common language, while the Android app counts everything in twips and doesn’t want to care about which content would be visible at which pixel on the screen if LO were running in desktop mode. So we had to make sure that we can pass in event coordinates in twips and get invalidation coordinates in twips, even if previously it was a mix of mm100, twips and pixels.
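For reference, the units relate through the inch: 1 inch is 1440 twips and 2540 mm100. Pixels additionally depend on the screen density, which is exactly what the Android app does not want to reason about; the 96 DPI below is only a typical desktop assumption, used here for illustration:

```cpp
#include <cassert>

// 1 inch = 1440 twips = 2540 mm100 (hundredths of a millimeter).
constexpr long TWIPS_PER_INCH = 1440;
constexpr long MM100_PER_INCH = 2540;

// twips <-> mm100: exact for common values, integer division otherwise.
constexpr long twipsToMm100(long twips)
{
    return twips * MM100_PER_INCH / TWIPS_PER_INCH;
}
constexpr long mm100ToTwips(long mm100)
{
    return mm100 * TWIPS_PER_INCH / MM100_PER_INCH;
}

// pixels -> twips needs a DPI; 96 is an assumed desktop-style density.
constexpr long pixelsToTwips(long px, long dpi = 96)
{
    return px * TWIPS_PER_INCH / dpi;
}
```

The DPI-free twip is what makes it a good common currency between the app and core: the same coordinate means the same document position on every screen.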
Here is how Impress looks, with working tile invalidation, touch and keyboard handling:
Calc is lagging a bit behind, but it also has working tile invalidation and keyboard handling: