This has gotten some more attention:
Sep 15 2022
Feb 11 2022
Mar 19 2021
Nov 27 2020
May 22 2020
We have read-only memory-hole support, and we decrypt messages on sync to support local search.
Not really a problem in practice, and a solution would be search.
We're using flatpak.
Also, support configuring STARTTLS in the UI (necessary for some services).
May 5 2020
@mbohlender I've taken the liberty of adding you here. I appreciate you're pretty busy.
Apr 4 2020
Initial support for syntax highlighting based on Sonnet is now available.
I'm closing this due to inactivity, feel free to reopen if it becomes relevant again.
Feb 22 2020
The reason was that we had a concurrent read-only transaction, so we ended up accumulating a lot of free pages. Fixed in 0dc8aa249d063a3d6eaa248950c57ed5a1709524
Jan 13 2020
Dec 20 2019
Thanks that did the trick.
Dec 19 2019
OK, I have reverted the commit in master now. Sink tests pass again.
To be honest I think it makes most sense to revert back to default constructing T for the time being.
Dec 18 2019
There's no easy way to fix this without reverting to default-constructing T in Future<T>::Future().
It's not just the lifetime; it's a more deep-rooted issue with error handling.
Dec 17 2019
In T12315#214103, @dvratil wrote: Generally, I'm wondering if I should deprecate this API and introduce a promise-future based API, where, instead of being passed a future from KAsync, the continuation would construct a Promise object internally and return a Future that KAsync would wait for... Internally the continuation would own the Promise; it's closer to the common promise-future pattern and solves the lifetime issue, since the Promise is owned by the continuation, rather than by KAsync (which only holds the Future).
In T12315#214101, @dvratil wrote: Hmm, I believe the bug might be in the Sink code. Here specifically, it's ResourceControl::flush():
The function registers two callbacks, both operating on the future reference. When the resource crashes (happens to me, I don't know if it's part of the test), then sendFlushCommand fails, so future.setError is called (line 111). However, around the same time a notification is received with information about the crashed resource, invoking the lambda passed to the registerHandler function, which then also calls future.setError() (line 98). Depending on which happens first, the future is finished at that point and completes the execution, which may delete the Future object, leaving the other lambda with a dangling reference...
Generally, I'm wondering if I should deprecate this API and introduce a promise-future based API, where, instead of being passed a future from KAsync, the continuation would construct a Promise object internally and return a Future that KAsync would wait for... Internally the continuation would own the Promise; it's closer to the common promise-future pattern and solves the lifetime issue, since the Promise is owned by the continuation, rather than by KAsync (which only holds the Future).
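The proposed ownership could look roughly like this (a hypothetical sketch with made-up names, not the actual KAsync API):

```cpp
#include <memory>
#include <utility>

// Hypothetical promise-future pair; names and shapes are illustrative only.
template <typename T>
struct SharedState {
    bool finished = false;
    T value{};
};

template <typename T>
struct Future {
    std::shared_ptr<SharedState<T>> state;
    bool isFinished() const { return state && state->finished; }
    const T &result() const { return state->value; }
};

template <typename T>
struct Promise {
    std::shared_ptr<SharedState<T>> state = std::make_shared<SharedState<T>>();
    Future<T> future() const { return Future<T>{state}; }
    void setValue(T v) {
        state->value = std::move(v);
        state->finished = true;
    }
};

// The continuation constructs and owns the Promise, and hands back only a
// Future for the framework to wait on. The shared state outlives the
// continuation, so the framework never holds a reference into storage the
// continuation (or the framework itself) might delete.
Future<int> continuation() {
    Promise<int> promise;                  // owned by the continuation
    Future<int> future = promise.future();
    promise.setValue(42);                  // completes immediately here
    return future;
}
```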
I can probably solve the lifetime of the future; however, you are still calling setError twice on it, which should be undefined behavior. Or, following what std::future does, it would, well, not throw (because Qt), but might as well assert.
Hmm, I believe the bug might be in the Sink code. Here specifically, it's ResourceControl::flush():
Dec 15 2019
tests/notificationtest in sink will produce a similar crash.
Dec 13 2019
I have rebuilt the flatpak completely and can reproduce the crash. Also, I can reproduce the crash outside of the flatpak again, not sure what I did above.
Dec 9 2019
Ever since I rebuilt with -fsanitize=address I can no longer reproduce outside of the flatpak, so there's a chance that the fault is with flatpak's internal build caching that somehow results in something that crashes (I can't think of a reasonable scenario, but who knows).
I'll try to completely rebuild the flatpak as well and see if this fixes the issue.
Dec 6 2019
Nothing very obvious from the address sanitizer so far, but I also haven't managed to run kube with it yet (only sinksh).
I'll try that. Next week is fine, no hurry.
Looks like some memory issue...could you try compiling with -fsanitize=address? I won't get to look into it properly before some time next week, sorry.
Dec 4 2019
FWIW, I have attempted but failed to reproduce this in a kasync testcase. I can reproduce it by starting the latest kube with the latest sink and kasync, and I can reproduce the crash both in the flatpak (which I have now reverted to kasync 0.3.0) and in a locally built kube.
Dec 3 2019
Sep 1 2019
Aug 29 2019
Jun 27 2019
See the calendar for a model of how this could be implemented.
Jun 26 2019
Jun 16 2019
Jun 8 2019
I managed to get rid of this crash with a patch to xapian (https://github.com/cmollekopf/xapian-core/commit/6061b69c4b2f6b9d310558df1b285b5125364de8) that I have yet to upstream (don't know if they'd accept it since it seems like a compiler bug).
May 20 2019
Apr 23 2019
In T10813#182631, @wozniak wrote: By the way, I had some (not a lot) experience with QML so if I were to be pointed in the right direction, I *might* be able to submit a patch for this.
In T10813#182629, @wozniak wrote: This would be relatively simple to add, but I wouldn't want to have that as an option in the UI
I think it would be *nice* to have it in the UI, but I understand the reasons you might not want it. The reason it would be nice is that on support calls it's easier to tell the user to toggle a UI option than to mess with a config file.
On the other hand, a config-file-only option means users would be less likely to disable it themselves. So I can live with that!
Git branch visualizations were indeed a major inspiration =)
Where we'd put it exactly is indeed up for debate, I think it would in either case have to be in a location that remains available as you scroll down the conversation.
Apr 22 2019
By the way, I had some (not a lot) experience with QML so if I were to be pointed in the right direction, I *might* be able to submit a patch for this.
Regarding the mock-up, I feel it's a bit unnatural to add the thread visualisation widget in the e-mail message view. It would work better in the mail list view. Perhaps a git branch visualisation would work better here somehow? Like here:
https://git.occrp.org/libre/property-map/network/master
This would be relatively simple to add, but I wouldn't want to have that as an option in the UI
This would be relatively simple to add, but I wouldn't want to have that as an option in the UI. Would a configuration-file option work and is there a concrete deployment planned that actually requires such an option?
While it is indeed not currently possible to make any sense of the thread structure in the conversation view (given it's just a flat timeline), I'm afraid having a tree view for mails is not among the goals for now. I contemplated having an extra visualization for the tree structure to complement the conversation view at some point, but have no concrete plans to pursue that further at the moment. M31 is an example mockup of such a solution. Something like that mockup could be a nice addition if you'd like to work on it, but I'll close this ticket for now, as the tree view in the mail list (the center column) is not going to happen.
Apr 18 2019
Apr 2 2019
Mar 13 2019
In T10599#178637, @mbohlender wrote: I like the general direction. It gives a very clean first impression.
Some issues:
- Affordance: there is no indication that it is possible to write something under the subject line. I am afraid some users will try to fit their whole email in the subject field. Then again, once they press "Enter" it should all become clear. Some placeholder text for the body could solve this issue.
I like the general direction. It gives a very clean first impression.
Mar 11 2019
Feb 9 2019
This would be 'Epic'.
Feb 5 2019
Jan 30 2019
Jan 29 2019
Jan 14 2019
FWIW, memory-hole also breaks threading (because those headers are also encrypted).
Jan 5 2019
Jan 2 2019
Dec 28 2018
We now execute "kill $(pidof sink_synchronizer)" at the end of the wrapper script to hopefully avoid running multiple synchronizer instances in parallel.
Dec 27 2018
This should be fixed as of bd1ec892f40b24092dcb52a39fd7ffb2e22f5fde
I updated gpg related stuff and we're building sasl ourselves. Seems to work just fine.
Dec 26 2018
Note that clang seems broken, but gcc builds everything fine.
I believe this always happens when we start a gpg-agent inside the container without the necessary options, which makes the broken default lookup pick the wrong pinentry: https://github.com/flatpak/freedesktop-sdk-images/issues/70
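One way to sidestep the broken default lookup is to pin the pinentry explicitly in gpg-agent.conf; a sketch (the pinentry path below is an assumption and depends on where the flatpak installs it):

```
# ~/.gnupg/gpg-agent.conf inside the sandbox
# (path to pinentry is an assumption, adjust to the actual install location)
pinentry-program /app/bin/pinentry
```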
Dec 25 2018
Nov 11 2018
restarting the flatpak can still result in:
Oct 30 2018
Why not use include(ECMEnableSanitizers)?
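For reference, the ECM module is driven by the ECM_ENABLE_SANITIZERS variable; a minimal sketch:

```cmake
# Typically passed on the command line:
#   cmake -DECM_ENABLE_SANITIZERS='address;undefined' ..
# or set before including the module:
set(ECM_ENABLE_SANITIZERS "address")
include(ECMEnableSanitizers)
```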
Oct 25 2018
Oct 16 2018
Sep 7 2018
We still get this but with a different backtrace: