Can't reproduce as of 338138c9d2007e75ebb7f5e97dc5894cb125193d.
Jun 25 2023
Dec 4 2020
Jan 5 2020
Dec 26 2019
A simple example without a value would be:
In T12398#215116, @dvratil wrote: I think we may even be able to use std::future and std::packaged_task directly, which I would prefer to another custom future/promise.
std::future does not have any state signalling, so that might be a bit tricky. We will probably still need a Qt-friendly wrapper implemented on top of std::future to provide a signal-slot interface and a FutureWatcher. std::packaged_task is an interesting idea, I will certainly look into it.
The error handler must forward/create the result matching what TASK3 expects.
You cannot create a result out of thin air. If the execution has failed, you simply do not have a result of the previous task.
I think we may even be able to use std::future and std::packaged_task directly, which I would prefer to another custom future/promise.
In T12398#214527, @dvratil wrote: My idea is the following:
- Introduce Promise and Future template classes, modeled after C++'s std::promise and std::future: the Promise is constructed first by the (a)synchronous task. It only has a setter for the result value (or error) and a getter to obtain a Future, which can then be returned from the task implementation back to KAsync to wait for a result/error. The Future and the Promise have a shared state (basically a shared pointer to a common private class that holds the result/error). It's possible to have multiple Futures created from the same Promise.
The benefit is that it completely separates the lifetime of the Promise/Future from the Execution and rest of KAsync. This means that we no longer need to pass Future<T> & into every single task that should be asynchronous, cluttering the function's signature. Instead, the function indicates it is an asynchronous task by simply returning Future<T> instead of just T. Another benefit is that this Promise/Future API is more common in modern asynchronous frameworks.
Dec 21 2019
Dec 20 2019
Thanks that did the trick.
Dec 19 2019
OK, I have reverted the commit in master now. Sink tests pass again.
To be honest I think it makes most sense to revert back to default constructing T for the time being.
Dec 18 2019
There's no easy way how to fix this without reverting back to default-constructing T in Future<T>::Future().
It's not exactly the lifetime - it's a more deeply rooted issue with error handling.
Dec 17 2019
In T12315#214103, @dvratil wrote: Generally, I'm wondering if I should deprecate this API and introduce a promise-future based API where, instead of being passed a future from KAsync, the continuation would construct a Promise object internally and return a Future that KAsync would wait for. Internally, the continuation would own the Promise - it's closer to the common promise-future pattern and solves the lifetime issue, since the Promise is owned by the continuation rather than by KAsync (which only holds the Future).
In T12315#214101, @dvratil wrote: Hmm, I believe the bug might be in the Sink code. Here specifically, it's ResourceControl::flush():
The function registers two callbacks, both operating on the future reference. When the resource crashes (happens to me, I don't know if it's part of the test), sendFlushCommand fails, so future.setError is called (line 111). However, around the same time a notification is received with information about the crashed resource, invoking the lambda passed to the registerHandler function, which then also calls future.setError() (line 98). Depending on which happens first, the future is finished at that point and completes the execution, which may delete the Future object, leaving the other lambda with a dangling reference...
Generally, I'm wondering if I should deprecate this API and introduce a promise-future based API where, instead of being passed a future from KAsync, the continuation would construct a Promise object internally and return a Future that KAsync would wait for. Internally, the continuation would own the Promise - it's closer to the common promise-future pattern and solves the lifetime issue, since the Promise is owned by the continuation rather than by KAsync (which only holds the Future).
I can probably solve the lifetime of the future; however, you are still calling setError twice on it, which should be undefined behavior - or, following what std::future does, it would throw, except that we don't throw (because Qt), so it might as well assert.
Hmm, I believe the bug might be in the Sink code. Here specifically, it's ResourceControl::flush():
Dec 15 2019
tests/notificationtest in sink will produce a similar crash.
Dec 13 2019
I have rebuilt the flatpak completely and can reproduce the crash. Also, I can reproduce the crash outside of the flatpak again, not sure what I did above.
Dec 9 2019
Ever since I rebuilt with -fsanitize=address I can no longer reproduce outside of the flatpak, so there's a chance that the fault is with flatpak's internal build caching, which somehow results in something that crashes (I can't think of a reasonable scenario, but who knows).
I'll try to completely rebuild the flatpak as well and see if this fixes the issue.
Dec 6 2019
Nothing very obvious from the address sanitizer so far, but I also haven't managed to run kube with it yet (only sinksh).
I'll try that. Next week is fine, no hurry.
Looks like some memory issue...could you try compiling with -fsanitize=address? I won't get to look into it properly before some time next week, sorry.
Dec 4 2019
FWIW, I have attempted but failed to reproduce this in a kasync test case. I can reproduce it by starting the latest kube with the latest sink and the latest kasync, and I can reproduce the crash both in the flatpak (which I have now reverted to kasync 0.3.0) and in a locally built kube.
Dec 3 2019
May 20 2019
I'm building with clang on windows meanwhile, and that works fine.
Mar 2 2018
Apr 6 2017
The guard facility should probably be added on the job level, otherwise subjobs can't use the facility to protect themselves.
Guards could work similarly to the context. Call .guard(QObject*/smart pointer) and be sure that no continuation will be called if the guard is gone.
Mar 31 2017
Or just do .exec(guard);
The guard object could be any qobject/smart pointer/...
Mar 8 2017
Nice!
Fixed in master (see the commit message for details).
Mar 3 2017
Mar 2 2017
Mar 1 2017
I'd like to do a first 0.1 release as soon as the frameworks removal is merged.
Feb 10 2017
Thanks for the patch! (I applied it locally and will push it in a bit).
Feb 6 2017
OK, I thought that clang had a different syntax for attributes. I found the same error in Sink; this is a patch for it.
Feb 5 2017
Your initial patch was correct; the KASYNC_EXPORT macro should be at the beginning of the line. Clang is maybe more forgiving than GCC regarding the placement of attributes. I fixed it in master.
Feb 4 2017
Sorry, I found this just now. You are using clang. Using options from that json file it builds fine.
Jan 24 2017
Jan 12 2017
It does now with the new .then implementation (the decltype-based one). The above example no longer compiles if you get it wrong.
Finally managed to resolve this issue for good (I hope).
Dec 2 2016
Implemented.
Dec 1 2016
Nov 29 2016
I'm guessing this is no longer active
Sep 16 2016
Aug 10 2016
Please let me know what you think about the overhaul, and whether you see any problems with the approach. Cheers!
Jul 28 2016
I have implemented most of this (though it changed a bit) in the dev/kasync2 branch and already ported Sink to it.
Jul 18 2016
If the handle became a smart pointer, then we could detect if the implementation loses the handle without calling setFinished or setError, and could thus automatically fail the job. This doesn't guard against all failures (e.g. if the smart pointer is captured in a lambda that never gets called), but could help with some.
Jul 4 2016
What I forgot is the context:
A Job should have a context that is either a QSet<QVariant> or a QMap<QByteArray, QVariant>, or both. The primary use case at this point is to set smart pointers as the context and thus make sure that the external object lives for the duration of the job. This of course means that the context must survive further compositions (you should be able to set the context in a function that returns a job and have the guarantee that the context will be available during execution).
Jun 30 2016
This may be fuzzier than I hoped for, but I hope you can understand the general gist of where I'd like to go with this.
Jun 29 2016
Looks good, thanks for the patch!
Looks good, just make small improvement in the test please.
Implemented proper error aggregation.
Added the proposed changes
Jun 4 2016
Jun 2 2016
May 29 2016
Generally looks good, could you please add a testcase for this?
May 23 2016
Apr 9 2016
Let's see if Ivan has any idea how to solve this.
Feb 10 2016
Huzza! =)
Feb 9 2016
Nice! That is a very clever solution! If there are no side effects, feel free to commit it.
A solution that works is:
Jan 31 2016
Merged. Should we perhaps keep this ticket open until we figure out the cast problem?
Jan 28 2016
Jan 27 2016
Sorry I did not get back to you earlier. The code looks OK, so please merge it as it is, and hopefully we get to solve the cast problem at some point.
Jan 21 2016
I feel you, I wasted a bunch of time as well already ;-)
Well, at least we learned something. I'll probably give it another try over time, but for the time being I can also live with the extra template argument.
So from my side we can also merge it like this, and try in parallel to improve the solution over time.
Jan 20 2016
Yes, the implicit cast to void :) That's where I got stuck too. Basically, C++ allows any function pointer R(*)(T...) to be converted to void(*)(T...), which leads to the compiler either picking a different overload or throwing an ambiguous-overload error.
I also tried a tag dispatch like so