- User Since: Feb 19 2018, 2:51 PM (77 w, 5 d)
Jun 28 2018
No, I didn't work on that, sorry. I think it's best to close the revision.
May 19 2018
Well, I think it would be logical to implement something like that if we intend to make copying cancelable. But we will still have to spoil the destination file if there is not enough space left (which, in fact, is not always determined correctly; one example is a remote sftp filesystem with a per-user quota set up on the remote machine). So the idea sounds good, but it requires careful implementation and adds a bit of inconsistency to the program's behavior. But perhaps in this case it doesn't matter, since the user gets a warning anyway when trying to overwrite files.
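To illustrate the caveat about free-space detection: a minimal sketch (not KIO code; the function name is made up for illustration) of a local free-space check. `statvfs` only reports the free blocks of the filesystem backing the path; a per-user quota enforced on a remote sftp server is invisible to it, so a write can still fail with "no space left" even when this check passes.

```python
import os

def has_free_space(path, needed_bytes):
    """Best-effort check: compare needed_bytes against the free space
    reported for the filesystem containing path.

    This is only a hint.  Remote quotas (e.g. on an sftp mount) are not
    reflected in statvfs, so the actual write may still fail even when
    this function returns True.
    """
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize >= needed_bytes
```

This is why the destination file can end up spoiled despite a passing pre-check: the failure only surfaces once writing has already begun.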
May 2 2018
If I understand correctly what happened, without this patch the file at the destination path existed after the operation was cancelled, and it might not even be corrupted. Still, I believe that it is the source file that may remain after the cancellation, not the one that existed at the destination path. Try the same test with different files (I mean, with different content) at the source and destination paths. After cancelling an overwrite operation you will have either a copy of the source file or a corrupted file, if the source file was sufficiently large (more than 1 GB works in my case, but this probably depends on memory size and disk I/O speed).
May 1 2018
Sorry, but I was not able to reproduce such behavior. When cancelling an overwrite in the middle of the process I got only a partially copied file at the destination path. Tested on the latest git-stable KDE Neon; the procedure was copying the source file to another folder containing a file with the same name. Perhaps you meant some other kind of overwrite operation?
Apr 26 2018
Then I probably didn't understand what was proposed in the discussed bug report. When you start writing to an existing file, the data that were in the destination file before the operation are lost. If we want to be able to restore the content of destination files after an overwrite operation has started, then we need to change the whole mechanism of file copying: we should copy the source file to some temporary file, delete the destination file, and rename the temporary file to the destination file name. If this is what was meant by this bug report, then I am probably trying to fix the wrong issue. Is that so, or what is the desired behavior in this case?
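The copy-to-temporary-then-rename mechanism described above can be sketched as follows. This is a minimal illustration, not KIO's actual implementation; the function name is hypothetical. The key property is that the destination keeps its old content until the copy has fully succeeded, so a cancelled or failed copy only costs us the temporary file.

```python
import os
import shutil
import tempfile

def safe_overwrite(src, dst):
    """Overwrite dst with the content of src without ever exposing a
    partially written destination file.

    The source is first copied to a temporary file in the destination's
    directory (same filesystem, so the final rename does not copy data
    again and is atomic on POSIX), then renamed over dst in one step.
    """
    dst_dir = os.path.dirname(os.path.abspath(dst))
    fd, tmp = tempfile.mkstemp(dir=dst_dir)
    os.close(fd)
    try:
        shutil.copyfile(src, tmp)   # may fail or be cancelled midway
    except BaseException:
        os.unlink(tmp)              # the old dst content survives intact
        raise
    os.replace(tmp, dst)            # atomically replaces dst
```

The trade-off, as noted in the discussion, is that this needs enough free space for a full extra copy alongside the existing destination file.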
Apr 25 2018
Yes, this code is ready for review.
As I described in the previous comment, there is still an open question about handling the disk-full error case: it is currently handled in some ioslaves (at least in the file_unix slave), and it needs to be decided whether such handling should be moved from the slave code to the FileCopyJob level or left as is. In any case, I would ask for developers' opinions before making such changes.
Mar 26 2018
Well, sorry, I did not expect you to be waiting for me. I had a newer version of this patch, but I intended to write unit tests for it and never finished them. I'll upload what I have so far; I hope it will be useful.
Feb 26 2018
In fact, this is exactly the case that I tried to deal with here. In this proposal the destination file gets deleted only if some data has been written into it, so all the previous data are already lost regardless of our cleanup.
Feb 23 2018
It seems that the proposed patch in its current state does not fully prevent possible problems with the move operation (I did not catch any problems while testing, but looking at the code again I don't think everything is correct there). I suppose I had better revise it again.
Feb 19 2018
Add a flag that turns off cleanup on job cancel.
Use m_currentDestURL instead of obtaining the destination file name from an iterator.
I have also done some work on this issue, and my solution includes a mechanism that should avoid deleting existing data. I have uploaded my proposal to Phabricator too; I hope it will be useful.