I see that there's been an issue open for some time, https://github.com/ncw/rclone/issues/87
Is there any way to revive this, or add an ounce of priority to it?
I'm uploading somewhat large files, and I've been interrupted 3 times now and have had to start all over. Twice it was at 90% completion. The interruptions were due to power outages and one computer crash.
Uploading a 932 GB file takes a long time on my connection, so being able to resume would save the day! (Literally, would save me quite a few days!)
Thanks for all the work, and thank you for such a stellar project as rclone!
We've got almost all the pieces necessary to make this work now, and we've been having discussions about what internal interfaces to add to rclone to support it, so I think we are closer now than we have ever been!
That's exciting! After another few weeks of uploading this file, a three-second power outage last night cut the transfer. Starting over today, and definitely looking forward to the resume function! Thank you @ncw
I don't think this would be "best practice" for me in the long run, but it would at least guarantee that I don't lose the data in the short term. I was considering similar, less attractive options, like using Duplicati to try to back it up. I'll give this chunker a shot and see how that goes until resume support is ready.
Alas, it cannot. I tried the chunker remote, but the chunk files end up with different suffixes in their names. Maybe changing the way it hashes could help; I'm not sure. I went with the defaults, uploaded until I reached 3% completion, hit CTRL+C, then started over. The new chunks have different names, so the transfer starts over at 0% again.
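For anyone wanting to reproduce this, my setup was along these lines (the remote name `chunked`, the wrapped remote `gdrive:backup`, and the chunk size are just examples; the chunker backend's options are described in rclone's chunker docs):

```ini
# rclone.conf — a chunker remote wrapping an existing remote
[chunked]
type = chunker
remote = gdrive:backup
# split large files into fixed-size chunks on upload
chunk_size = 100M
```

Then the upload was just `rclone copy bigfile.img chunked:`, interrupted with CTRL+C and re-run.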