Resume transfer

Having looked through the top few threads on this, the latest I can see is @ncw saying: "Rclone doesn't have a resume download/upload if you stop rclone and restart it yet."

Is this something that is due to be implemented? Quite often my slow connection drops while uploading to a drive remote, which means I have to start the upload of a 50 GB+ file from scratch.

Is there any way to resume such a transfer?

Multiple issues on the topic:

Resume uploads · Issue #87 · rclone/rclone (github.com)

Resumable Uploads for GCS (Google Cloud Storage) · Issue #4794 · rclone/rclone (github.com)


Sorry, I only searched here.

I can't work out what's happening with the first GitHub issue. It's been open for 6+ years and there doesn't seem to be any progress.

The second issue requires the first to be implemented.

Can I assume from the long-running threads on GitHub that this is intended to be added as a feature at some point?

If so, is there even a rough idea of the timeline?

Thanks

My take: there isn't much traction because folks don't really want it badly enough, so there isn't much of a drive to get it done.

Depending on the need, a developer would have to invest their own time, someone would have to sponsor the work, or you'd have to do it yourself. The only things you can really influence are sponsoring it or finding someone else to do it for you, as I'm not a programmer either.

The rclone "want" list is enormous, and unfortunately the resources to work through it are not.

Based on how old those issues are, unless something really changes, I wouldn't expect it anytime soon. That's just my personal opinion, though, so take it as such.


It seems the resume interface is on hold until a framework decision is made.

However, I had the same issue with multiple 120 GB to 500 GB files I wanted to sync to Google Drive. Even over a solid connection, after days of uploading, something always killed the connection.

I ended up tracking down the rclone beta with the resumable-chunks interface. Once you configure a chunker remote, running the beta will automatically send data in chunks. In my configuration I set the chunk size to 2 GB, since my files are very large. A downside is that files stored on Google appear as multiple parts that are useless unless rclone reassembles them on download. Fortunately, running rclone with the ncdu option lets you manage your remote storage with all chunks combined into their original single files.
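
For reference, here's a minimal sketch of what such a chunker setup can look like in rclone.conf. The remote names and paths are placeholders I've made up, and the underlying drive remote is assumed to already exist:

# The ordinary Google Drive remote, created with rclone config
[gdrive]
type = drive
# ... your usual drive settings (client_id, token, etc.) ...

# A chunker remote wrapping a folder on the drive remote
[gdrive-chunked]
type = chunker
remote = gdrive:backup
chunk_size = 2G

Browsing through the chunker remote, for example with rclone ncdu gdrive-chunked:, then shows whole files rather than the individual parts stored on Google.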

I wanted to use the rclone sync option, but there was a minor bug. So if you don't want to use xargs, find, or a script to run rclone on single files, you'll need to fix the bug yourself by setting up a Go development environment and grabbing the rclone resume branch from GitHub. You'll also need a tiny diff available from issue 87. Then something like:

# Clone just the resume branch of the rclone repository
mkdir git
cd git
git clone --branch resume --single-branch https://github.com/rclone/rclone
cd rclone
# Apply the small fix from issue 87 (point this at your unzipped diff file)
git apply __replace_this_with_the_path_to_the_unzipped_diff_file__
# Build the patched rclone
make

You might need to do other things besides the above in your environment, but that's the general idea.
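
It's also worth checking that you're actually running the patched build rather than a system-installed rclone. Where the binary ends up depends on your Go setup, so the paths below are assumptions, not something from the build output:

# See which rclone your shell resolves first, then ask the patched binary
# for its version string; adjust the path if make installed it elsewhere.
which rclone
./rclone version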

When you get the beta rclone running, you can shut down your computer during an upload, and when you turn it back on, running rclone again will resume from the current chunk. With sync mode, rclone might even start on a different file, but when it gets back to the interrupted file it won't restart the entire file, just the chunk it was working on. Even if you have multiple internet failures that leave different files interrupted each time, each file will still only restart from the chunk that was active when it was interrupted.
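
As a concrete example of the workflow (remote and path names are the same placeholders as in the config sketch above):

# Start the sync; if it gets interrupted, just run the same command again.
# Per the behaviour described above, only the chunk that was in flight is
# re-uploaded, not the whole file.
./rclone sync /data/backups gdrive-chunked: --progress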

Don't try to set up a root systemd service for rclone. Set up a user-level systemd timer instead, so that access to the rclone cache and config doesn't cause trouble.
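
In case it helps anyone, here's a rough sketch of what such a user-level timer can look like. The unit names, paths, and schedule are all made up for illustration; adjust them to your setup:

# ~/.config/systemd/user/rclone-sync.service
[Unit]
Description=Chunked rclone sync to Google Drive

[Service]
Type=oneshot
# Point ExecStart at wherever your patched rclone binary ended up
ExecStart=%h/git/rclone/rclone sync /data/backups gdrive-chunked:

# ~/.config/systemd/user/rclone-sync.timer
[Unit]
Description=Run the chunked rclone sync hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl --user enable --now rclone-sync.timer, and run loginctl enable-linger if it should keep firing while you're logged out.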

I have to admit that I'm really thrilled with this. Thanks so much to everyone working on rclone.


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.