Stopping and starting rclone mid-transfer

Firstly, I've just started using rclone and it's a great product - well done. With a low upload speed, I have two questions.

  1. If I stop a sync to Amazon Cloud Drive and then restart it later, does Amazon have the technology to continue where it left off? Sorry, not really an rclone topic.

  2. Now you will see the relevance. If some of these cloud providers do support stopping and restarting a sync (i.e. they keep what has already been transferred), would it be possible to have some sort of --dont_sync_between_hrs option?

That would help greatly: a large file that takes about 2 weeks to upload (backups) could just do its magic at night, and during the day nobody would feel the burn from uploading large files.

Apologies if this is all pie in the sky with cloud providers, and keep up the good work.

Paul

Amazon Drive (as we are supposed to call it nowadays!) doesn't have a feature for resuming a big upload.

Some of the cloud providers do support multipart uploads with resume (e.g. S3, B2, Drive), but rclone doesn't support resuming an upload yet: https://github.com/ncw/rclone/issues/87

There are some things you could do to help… One is to set a --bwlimit for rclone - that will stop rclone from killing the bandwidth for everyone.
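
For example (the paths and the limit value are just placeholders):

rclone sync /path/to/backups remote:backups --bwlimit 1M

That caps rclone at 1 MiB/s, so the rest of the connection stays usable during the day.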

I think you are probably looking for bwlimit varying by time of day: https://github.com/ncw/rclone/issues/221 - this would be reasonably straightforward to implement - I just haven't had time. I'd be willing to coach you or anyone else who wanted to have a go at it!
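
In the meantime a cron-based workaround might get you most of the way there - this is just a sketch, not an rclone feature, and the paths, times and limit are placeholders:

# Kick off the nightly sync at 23:00...
0 23 * * * rclone sync /path/to/backups remote:backups --bwlimit 1M
# ...and stop it again at 07:00 so daytime users don't feel it
0 7 * * * pkill -f "rclone sync /path/to/backups"

The big caveat from above still applies: killing rclone mid-file throws away that file's partial upload, though files that finished overnight won't be re-sent on the next run.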

:frowning: Shame

OK, thanks for the clarification.

Ah yes, that does sound good. I wouldn't mind having a look at the code and seeing if I could make the change. I am currently a C# programmer but have done various coding in my 27 years as a developer.

Private email perhaps?

And thanks again for the excellent program and help.

Paul


That's actually only true if we look at the official API documentation. Looking at the network traffic from the Amazon Cloud Drive client, though, the API actually does support resume.
It's not official, but here is an example:

GET /cdproxy/resume?nodeId=mqJ9VhFZM8KzdTXcl7Dd-A HTTP/1.1
Accept: application/json
User-Agent: CloudDriveMac/3.7.1.ef2ee1ad
x-amzn-clouddrive-source: XXXXX
x-amz-access-token: XXXXX
x-amz-clouddrive-appid: XXX
x-amzn-RequestId: XXXX
Host: content-eu.drive.amazonaws.com
Accept-Encoding: gzip, deflate

Which returns:

{
"uploadState": "IN_PROGRESS",
"nodeId": "mqJ9VhFZM8KzdTXcl7Dd-A",
"contentLink": "https://content-eu.drive.amazonaws.com/cdproxy/nodes/mqJ9VhFZM8KzdTXcl7Dd-A/content",
"receivedBytes": 31474040,
"expectedBytes": 283957186,
"expectedMd5": "46145008621c63f44c983b6b4c06c54e",
"started": "2016-11-02T09:22:34.583Z"
}

And then you can resume the upload using the Content-Range header:

PUT /cdproxy/nodes/mqJ9VhFZM8KzdTXcl7Dd-A/content HTTP/1.1
Accept: application/json
User-Agent: CloudDriveMac/3.7.1.ef2ee1ad
x-amzn-clouddrive-source: XXX
x-amz-access-token: XXXX
x-amz-clouddrive-appid: XXX
x-amzn-RequestId: XXXX
Content-Disposition: form-data; name=file; filename=46145008621c63f44c983b6b4c06c54e
Content-Length: 252483146
Content-Type: application/octet-stream
Content-MD5: RhRQCGIcY/RMmDtrTAbFTg==
Content-Range: bytes 31474040-283957185/283957186
Host: content-eu.drive.amazonaws.com
Accept-Encoding: gzip, deflate
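
To spell out the arithmetic: the client skips the first receivedBytes of the file and labels the remainder with a Content-Range header. A hand-rolled sketch of the same flow with standard tools might look like this (the node id and sizes are taken from the dumps above; the token is a placeholder, and curl will use chunked encoding from a pipe where the real client sends an explicit Content-Length, so this may need tweaking):

OFFSET=31474040    # receivedBytes from the resume response
TOTAL=283957186    # expectedBytes
tail -c +$((OFFSET + 1)) bigfile | curl -X PUT \
  -H "x-amz-access-token: XXXXX" \
  -H "Content-Type: application/octet-stream" \
  -H "Content-Range: bytes $OFFSET-$((TOTAL - 1))/$TOTAL" \
  --data-binary @- \
  "https://content-eu.drive.amazonaws.com/cdproxy/nodes/mqJ9VhFZM8KzdTXcl7Dd-A/content"

That range covers 283957186 - 31474040 = 252483146 bytes, which matches the Content-Length in the capture above.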

I can provide a full log of this communication if you want.

Very interesting...

This is being explored here too - do you fancy joining in that conversation?

It would be interesting to try that using rclone's app id and see if it works.

x-amzn-clouddrive-source: XXXXX
x-amz-access-token: XXXXX
x-amz-clouddrive-appid: XXX
x-amzn-RequestId: XXXX

So that might be part of a new API...
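
A quick probe could be as simple as replaying the status call with rclone's own token and app id swapped into the headers (all values below are placeholders, and the endpoint is unofficial, so it may well refuse a non-whitelisted app):

curl -H "Accept: application/json" \
  -H "x-amz-access-token: XXXXX" \
  -H "x-amz-clouddrive-appid: XXX" \
  "https://content-eu.drive.amazonaws.com/cdproxy/resume?nodeId=mqJ9VhFZM8KzdTXcl7Dd-A"

If that returns an uploadState like the JSON above rather than an authorisation error, the endpoint isn't locked to the official client.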

I applied to join the new API program but was rejected - your app might need to be part of that, I guess...

Posted my findings in the issue you referenced.
