Rclone sync: Stuck?

No, I think it’s more a matter of fine-tuning the parameters (--transfers 1, etc.) to handle connection problems.
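For example, a more conservative invocation might look something like this (remote name and paths are just placeholders):

```
# Fewer parallel transfers and more retries for a flaky connection
rclone sync /local/data dropbox:backup \
  --transfers 1 \
  --checkers 2 \
  --retries 10 \
  --low-level-retries 20 \
  -v
```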

In fact, I’m not even sure the bug label on this topic is appropriate, whether based on my posts or on last year’s posts.

Thank you Nick for your support!

P.S.: If you hear in the news that an atomic bomb struck Dropbox headquarters, it was me …

OK. Well let me know if you pin something down I can fix!

:smiley:

He he. Likewise Amazon Cloud Drive's HQ :wink:

Having a similar problem here at a different scale. Copying 700 files of 33 GB each to a local S3-compatible object storage system. When the copy starts with --transfers 20, we immediately go to 100% CPU utilization on all 24 CPU cores on this box, with 3 GB/s reads off the storage array. Things stay that way for about 5-10 minutes before the transfer finally starts.

Using --s3-disable-checksum, the transfer starts immediately.
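For reference, the two invocations look roughly like this (remote name and paths are placeholders for our setup):

```
# Default: rclone pre-computes the MD5 of each file before the
# multipart upload starts, which pegs the CPUs and the array up front
rclone copy /array/data s3local:bucket/data --transfers 20

# With the checksum disabled the upload starts immediately,
# at the cost of not storing an MD5 in the object metadata
rclone copy /array/data s3local:bucket/data --transfers 20 --s3-disable-checksum
```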

If I were to make a suggestion, it would be to calculate the MD5 as the data is being uploaded, instead of pre-calculating it ahead of time. For a multipart upload this would seem to make more sense anyway, as with S3 you have to check the hash against the combined parts.

That is possible. However rclone wants the MD5 in the metadata. To put the MD5 in the metadata after the upload requires a COPY operation on S3, which costs operations and, more importantly, creates another version of the file :frowning:
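For illustration, doing that after the fact at the S3 API level means copying the object onto itself with replaced metadata, roughly like this with the AWS CLI (bucket, key and checksum value are placeholders):

```
# Server-side copy onto itself just to attach the MD5 as metadata;
# this is billed as an operation and creates a new version of the object
aws s3api copy-object \
  --copy-source my-bucket/path/to/object \
  --bucket my-bucket \
  --key path/to/object \
  --metadata md5chksum=PLACEHOLDER \
  --metadata-directive REPLACE
```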

Using --s3-disable-checksum means rclone no longer has access to an MD5SUM for that object, since it was multipart uploaded. This may or may not be important to you. However, the object was uploaded using sha1 checksums for each part, so it was integrity checked.
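If you want to verify the data end to end despite that, rclone check with --download compares the actual file contents rather than checksums, at the cost of downloading everything again, e.g. (remote name and paths are placeholders):

```
# Download each object and compare it byte-for-byte with the source,
# useful when no MD5 is stored for multipart-uploaded objects
rclone check /array/data s3local:bucket/data --download
```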