Rclone sync --backup-dir + rclone copyto(/moveto) not working with large files

Dear rclone guys,

I am using IDrive e2 (AWS S3 compatible) storage provider.

Up to 2024-01-17, the "rclone sync --backup-dir" command was working well, but since 2024-01-18 I have been experiencing problems with the archival of old files (large ones, in fact) to the "backup-dir". I have noticed the same problem with the "rclone copyto(/moveto)" commands when attempting to troubleshoot the initial "rclone sync --backup-dir" problem.

To clarify a little bit, the "rclone sync" itself still works well; only the "--backup-dir" feature fails, leaving the old huge files in their initial location. Concretely, the operation gets stuck forever (more than 24 hours before I manually cancel the process).
The cutoff size seems to be around 8 GB: some file moves around this size work and others don't (see the attached .txt file).
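
For reference, the command I run has roughly the following shape (the remote name "IDe2", the bucket and the paths below are placeholders, not my actual ones):

  rclone sync "D:\Data" IDe2:mybucket/current --backup-dir IDe2:mybucket/archive --log-level DEBUG --log-file sync.log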

I have been experiencing this problem with all of v1.65.0/v1.65.1/v1.65.2 since 2024-01-17; v1.65.0/v1.65.1 were working fine until 2024-01-17.

No changes took place on my side (e.g. config) or in relation to my IDrive e2 subscription, and I'm well below my IDrive e2 storage limit.

When I perform a manual "moveto" of the problematic huge files with S3 Browser, it works well.

Please find attached the log file of the problematic "rclone sync --backup-dir" command, plus a text file with my troubleshooting attempts with the "rclone copyto(/moveto)" commands.

As you can see, I have used the "--log-level DEBUG" option; however, the output doesn't show any relevant info, so any help is more than welcome.

Many thanks in advance and best regards.

Pastebin.com links:

I wonder if it is related to:

Hello,

The log output from the "rclone sync --backup-dir" problem looks the same [rclone v1.65.1 - os/version: Microsoft Windows 10 Pro 22H2 (64 bit) - Pastebin.com].

I am also experiencing another similar problem with the "rclone copyto(/moveto)" command(s) [Pastebin.com].

Best regards.

welcome to the forum,

might sync a single file and use --dump=headers --retries=1
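
for example, something along these lines, with placeholder paths and bucket names:

  rclone copyto "D:\Data\bigfile.vhdx" IDe2:mybucket/test/bigfile.vhdx --dump=headers --retries=1 --log-level DEBUG --log-file copyto.log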

Hi asdffdsa/jojothehumanmonkey,

I tried your suggestion with "copyto" commands, and the unsuccessful copy operation returns repetitive "timeout awaiting response headers" error messages.

Pastebin links hereafter:

Further suggestions are more than welcome of course :slight_smile:

Thanks in advance and best regards.

sorry, getting a bit confused as to what your issue is, as you have a lot of different commands?
some commands use rclone sync, some rclone moveto, some rclone copyto?
some commands have --backup-dir, some commands do not?

please, let's focus on one command, the simplest command that triggers the issue.


need to post the output of rclone config show for the IDe2 remote, with secrets redacted

not sure what is going on but i would test the following (rough example command after this list):
--- copy a single file smaller than 5GiB
--- test rclone copy, not rclone sync, not rclone copyto, not rclone moveto, etc...
--- do not use aws:kms
--- add --multi-thread-streams=0 --s3-no-check-bucket --s3-no-head --s3-no-head-object --s3-use-multipart-uploads=false
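
for example, something like this, with placeholder file and bucket names:

  rclone copy "D:\Data\smallfile.bin" IDe2:mybucket/test --multi-thread-streams=0 --s3-no-check-bucket --s3-no-head --s3-no-head-object --s3-use-multipart-uploads=false --dump=headers --retries=1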

It is the multipart copy which is timing out in the log (this is used to move the old file to the --backup-dir; S3 doesn't have a move primitive, so we have to copy the file and then delete the old one).
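
In other words, the move into --backup-dir on the same remote is roughly equivalent to a server-side copy followed by a delete, e.g. (paths here are illustrative only):

  rclone copyto IDe2:mybucket/current/big.vhdx IDe2:mybucket/archive/big.vhdx
  rclone deletefile IDe2:mybucket/current/big.vhdx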

I suspect the reason is that we use a large copy chunk size by default.

  --s3-copy-cutoff SizeSuffix   Cutoff for switching to multipart copy (default 4.656Gi)

Try --s3-copy-cutoff 100M instead and see if that helps.
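
For example (substitute your own remote and paths):

  rclone moveto IDe2:mybucket/current/big.vhdx IDe2:mybucket/archive/big.vhdx --s3-copy-cutoff 100M

You can also set copy_cutoff = 100M in the [IDe2] section of your rclone.conf so you don't need to pass the flag on every run.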

We should probably change the default as I think the 4G is too big. It works fine with AWS, but 3rd party providers sometimes take longer than 5 minutes to copy a 5G chunk.

Hi Nick,

I have used the "--s3-copy-cutoff 100M" option as suggested and it worked well in both cases (rough command shapes sketched below):

  • with the "rclone copyto ..." command on a single file, and
  • with the "rclone sync ... --backup-dir ..." command on a large set of data

I'll continue monitoring the outcome of future backup tasks as I have only performed the two quick tests above for the time being, but it looks promising.

I still can't explain why the commands were working without the "--s3-copy-cutoff 100M" option until 17-Jan-2024, as the "rclone sync ... --backup-dir ..." command had successfully processed more than 100 *.vhdx files of 21+ GB since Apr-2023. An undocumented change of config/settings on the storage provider's (IDrive) side, maybe? This will remain a mystery.

Anyway, many thanks for your prompt help and best regards.

It seems iDrive is using MinIO for S3 access, based on some error messages I saw.

To my knowledge they do not provide any release notes related to their internal software changes/updates.

But still, overall it is a very nice solution (I use it myself) and fairly priced.

Hi asdffdsa/jojothehumanmonkey,

Many thanks for your help in troubleshooting this problem and best regards.

José-M.


Great news.

I suspect that iDrive have changed a timeout, or maybe transfers are taking just a little bit longer because the server has got busier which is tripping the timeout threshold.

I could make a quirk for iDrive reducing the default value, or I could just reduce the default value as I think it is probably too high. However, having a high value is good, as this is not an often-used API and 3rd party providers often have bugs here (e.g. Chunker: uploads to GCS (S3) fail if the chunk size is greater than the max part size).

Hi kapitainsky,

I agree with you regarding iDrive: nice solution and fairly priced. As for the problem I just experienced ... nothing says that it is iDrive-related; it was just an assumption on my side :slight_smile:

Anyway, best regards.
