Rclone copy crypt to non-crypt: 1 KB/s, very slow

What is the problem you are having with rclone?

Very slow copying of a large number of small files (.md specifically). I understand it has to go through a decryption phase before it can be copied, but how the hell am I getting just 1 KB/s on default rclone settings? My internet can easily upload at 2 MB/s.

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1
- os/version: Microsoft Windows 11 Home 21H2 (64 bit)
- os/kernel: 10.0.22000.1219 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy "cloud.crypt:dir/" cloud: -P

The rclone config contents with secrets removed.

    [cloud]
    type = drive
    client_id = ....apps.googleusercontent.com # THIS IS MY OWN API KEY
    client_secret = ... # THIS IS MY OWN API KEY
    scope = drive
    token = {...}
    team_drive =

    [cloud.crypt]
    type = crypt
    remote = cloud:
    password = *** ENCRYPTED ***
    password2 = *** ENCRYPTED ***

Hi DeutscheGabanna,

Google Drive limits/throttles how many files you can upload per second - this number varies but is very low.

I just tried and uploaded 158 small .md files (the rclone docs) in 58 seconds - that is roughly 2.7 files per second and an upload speed of just 25.123 KiB/s. I have a gigabit connection and fully tuned settings for Google Drive.
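If you want to reproduce a test like this yourself, it could look something like the following (the local source directory and target folder here are just placeholders):

    rclone copy ./rclone-docs cloud:upload-test -P

The -P flag shows live progress, so you can read the files-per-second and transfer rate straight off the output.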

How many files can you upload per second?

It's similar to yours - one or two files per second. It's staggeringly low. So you're saying it's not crypt's fault but Google's throttling that causes this?

Maybe I could switch to other service providers that have higher limits? What would you recommend?

Correct!

I don't know your needs or the market well enough to give a (useful) recommendation. I can however share some rules of thumb:

Service providers charging a flat rate based only on available storage typically have (hidden and sometimes horrible) rate limits on file creation and file upload speed. Examples: Google Drive, OneDrive, Dropbox, ...

Service providers charging a fee per request/transaction typically have good speeds and a higher cost. Examples: AWS S3, Backblaze B2, ...

I thought I would be sneaky by keeping all my files locally and just syncing periodically to the crypt remote - I figured that if most of the files agreed on the checksums, I could take advantage of local speed and still have a backup.

But, unfortunately, it still takes ages to run a check on completely identical directories. Although I do see some progress - I've gone from 1 KB/s to 2 MB/s, or from 1 day to 8 minutes. But that's still a lot for, let me repeat, identical directories :smiley:
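(For reference, the kind of periodic sync I mean is roughly this, with placeholder paths:)

    rclone sync "C:\notes" cloud.crypt:notes -P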

By the way - what happens if source files are modified while the sync command is running?

The above rules of thumb also apply to directory listings: Google Drive also rate-limits the number of directories/files you can list per second - and the check needs to list the directories on both ends to see if anything has changed since last time. You may see an improvement by adding these flags: --checkers=16 --drive-pacer-min-sleep=10ms --drive-pacer-burst=200 - but don't expect miracles.
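For example, using your original command as a template:

    rclone copy "cloud.crypt:dir/" cloud: -P --checkers=16 --drive-pacer-min-sleep=10ms --drive-pacer-burst=200

The same flags can be appended to your sync (or check) commands too - the drive pacer flags apply to any command that touches a Google Drive remote.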

That really depends on a lot of things and would be quite lengthy to answer completely. Some of the things it depends on: does the modification happen before, during, or after the check? Was the file already modified? Does it happen before, during, or after the transfer? Some of the answers you can probably guess.

To me the important thing is that rclone (with default settings) detects if the file was modified during the transfer and then triggers another sync attempt. This is important to avoid uploading corrupt/inconsistent files. I suggest you try a few examples to see what happens.
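If you want to see this in action, a quick experiment could look like this (on a unix-like shell, with made-up file and folder names):

    # create a file large enough that the upload takes a while
    dd if=/dev/urandom of=big.bin bs=1M count=512
    # start the upload in the background
    rclone copy big.bin cloud:test -P &
    # append to the file while the transfer is still running
    echo "changed" >> big.bin
    # rclone should detect that the source changed mid-transfer and
    # retry the file rather than keep the inconsistent upload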
