What is the maximum number of transfers (--transfers)?

What is the problem you are having with rclone?

I use Google Drive, and I want to transfer a lot of files from my local drive to the cloud using an encrypted remote.

I use the --transfers flag to transfer 30 files at once.
I wonder: is there a line I shouldn't cross?

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.1

  • os/version: Microsoft Windows 11 Pro 22H2
  • os/kernel: 10.0.19045.4046 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.21.5
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive (encrypted remote)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync B:/ crypt:/B --create-empty-src-dirs -P --backup-dir crypt:/B-backup --modify-window 1s --checkers 18 --size-only --transfers 30 --max-size 3G

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[Drive]
type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = XXX
team_drive =

[crypt]
type = crypt
remote = Drive:crypt
password = XXX

there is no hard limit.
tho at some point there is a soft limit, where increasing --transfers will not make a practical difference in transfer speed.
fwiw, establish a baseline using default values, and see how the overall transfer speed compares to an internet speed test.
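for example, a quick baseline could look like this, all default values, copying to a hypothetical test folder on the crypt remote (B:/testfolder and crypt:/test are just placeholders):

rclone copy B:/testfolder crypt:/test -P

the -P output shows the live transfer speed, compare that to your speed test result.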

also, each backend is different, as gdrive limits the number of transfers per second.
you could also increase the chunk size.
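for gdrive, that is --drive-chunk-size. a sketch, not a tested recommendation, as each transfer buffers one full chunk in memory:

rclone sync B:/ crypt:/B --drive-chunk-size 64M -P

larger chunks mean fewer api calls per big file, at the cost of more ram per transfer.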

Thanks!
I wanted to know: since I intend to run this command frequently in order to keep my files in constant sync, is it correct to use --modify-window 1s?

If I understand the documentation correctly, it sets how much time difference is allowed between my file and the file in the cloud?
I guess it's not the safest thing to do, but what would be the worst-case scenario? Could this cause duplication?
Is there a faster method than checksums, and can it be considered safe?

that should not be needed.

well, it is not about the time difference.
if the source file and dest have different modtimes, rclone copies the file.

checksums should not be a problem with gdrive.

rclone does not do constant/real-time sync.

to reduce the number of checks required, might use something like --max-age=24h

For small files Google Drive will only create 2-3 files per second regardless of the --transfers value. Setting it higher than your backend can handle often leads to overall slower operations, as things will be throttled and retried.

I think with this particular provider even the default values might be too high. Run tests with the -vv flag and adjust accordingly.
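For example, something like this on a small test folder (the paths are just placeholders), which keeps the default --transfers 4 and --checkers 8:

rclone copy B:/testfolder crypt:/test -vv

If the debug log fills up with pacer retries or rate limit errors, you are already past what Google Drive will accept.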

Won't this result in calculating all the checksums and comparing them between the computer and the remote?
Checksums take a long time to calculate for a large number of files, don't they?

Haha, yeah, that's my typo. I guess that's the best I'll get for encrypted sync on Google Drive, no?
Is there anything better for such a purpose?

--max-age Duration  Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)

Is it relative to when they were last synced?
It is not so clear to me :/

no,
by default, rclone compares size and modtime only, not checksums. so that is quick.
if a file is transferred, then rclone compares the checksums afterwards, which, with gdrive, takes no time.
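fwiw, if you ever want to verify the data on the crypt remote end to end, there is rclone cryptcheck. a sketch using your remotes, note it has to read and re-encrypt every source file, so it is slow on a big tree:

rclone cryptcheck B:/ crypt:/B

not something for every run, more of an occasional integrity check.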

yes, it is very confusing.
in other words: if a source file's modtime is 24 hours old or newer, then rclone compares that source file to the dest file.

c:\data\rclone>rclone lsl d:\files\minmax
        0 2024-03-12 00:00:00.000000000 modtime_newer_than_24hours.txt
        0 1970-01-01 00:00:00.000000000 modtime_older_than_24hours.txt

c:\data\rclone>rclone lsl d:\files\minmax --max-age=24h
        0 2024-03-12 00:00:00.000000000 modtime_newer_than_24hours.txt

Oh ok, thanks for the explanation!
If so the correct command for me would be something like this:

rclone sync B:/ crypt:/B --create-empty-src-dirs -P --backup-dir crypt:/B-backup --checkers 18 --transfers 30 --max-size 3G --max-age=24h

?

welcome,

sure, try the command with --dry-run -vv and look at the debug output.
tho i would add --fast-list

and for the first time, you want to do a full sync?
if so, you might need to remove --max-age=24h
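so the first run could be your command above, minus --max-age=24h, plus --fast-list and --dry-run. drop --dry-run once the output looks right:

rclone sync B:/ crypt:/B --create-empty-src-dirs -P --backup-dir crypt:/B-backup --checkers 18 --transfers 30 --max-size 3G --fast-list --dry-run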

Great, thank you so much for all the help mate!

thanks for that

By the way, I wanted to ask: is there any limit on the number of checkers I can set?
I assume it would be an input/output limitation of my drive rather than something on the server side?

There is no strict limit imposed by rclone. However, if you set it too high you will hit either local IO limits or remote limits, or both. The only way to find the optimal value is to run a few tests. For your remote (Google Drive) the best idea is probably to use the defaults, as it is known to apply very aggressive throttling if you try to abuse it.

no limit, but as i pointed out with transfers, increasing checkers beyond a certain value will not yield improvements and could slow down the overall transfer.
it depends on the storage provider, your internet connection and your machine.

just have to do some basic testing, then you will know what works for you.
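for example, time a read-only check pass at a couple of values and compare. these numbers are just guesses to bracket the range:

rclone check B:/ crypt:/B --size-only --one-way --checkers 8
rclone check B:/ crypt:/B --size-only --one-way --checkers 32

whichever finishes faster without rate limit errors is a reasonable setting.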
