Transfer speed (small files)

What is the problem you are having with rclone?

Transfer speed (not sure if it's actually a problem).

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2

  • os/version: debian 12.1 (64 bit)
  • os/kernel: 6.1.0-11-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.3
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

sudo rclone --ignore-checksum --ignore-size -I -M --no-check-dest copy /home/eco/ ecocloud:EcoServer/ServerBackups/$(date +%Y%m%d)

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[ecocloud]
type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = XXX
team_drive =

Hello,
I am pretty new to Linux and trying to automate daily backups. I've been successful using this tool, thank you very much for providing it. My question is whether I can improve the transfer speed somehow.
...
Transferred: 1.085 GiB / 1.085 GiB, 100%, 12.533 KiB/s, ETA 0s
Transferred: 4056 / 4056, 100%
Elapsed time: 38m11.6s
...

If I just do a standard copy-paste from my Windows PC, from the mapped server drive to the linked Google Drive desktop folder, it takes about 60 seconds.

Google Drive and small files is a horrific combination. You can only create about 2 files per second, so it's not going to be great. Your best bet is to compress/zip/tar things prior to transferring; if you have a ton of files, you can't do anything other than wait it out, as it's a limit of the API.
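A minimal sketch of that approach, reusing the paths and remote from the command above (the archive name and the /tmp staging location are just examples):

    # Bundle the many small files into a single archive first: one big
    # upload instead of thousands of per-file create calls.
    tar -czf /tmp/eco-backup-$(date +%Y%m%d).tar.gz -C /home/eco .

    # Then upload the single archive.
    rclone copy /tmp/eco-backup-$(date +%Y%m%d).tar.gz ecocloud:EcoServer/ServerBackups/$(date +%Y%m%d)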

Ok great, thank you for the reply! I will look into zipping it before the transfer.

As the other poster said, maybe group and compress files.

Seeing as you could do the same thing in 60 seconds in another environment... maybe try adding --transfers 96 and --checkers 96. I found that the defaults of 4 are very conservative when dealing with small files.
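For reference, a sketch of that suggestion applied to the original command (the flags are standard rclone options; whether they help depends on the backend's rate limits):

    # Raise concurrency from the default of 4 parallel transfers/checks.
    rclone copy /home/eco/ ecocloud:EcoServer/ServerBackups/$(date +%Y%m%d) --transfers 96 --checkers 96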

That actually makes it worse. If you can only create about 2 files per second, trying to jam 96 through at a time slows it down even more.

It's best to start with the defaults as they work in most circumstances.

Not sure about your use case, but another option might be to not re-upload every single file every time you run rclone. Check out --backup-dir (sketch below).
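A sketch of that pattern, assuming a hypothetical layout with a "current" mirror plus dated archive directories (the --backup-dir path must sit outside the sync destination):

    # Sync only what changed; files that were updated or deleted on the
    # destination get moved into a dated archive dir instead of being lost.
    rclone sync /home/eco/ ecocloud:EcoServer/ServerBackups/current --backup-dir ecocloud:EcoServer/ServerBackups/archive/$(date +%Y%m%d)

Each run then uploads only the changed files, and the dated archive directories hold the previous versions.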

I'm making daily backups; I want 10 separate folders 🙂

rclone is not backup software. You will have a much better experience (including speed) using more appropriate tools like restic or kopia; both actually use rclone for some cloud providers. They deduplicate, compress, and aggregate small files into bigger chunks, something rclone does not do. Rclone is a fantastic and very capable tool for transferring files to/from/between clouds and local storage.
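For illustration, restic can use an existing rclone remote as its backend; a minimal sketch, with the repository path being an assumption:

    # One-time: initialize a restic repository stored via the rclone remote.
    restic -r rclone:ecocloud:EcoServer/ResticRepo init

    # Daily run: restic deduplicates and packs small files into larger
    # chunks, so only changed data gets uploaded.
    restic -r rclone:ecocloud:EcoServer/ResticRepo backup /home/eco

Individual files can still be pulled back out with restic restore, or browsed with restic mount.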

In that case, I would zip the individual files.

I looked at these, but I also want to be able to manually pull files if needed.

I ended up going with 7z, compressing the entire thing, then sending it through rclone. Took 20 seconds to move. Thanks for rclone!!
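That workflow would look something like this (a sketch; the exact command wasn't posted, and the archive name is an example):

    # Compress the whole directory into a single 7z archive, then upload it.
    7z a /tmp/eco-$(date +%Y%m%d).7z /home/eco
    rclone copy /tmp/eco-$(date +%Y%m%d).7z ecocloud:EcoServer/ServerBackups/$(date +%Y%m%d)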
