Rclone speed problems

What is the problem you are having with rclone?

Sync speed is about 10% of the speed of the native PCloud client.

I've tested a 2 GB folder with 70k files, each 32 KB in size. This is a CryFS-encrypted filesystem (www.cryfs.org).

I got the speeds below on different clouds when trying to upload or sync.

Google Drive - 60 KB/s, then down to 30 KB/s after a bit
Yandex Drive - As above
Mega - As above, but down to 20 KB/s
PCloud - 60 KB/s

Then I tried syncing the same folder with the native PCloud Linux client: 700 KB/s upload speed!

It took me about 2 days to upload everything to Yandex via rclone.

The PCloud native sync client managed it in about an hour!

Why is the PCloud native sync client so much faster than rclone?

How can rclone close this speed discrepancy? I was hoping to switch to rclone in the future.

What is your rclone version (output from rclone version)

[newuser@manjaro ~]$ rclone version
rclone v1.49.5

- os/arch: linux/amd64
- go version: go1.13.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Manjaro Linux 64bit

Which cloud storage system are you using? (eg Google Drive)

PCloud
Yandex
Mega
Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Used Rclone Browser.
Also tried the copy and sync commands via the command line.

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Not kept.

The basic problem here is that a lot of cloud services limit how fast you can send API commands, or how many file transfers you can open per second. For example, Google Drive has a limit of about 2 transfers/sec. These limitations vary arbitrarily from service to service. I will use Gdrive as my main example here because that's the service I know the most about.

So yes - transferring 70K tiny files to Gdrive will be very slow. Even though the total amount of data isn't much, you very quickly run into the transfers-per-second limit. Large files, on the other hand, will likely max out your bandwidth. If you archived those files before transfer, they would go massively faster (hopefully we will eventually get a backend that can do this transparently, for the sake of performance on services like this).
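
As an illustration, a minimal sketch of that archive-first workaround - the paths and the remote name are made up for the example:

# bundle the many small files into a single archive (example paths)
tar -czf cryfs-backup.tar.gz -C /home/user cryfs-basedir
# upload one large file instead of 70,000 tiny ones
rclone copy cryfs-backup.tar.gz remote:backups -P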

On some providers with more lenient limits (like Wasabi S3, or Google Cloud) you can use many more transfers than the default 4 to remedy the problem significantly, but that won't help once you hit the limiters.

So TL;DR: there is very little rclone can do to fix this. These are limits set by the providers. If there were no limits, rclone could run as fast as you wanted.

As to the primary question of "why is PCloud so much faster": I don't know, because I don't know how PCloud operates internally. It may be giving the impression of transferring much quicker than it actually is due to some sort of local caching (rclone could do that too if you wanted), but unless it is using some sort of exclusive API with different limits, it should run into the exact same limitations no matter which client is used.

When we are talking about software I am unfamiliar with, I can only speculate about why you got your results. I'd need much more specific data about the exact experimental setup to make a more educated guess.

Hypothesis: PCloud may be one of those services that can handle far more simultaneous transfers than the usual 4. If so, and if the PCloud client uses many more by default, that would explain it. I have no idea what limitations PCloud has (go google it a bit), but you may be able to make rclone perform equally well with a much higher setting such as --transfers 48.
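
Something like this, where the local path and remote name are placeholders:

rclone sync /path/to/cryfs-folder pcloud:backup --transfers 48 -P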

I did think the intentional limitations set by each cloud service may be getting in the way.

The PCloud native client isn't just caching it. It actually uploaded everything 10x faster than rclone did with PCloud.

Seems we won't be able to figure out why, so I'll have to stick to the native client for faster speeds.

Pity.

Thanks

I'll try this.

Thanks

Oh, there are ways to figure out why... try more transfers, as I said (I can't tell you the optimal number - you'd have to google for info). What I can say is that if one client can perform like this, rclone should be able to as well, though it might require some adjustment of parameters.

Try using the -P flag on your rclone command when you test.
It should give you a visual indication of how fast things are transferring, and of whether most of the transfers just "hang and wait" when you use a really large --transfers value, or whether they are all being processed concurrently.
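
For example (the paths here are placeholders again):

rclone copy /path/to/folder remote:folder --transfers 100 -P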

BTW - using the rclone web GUI is, for the most part, functionally identical to using the command line. It's transferring via a mount that has some significant limitations you should be aware of, which makes a mount a bad place to test. A mount effectively adds 2 extra layers between you and the cloud that are needed for OS compatibility.

This did the trick.

Speeds shot up to 300 KB/s. Not as fast as the 700 KB/s I got natively - maybe my ISP is busier at night now. I'll do some tests, but I can live with this.

Would this also work with Mega, Yandex and Google Drive? I couldn't find their limits on Google.

Thanks

Tested the native client at the same time just now and it does 300-400 KB/s, versus 250-300 KB/s via rclone with --transfers 100. Not sure why it's still faster, but that's not a big difference now.

Thanks

NP. Do note that too many transfers may actually be counter-productive. There will always be SOME limit to what you can do, and it is usually better to stay right under that limit than to go way above it. More transfers isn't always better. You may need to fine-tune that number, or find more info on the net about the exact limits you are working with.

Tip for larger files: to get much better bandwidth utilization on large files (upload only), see if your backend supports a chunk-size flag. These are often small by default to keep the memory footprint low, but small chunks significantly hamper throughput because of how TCP ramps up. I recommend using up to 64M if you have the memory for it - but BE AWARE that this much memory can be used for EACH active transfer, so if you used 100 transfers with 64M chunks (6.4 GB) you would very likely run out of RAM and crash rclone. Set it as appropriate for your workload.
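
On Google Drive, for instance, the flag is --drive-chunk-size (the path and remote name here are placeholders):

# 64M chunks per transfer; memory use scales with --transfers
rclone copy /path/to/large-files gdrive:backup --drive-chunk-size 64M --transfers 4 -P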

You haven't shared a log or the actual command you ran, so it's hard to tell why something worked or didn't work.

Can you share the command and the log?

Was just using Rclone Browser.
Problem fixed with --transfers 100.

thanks
