Problem with rclone upload speed

Good Morning

I'm running rclone version 1.48 on Windows Server 2008 R2 with the command below:
"C:\rclone\rclone.exe sync -uv --timeout=600s --ignore-existing --transfers=5 --checkers=10 --progress C:\FOLDER\ BACKUP-S3:Bucket/ --log-file C:\Script\Logs\sync.log"

My problem is the upload rate: it maxes out at about 20 Mb. On Linux I don't have this problem. Could you help me?

Best Regards
Fernando Felipe Kuss

When it comes to performance in rclone there are many things you need to be aware of.
One issue is that performance on many small files can often be quite poor. Can you confirm it happens even on large files?

Also, increasing --s3-chunk-size should help speed up larger files considerably (at the cost of using that much more memory per transfer), as it's only 5MB by default. The default for --s3-upload-cutoff is 200MB; above that size rclone starts chunking the files, and if it then uses those 5MB segments you will probably not get good utilization of your bandwidth. I would try setting --s3-chunk-size to at least 100MB if you have the RAM for it. The larger the better, but much past this point you will likely see sharply diminishing returns unless you have extreme bandwidth.

Also, how much bandwidth do you have? What numbers were you expecting to see?

Increasing --transfers will help if you have lots of small files. You can make it large - 32 or 64 - S3 can take it!
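As a rough sketch only (keeping the paths, remote name and log location from your first post as placeholders, and assuming the machine has enough RAM), your command with a larger chunk size and more parallel transfers might look something like this:

"C:\rclone\rclone.exe sync -uv --timeout=600s --ignore-existing --s3-chunk-size=100M --transfers=32 --checkers=10 --progress C:\FOLDER\ BACKUP-S3:Bucket/ --log-file C:\Script\Logs\sync.log"

Keep in mind that each multipart transfer buffers at least one full chunk in memory, so 32 transfers at 100M chunks could need several GB of RAM in the worst case.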

Just out of curiosity for my own use - this doesn't apply to Gdrive, right? I guess S3 has less aggressive rate limiting in that regard?

Yes, Drive has much more serious rate limiting. Because you pay for every transaction on S3, AWS try not to rate limit things.

I need to upload 1 TB of data to the destination S3 bucket on AWS. The file sizes vary between 200 MB and 1 GB, and my internet link is 100 MB. With the parameters below, it starts with a transfer rate of 30 Mb and after a while it decreases. Are there any additional parameters to keep it at 30 Mb or 50 Mb?

"sync -uv --timeout = 600s --ignore-existing --s3-chunk-size = 100M --s3-upload-cutoff = 100M --transfers = 10 --checkers = 10 --progress"

Most likely the reason it seems to slow down is that it starts out transferring some large, efficient files and then hits a group of small ones. That will start to drag down your reported average speed.

If so, the only real workaround is to increase --transfers further. As NCW said, you should be able to go significantly higher than 10, and that will definitely help.

You can also further increase the chunk sizes if you have plenty of ram to spare. This won't help for the small files, but it will make the larger transfers even more efficient.
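As an illustrative tweak of your command above (the 128M chunk size and 32 transfers are just example numbers, not tested values - scale them to the RAM you actually have):

"sync -uv --timeout=600s --ignore-existing --s3-chunk-size=128M --s3-upload-cutoff=100M --transfers=32 --checkers=10 --progress"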

If none of this works, I would double-check that it isn't an actual network issue - that you actually have 100Mbit of usable bandwidth at your current location. Anything from other users' traffic to dodgy WiFi can cause lower-than-expected throughput. A simple speedtest.net check should suffice.

EDIT: You might also take a look at your data and see if you recognize any folders that contain mostly large groups of small files. Archiving those types of folders before upload will make them far more efficient to transfer. This is more of a workaround than a solution, but in some cases it can absolutely be worth it. Any time a folder contains more than a few thousand files I generally ask myself if it would make sense to archive it before I store it. Hopefully in the near future there will be an rclone backend that can handle this sort of thing automatically in the background.
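Purely as an example (7-Zip is just one option, and the folder name here is hypothetical), you could pack such a folder into a single uncompressed archive before syncing:

"7z a -mx=0 C:\FOLDER\many-small-files.7z C:\FOLDER\many-small-files\"

The -mx=0 switch stores the files without compression, which keeps it fast - the point is only to turn thousands of tiny objects into one large file that uploads efficiently.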
