Uploading to Google Drive goes 900Mbps, then out of nowhere, 50Mbps

What is the problem you are having with rclone?

I upload to Google Drive with 3 transfers at 900Mbps, and out of nowhere this drops to 50-100Mbps. I then kill the rclone move command and run it again, and it's instantly back to 900Mbps.

What is your rclone version (output from rclone version)

rclone v1.50.1

  • os/arch: linux/amd64
  • go version: go1.13.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move "$FROM" "$TO" --transfers=3 --exclude *partial~ --checkers=3 --config /root/.config/rclone/rclone.conf --delete-after --min-age 2m --tpslimit 3 --drive-chunk-size 256M -v --log-file=$LOGFILE

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

N/A

welcome to the forum.

  1. each and every time you run rclone, does the speed drop that much, or was this a one-time slowdown?
  2. for more details in the log, change -v to -vv (example below).
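
for example, a minimal sketch (your real command has more flags; the point is just swapping -v for -vv and keeping the log file):

    # -vv gives debug-level detail in the log; add your other flags back in
    rclone move "$FROM" "$TO" -vv --log-file=$LOGFILE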

Thanks!

Every time I run it, it does it. This time around it took about 20 minutes to occur; I then killed it and started it again, it went back to 900Mbps, and within 5 minutes it slowed down again.

The logs with -vv show nothing, I'm afraid, other than checking and transferring the file at 1.488M/s and a 40 minute upload time for a 3.8GB file.

I did, however, just start it again and this time it started at 15Mbps... the weird thing is my network isn't even limited and I'm getting 1Gbps upload to everything.

well, you are running the latest rclone and a recent version of ubuntu.
and a simple speed test will most likely not be helpful.

keep in mind that before rclone uploads a file, it will calculate the checksum for that file.
are some of the files to be moved large in size?

you might want to use the --progress flag, as that will show more info such as average overall speed and instantaneous speed for each upload in progress.
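
for example, a minimal sketch with --progress added ($FROM, $TO and $LOGFILE are the placeholders from your post; keep your other flags alongside it):

    # --progress shows overall and per-file transfer speeds while the move runs
    rclone move "$FROM" "$TO" --transfers=3 --checkers=3 --progress -v --log-file=$LOGFILE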

perhaps @thestigma can help out.
i do not use gdrive, as it needs a lot of tweaking to optimize it.
i use wasabi and do not need to worry about gdrive issues.

The files are roughly 3GB in size. It's definitely something weird, because as soon as I kill the process and re-run the move, it instantly speeds back up. Could you clarify exactly how --drive-chunk-size works? If that 256M is full, does upload speed drop? If I remove that flag, does it use unlimited?

i have never used that flag; as i mentioned, gdrive needs a lot of tweaking and has a lot of limitations.
why not just remove the flag and see what happens?
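
for example, a sketch of your command with --drive-chunk-size simply left out, so rclone falls back to its default chunk size (the exclude pattern is quoted here so the shell does not expand it):

    rclone move "$FROM" "$TO" --transfers=3 --exclude "*partial~" --checkers=3 --config /root/.config/rclone/rclone.conf --delete-after --min-age 2m --tpslimit 3 -v --log-file=$LOGFILE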

Ok, will try that. I have noticed, by the way, that it always slows down at a whole number of GB uploaded: the last time it slowed at 7GB, the next time it slowed at 8GB, and now it's 5GB. It's ALWAYS on a whole number...

perhaps there is an issue with your internet connection.

have you ever really pushed the limits of your upload speed over a period of time with something other than rclone?

do you have other ways to upload to gdrive, as a test?

before you call this problem a bug, we need to do much more testing.

i am on microsoft windows, and i have several different tools to upload files to the cloud.
i am sure there are many such tools on linux?

When chunking uploads, rclone splits the file into segments of X megabytes each.
These effectively each act as a separate transfer.
Due to the way TCP works, each connection needs to ramp up its speed, so it may take a couple of seconds to reach its maximum potential. Thus the more stops and starts there are from each new chunk segment, the bigger the performance penalty and the lower your effective bandwidth utilization. This penalty is very low at 64M, and even less at 128M; 256M is not worth it unless you have more memory than you know what to do with. The default, however, is a mere 8M, which I do NOT recommend. Bandwidth utilization on that is rather poor, especially for a gigabit connection.

If you want to disable chunking entirely you can do that too, by setting a very high upload cutoff.
For some reason I need more concurrent transfers to fully use my bandwidth when I do that, though. Exactly why that is, I'll have to ask Nick about some day.
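
To make that concrete, a rough sketch of what I mean by a very high upload cutoff (this uses the drive backend's --drive-upload-cutoff flag; the 10G value and --transfers=6 are just example numbers, not recommendations):

    # with a 10G cutoff, your ~3GB files go up in one request each instead of in chunks;
    # a few extra --transfers helps keep the pipe full in that mode
    rclone move "$FROM" "$TO" --drive-upload-cutoff 10G --transfers=6 -v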

But in short - it will not be related to this problem. If you use 256M you are effectively already running the highest performance config possible.

Gdrive can take about 40-45MB/sec per stream (megabytes, not megabits). And you can have basically as many concurrent streams as you want (or more like as many as the new-connections-per-second limit will allow you), so in short... it's not a bandwidth issue at Google's end.

The most typical problem is that transfers of many small files are slow due to a backend limit of a little over 2 new connections per second. However, as I understand you, this happens even on large files? If so, that is very curious.

Definitely check the basics first of all. Check that you don't have any packet loss on the network. Do you use any wifi locally for the transfer? Check the disks you are copying from too: if something suddenly starts hitting them hard and they're HDDs, they can drop a lot in speed under really bad circumstances. How is your memory situation during the transfer? You aren't starting to swap to a swapfile on an HDD or something, right? 256M chunks do use 256M per transfer, so with 3 transfers that's nearly a GB of memory right there.
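
If it helps, a few generic checks along those lines (plain Linux tools, nothing rclone-specific; run them while a transfer is in progress):

    # look for packet loss on the route out (any well-known host will do)
    ping -c 100 8.8.8.8
    # memory and swap usage
    free -h
    # the si/so columns show whether the machine is swapping to disk
    vmstat 5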

I haven't seen any such issue myself where the speed varies that drastically. I only have 160Mbit, but that is always rock-solid assuming the files are of reasonable size. I am not aware of any settings that would cause this phenomenon to happen.

The only thing I take issue with in your command is --tpslimit 3. That is really not needed unless you have a very special setup; otherwise you are just limiting your API rate to 30% and removing your ability to burst for no good reason. Gdrive has a pacer that handles all this for you anyway. Leave it at the default unless you have a very good reason not to.
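
For reference, a sketch of your command with just --tpslimit dropped and everything else, including --drive-chunk-size 256M, kept as you had it (exclude pattern quoted for the shell):

    rclone move "$FROM" "$TO" --transfers=3 --exclude "*partial~" --checkers=3 --config /root/.config/rclone/rclone.conf --delete-after --min-age 2m --drive-chunk-size 256M -v --log-file=$LOGFILE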
