Very slow upload speed to Google Drive (Crypt)

What is the problem you are having with rclone?

Uploading a single large file (over 83 GB) is very slow. Transfers of multiple smaller files typically saturate my internet connection over ethernet, but this one large file has been giving me trouble.

I use my own Google API credentials. When the transfers are successful and fast, the API activity looks good; as soon as it switches to the large file, throughput drops considerably. It has been running since 4/16 with barely any movement apart from a few spikes.

What is your rclone version (output from rclone version)

  • rclone v1.51.0
  • os/arch: linux/amd64
  • go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04.4 LTS (Linux 5.0.0-37-generic)

Which cloud storage system are you using? (eg Google Drive)

Google Drive (crypt)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /stuff/ GDrive_Crypt:/stuff/ --fast-list --progress --tpslimit=6 --tpslimit-burst=8 --transfers=9 --checkers=9 --log-file=$HOME/rclone.log

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Unfortunately I do not have logs with the verbose flag set, but I have restarted the command above with -vv this time.
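For reference, the restarted command is just the one above with -vv added:

rclone copy /stuff/ GDrive_Crypt:/stuff/ --fast-list --progress --tpslimit=6 --tpslimit-burst=8 --transfers=9 --checkers=9 --log-file=$HOME/rclone.log -vv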

Best to remove those flags and just use the defaults.

You want to increase --drive-chunk-size to something larger, like 512M or 1024M, if you have the memory to spare, as that will help a lot with large files.
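For example, something along these lines (treat 512M as a starting point and scale it to the RAM you can spare; the paths and log file are just your existing ones):

rclone copy /stuff/ GDrive_Crypt:/stuff/ --progress --drive-chunk-size 512M --log-file=$HOME/rclone.log -vv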

So I followed your recommendation and removed those flags. I also added the --drive-chunk-size flag, and it seems to have helped a little. It's bizarre that other, smaller files saturate the connection at around 2.5 MB/s, but this file always stays below 1.0 MB/s.

It has actually gone down since I took the screenshot, by the way; it's at ~750 kB/s now. I also notice that rclone is consistently using 100% of the CPU core it's running on, and it's using an enormous amount of RAM.
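For what it's worth, I'm watching it with a quick one-liner (standard Linux tools; adjust the pattern if you run more than one rclone process):

top -p "$(pgrep rclone | head -n1)"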

Could it be that the checkers are taking up the rest of the bandwidth? I do have a massive number of files in the source folder that I run the copy on:

$ tree -C . | tail -1
202404 directories, 1699289 files

Everything other than this problematic file has already been transferred.

If you are CPU bound, that would probably be the issue. What are the specs of the machine it's running on?

You may want to try copying the big file by itself, rather than running the huge sync, to see if that is the issue.
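For example, something like this — a sketch assuming the big file sits in a subfolder of its own (the path here is made up):

rclone copy /stuff/big-file-folder/ GDrive_Crypt:/stuff/big-file-folder/ --progress --drive-chunk-size 512M -vv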

Copying that folder specifically fixed the issue! This time it only had to transfer/check ~800 files instead of ~1.7 million, and it saturated the connection completely.

My CPU is an Intel Core i5-6500 @ 3.2 GHz (boost up to 3.6 GHz). Are there minimum specs that would let me avoid this issue?

Do you have a lot of files in a single directory?

--fast-list helps, but it will consume a lot of CPU and memory if you have a very large number of items in a directory.
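To get a sense of that, you could count the items at a single directory level locally, e.g. (the path is just an example):

find /stuff/some-directory -mindepth 1 -maxdepth 1 | wc -l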
