Improve copy performance to gdrive?

What is the problem you are having with rclone?

Copy performance to gdrive is unexpectedly low on a fast connection, and I'm looking for anything to improve it. I'm currently on a symmetric fiber connection and can speedtest consistently around 400Mbit/sec, which seems sustainable as far as I can tell, though I'm testing over wifi and probably won't be able to try it wired. I wanted to copy some larger files, and a limit of 30MByte/sec seemed reasonable, but I can't seem to get a sustained transfer speed above ~2-3MByte/sec, and even that is often optimistic. I've run it several times over the last 3 days, so it doesn't appear to be a transient problem.

I do get what looks like an initial burst around 30MB/sec, though it jumps around a lot as I retry, so I'm not certain it's actually hitting that and then being throttled, which is what it looks like. The connection isn't supposed to be throttled, though, so I'm working on the assumption that that's true.

The machine I'm doing this from is a quad-core Ryzen laptop with 32GB of memory, and the files being uploaded are on an external SSD connected over USB 3. I don't see any resource limits being hit when I run the transfer.

So right now I'm mostly checking whether I messed up my command or whether there's something I should do differently.
Thanks!

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: Microsoft Windows 10 Home 2009 (64 bit)
  • os/kernel: 10.0.19043.1586 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy 'A:/#stage' 'gd:/#stage' --ignore-existing --verbose --transfers 4 --checkers 8 --bwlimit 30M --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --drive-chunk-size 512M --stats 1s --stats-file-name-length 0 --drive-pacer-burst 2000 --drive-pacer-min-sleep 5ms --user-agent ******* -P

The rclone config contents with secrets removed.

[gd]
type = drive
client_id = xxxx.apps.googleusercontent.com
client_secret = xxxx
scope = drive
root_folder_id = xxxx
token = {"access_token":"xxxx","token_type":"Bearer","refresh_token":"1//xxxx","expiry":"2022-04-06T10:34:58.7318335-04:00"}

[secO]
type = crypt
remote = gd:/secO
filename_encryption = obfuscate
password = 

A log from the command with the -vv flag


Transcript started, output file is rclone.txt
PS C:\scripts> .\copystuff.ps1
2022/04/06 10:50:55 INFO  : Starting bandwidth limiter at 30Mi Byte/s
2022/04/06 10:50:55 INFO  : Starting transaction limiter: max 10 transactions/s with burst 1
2022/04/06 10:50:55 DEBUG : rclone: Version "v1.57.0" starting with parameters ["C:\\rclone\\rclone.exe" "copy" "A:/#stage" "secO:/#stage" "--ignore-existing" "--verbose" "--tpslimit" "10" "--bwlimit" "30M" "--contimeout" "60s" "--timeout" "300s" "--retries" "3" "--low-level-retries" "10" "--drive-chunk-size" "512M" "--stats" "1s" "--stats-file-name-length" "0" "--drive-pacer-burst" "2000" "--drive-pacer-min-sleep" "5ms" "--user-agent" "*******" "-P" "-vv"]
2022/04/06 10:50:55 DEBUG : Creating backend with remote "A:/#stage"
2022/04/06 10:50:55 DEBUG : Using config file from "C:\\Users\\Chiefmas\\AppData\\Roaming\\rclone\\rclone.conf"
2022/04/06 10:50:55 DEBUG : fs cache: renaming cache item "A:/#stage" to be canonical "//?/A:/#stage"
2022/04/06 10:50:55 DEBUG : Creating backend with remote "secO:/#stage"
2022/04/06 10:50:56 DEBUG : Creating backend with remote "gd:/secO/55.#yzgmk"
2022/04/06 10:50:56 DEBUG : gd: detected overridden config - adding "{sPcu0}" suffix to name
2022/04/06 10:50:56 DEBUG : fs cache: renaming cache item "gd:/secO/55.#yzgmk" to be canonical "gd{sPcu0}:secO/55.#yzgmk"
2022/04/06 10:50:56 DEBUG : fs cache: switching user supplied name "gd:/secO/55.#yzgmk" for canonical name "gd{sPcu0}:secO/55.#yzgmk"
2022-04-06 10:50:56 DEBUG : Encrypted drive 'secO:/#stage': Waiting for checks to finish
2022-04-06 10:50:56 DEBUG : Encrypted drive 'secO:/#stage': Waiting for transfers to finish
2022-04-06 10:50:57 DEBUG : 172.pQxO txOP (3199) (6h PzxK LC 57JJ)/40.iJqH mqHI (6422) (9a IsqD Ev 80CC).CAL: Sending chunk 0 length 536870912
Transferred:      313.702 MiB / 73.217 GiB, 0%, 2.607 MiB/s, ETA 7h57m17s
Transferred:            0 / 1, 0%
Elapsed time:      1m58.7s
Transferring:
 * Huge File (1977) (4K scan of 35mm)/Huge File (1977) (4K scan of 35mm).mkv:  0% /73.217Gi, 2.607Mi/s, 7h57m16s
PS C:\scripts> TerminatingError(): "The pipeline has been stopped."
PS C:\scripts> Stop-Transcript
**********************
PowerShell transcript end
End time: 20220406105258
**********************

hi,
what if you use a very simple command and copy a single file, without so many flags?
rclone copy 'A:/#stage/172.pQxO txOP (3199) (6h PzxK LC 57JJ)/40.iJqH mqHI (6422) (9a IsqD Ev 80CC).CAL' 'gd:#stage' -vv -P

The same result. I just did the most basic copy I could just now; it starts with a 5MB/s burst and is now running at around 1.7MB/sec. So:
rclone copy 'A:/#stage' 'secO:/#stage' -P
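
Something else I can try (just an idea, given that secO is a crypt wrapper around gd: in my config) is the same copy straight to the unencrypted remote, to rule out the crypt layer:

rclone copy 'A:/#stage' 'gd:/#stage' -P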

Typically (on my normal connection, which has a much slower upstream) I only run a single file at a time, though with otherwise similar settings. I was only running multiple files this time because I initially had more files to push. This is the last one to go (and since it's the biggest, it hasn't managed to complete in the time frame I'm trying to hit).

When multiple transfers were going it wasn't any better; total bandwidth was still around 2-3MB/sec, just divided amongst the 4 transfers.

I'll try to figure out whether I'm getting throttled somewhere when I get a chance, even though I don't think that's it.

I take it you're using RcloneBrowser, since most of those flags are its defaults :wink:

This will probably not help you, but I'm also on symmetrical gig (with a VPS, though), and I have never experienced slow uploads. Depending on what I set with --bwlimit, the connection always gets close to maxed out, even with just one file. A large --drive-chunk-size helps here, of course.
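
One caveat with large chunks (my understanding from the rclone docs, not something tested in this thread): each running transfer buffers one full chunk in memory, so RAM use is roughly --drive-chunk-size times --transfers. For example (paths here are placeholders):

rclone copy C:\source gd:dest --transfers 4 --drive-chunk-size 512M --bwlimit 30M -P

holds on the order of 4 x 512MiB = 2GiB of upload buffers. Fine with 32GB of RAM, but worth keeping in mind before pushing the chunk size higher.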

I don't see anything wrong with your command. Here is mine, which is essentially the same:

rclone.exe move --ignore-existing --verbose --transfers 4 --checkers 8 --bwlimit 95M --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --stats 1s --stats-file-name-length 0 --drive-chunk-size 1024M --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --user-agent *******

Actually, no, I just have PowerShell scripts or run commands directly most of the time. I pulled a chunk of those flags from a thread here that came up in a search when I was first trying to get better copy performance, but that thread was based on mounting a drive, so I went looking for optimizations in the send-to-gdrive direction in case it was different for a copy to a remote.

I mean, I do use Rclone Browser, but only because my VPS uses it to do the config and add mount points; I don't tend to use it myself otherwise.

So it's starting to seem like maybe a throttle is happening. I'll make sure I try some other upload testing tonight.
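
To take the source SSD out of the picture, one thing I can try (a sketch; the local path and the gdrive test folder are made up for illustration) is generating a throwaway file locally and pushing just that:

fsutil file createnew C:\temp\rcltest.bin 1073741824
rclone copy C:\temp\rcltest.bin gd:/uploadtest -P -vv

fsutil takes the size in bytes, so that's a 1GiB file of zeros; if that also crawls at 2-3MB/sec, the USB disk isn't the bottleneck.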

Thanks!

hi,

very strange, a few minutes ago i could not get more than 5MiB/s to gdrive.
now, 70MiB/s.

ISP issue perhaps?

well, could be the isp or a glitch in the matrix, tho i did a speed test when the gdrive speeds were slow and it was good.
very strange indeed, but then again, i do not use gdrive on a daily basis.

fwiw, the only thing that changed between the two tests was that you posted in the forum.

If available in your network, you could try binding rclone to your IPv4 or IPv6 address only. I have observed very different results in some cases.
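
For example (the source and remote here stand in for your own; per the rclone docs, --bind 0.0.0.0 forces IPv4 and --bind ::0 forces IPv6):

rclone copy 'A:/#stage' 'gd:/#stage' --bind 0.0.0.0 -P
rclone copy 'A:/#stage' 'gd:/#stage' --bind ::0 -P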

I could try straight IPv6; I was actually only on IPv4 initially, since I had disabled IPv6 to prevent things from bypassing my AdGuard DNS.

Anyway, no improvements managed so far, but I haven't had a chance to test upload speeds anywhere else to see whether the issue is limited to gdrive.

thanks!

using ipv4 or ipv6 should not make a difference.
similar to you, i always disable ipv6 on all networks i manage.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.