[Beginner] Fastest way to upload to GDRIVE ('large' files)

Hello Guys,

I want to upload ~10 TB to Google Drive and have the following problem:
When I upload with 4 transfers in a single terminal, I don't max out my 6.5 Mbit/s upload speed. To get as close as possible to that number I have to open multiple terminals (currently I'm using ~8 terminals with 1-4 transfers per terminal).

With just one terminal (1 terminal = 1 instance of rclone running), it doesn't matter how high I set my transfer count, I only get about 3 Mbit/s. The more instances I open, the closer I get to my full speed.

But as you might imagine, this is quite annoying to manage: because I don't want to abort an upload mid-file, I wait for a terminal to reach 100% on its current file and only then exit that terminal.

So I would prefer to max out my upload speed with a single terminal. What settings would you recommend for this? I don't know if you are familiar with JDownloader, but it has an option to download one file over multiple connections, which makes it easy to max out your speed even with a single file (chunks/connections) - the same goes for SABnzbd.
Is there an option like this for rclone, so that I can upload one file after another at full speed instead of 8-20 files in parallel?

Files are about 2 GB each.

Thanks in advance.

Edit: I can max out my speed with 4 transfers when using WebDAV instead of the Google Drive backend.

What version of rclone are you using?
What's the full command you are using?
Can you share a log with -vv on it?

Also, what operating system are you using?

At the very minimum you can increase the number of files uploaded in parallel with the --transfers flag. For example, rclone copy remote1: remote2: --transfers 15 -P will transfer 15 files at once from one terminal window. The -P flag shows progress in real time.
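Adapted to this thread's setup, it might look like the following (assuming the gdrive: remote and the Serien folder from the loop further down; the local source path is a placeholder - adjust to your setup):

rclone copy /path/to/Serien gdrive:/Serien --transfers 15 -P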

If you are new to rclone you should really familiarise yourself with rclone commands and rclone flags.

Another thing you might try is to increase the per-file upload buffer with --drive-chunk-size. For example, rclone copy remote1: remote2: --transfers 15 --drive-chunk-size 128M will increase the in-memory buffer per file from the default to 128M.
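Note that the chunk buffer is allocated per transfer, so 15 transfers at 128M can hold roughly 15 × 128M ≈ 2 GB in RAM at once. If memory is tight, fewer transfers with larger chunks is usually the better trade for upload speed, e.g. (hypothetical values, tune to your machine):

rclone copy remote1: remote2: --transfers 4 --drive-chunk-size 128M -P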

Lastly, you should create your own Client ID; see the bottom of this document for an explanation and instructions.
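Once you have the ID and secret, you can enter them during rclone config or paste them into the remote's section of rclone.conf. That section ends up looking roughly like this (placeholder values, shown only to illustrate where they go; rclone fills in the OAuth token itself during authorisation):

[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET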

According to the information that Harry referenced, uploading multiple chunks of the same file simultaneously is currently not supported by the Drive API.


ls -d *[!.BIN.ini.zip.MSOCache.rar.txt] | while read -r x; do echo "---" && echo "$x" && rclone sync "$x" "gdrive:/Serien/$x" -P --transfers={N} && echo "---"; done
with {N} = 1-4, depending on how many other terminals are running.
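As an aside, the *[!...] pattern is a single negated character class, so it filters on the last character of each name rather than excluding those extensions. If exclusion is the intent, bash's extglob expresses it directly - a sketch of the same loop under that assumption (requires bash, and that the sources are directories):

shopt -s extglob
for x in !(*.BIN|*.ini|*.zip|MSOCache|*.rar|*.txt); do
    echo "---"
    echo "$x"
    rclone sync "$x" "gdrive:/Serien/$x" -P --transfers={N}
    echo "---"
done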

$ rclone version
rclone v1.51.0

  • os/arch: windows/amd64
  • go version: go1.13.7

--drive-chunk-size looks promising.
I almost get full speed with 4 transfers.

ls -dr *[!.BIN.ini.zip.MSOCache.rar.txt] | while read -r x; do echo "---" && echo "$x" && rclone sync "$x" "gdrive:/Serien/$x" -P --drive-chunk-size 128M && echo "---"; done

98% of my bandwidth - that seems to be working great! Thank you!

That's good to hear, happy to help. Please mark the topic as solved.
