Optimize Bulk Upload - GDrive

What is the problem you are having with rclone?

Looking for suggestions/feedback on the best way to get a large amount of data into GDrive as quickly as possible. I currently have 9 user accounts set up (still in my free period), and each one is feeding data to a shared drive.

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.56.2
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-96-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16.8
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

GDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -P --stats 0 --min-size 10M --modify-window 2s --size-only --bwlimit=8650k --checkers 1 --transfers 1 --buffer-size 0  --use-mmap /mnt/disks/s01/media/ Encrypted-01:
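
There are nine of these running, one per account and source disk. Spelled out as a loop it's roughly the following (the s02-s09 disk paths and the Encrypted-02 through Encrypted-09 remote names are just me following the same naming pattern as above):

for i in 01 02 03 04 05 06 07 08 09; do
    # one session per account, each copying its own disk to its own crypt remote
    rclone copy -P --stats 0 --min-size 10M --modify-window 2s --size-only \
        --bwlimit=8650k --checkers 1 --transfers 1 --buffer-size 0 --use-mmap \
        "/mnt/disks/s${i}/media/" "Encrypted-${i}:" &
done
wait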

The rclone config contents with secrets removed.

[GDrive-01]
type = drive
client_id = 910563921291-3uiac2usfk3javmcj5vm4e3sf387v6tj.apps.googleusercontent.com
client_secret = XXXXXX
scope = drive
root_folder_id = 
token = {"access_token":"XXXXXX","token_type":"Bearer","refresh_token":"XXXXXX","expiry":"2022-02-14T12:24:24.295484237-06:00"}
team_drive = XXXXXX

[Encrypted-01]
type = crypt
remote = GDrive-01:media
password = XXXXXX
password2 = XXXXXX

I have the exact same config repeated 8 more times; the only difference is that each GDrive-0X remote was authenticated with a different user.
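
For illustration, the second pair looks like this (redacted the same way; only the OAuth credentials and token differ, and the crypt passwords are the same in every pair so all accounts write into the same encrypted tree):

[GDrive-02]
type = drive
client_id = XXXXXX
client_secret = XXXXXX
scope = drive
root_folder_id = 
token = {"access_token":"XXXXXX","token_type":"Bearer","refresh_token":"XXXXXX","expiry":"XXXXXX"}
team_drive = XXXXXX

[Encrypted-02]
type = crypt
remote = GDrive-02:media
password = XXXXXX
password2 = XXXXXX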

A log from the command with the -vv flag


As I mentioned, I have multiple users running concurrently and the data is copying just fine. My data is split across 9 different drives, but they aren't all the same size. I use mergerfs to present a single mount point to applications (such as Plex) and haven't had any issues with it.
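
For reference, the mergerfs pool is set up along these lines (the paths and mount point shown here are simplified/illustrative; the real pool lists all nine disks):

mergerfs -o defaults,allow_other,category.create=mfs \
    /mnt/disks/s01/media:/mnt/disks/s02/media:/mnt/disks/s03/media /mnt/media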

Since some of the copy commands will finish before others (the smaller drives), what would be the best way to get the remaining files copied? Does rclone check for existing files at the start of each individual transfer, or only once when the command starts, after which it copies everything it found to be missing? Since I'm already running multiple sessions, it seems like pointing several of them at the mergerfs mount point would result in a bunch of duplicate files.
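
My rough idea for the cleanup pass is something like the below: build a single list of what's still missing, then split it across the freed-up accounts with --files-from so no two sessions try to copy the same file. Does that sound reasonable, or is there a better way? (The /mnt/media path, missing.txt, and the chunk- files are just illustrative names.)

# list files that exist locally but not yet on the shared drive
rclone check /mnt/media/ Encrypted-01: --size-only --one-way --missing-on-dst missing.txt

# split the list and hand each finished account its own chunk
split -n l/3 missing.txt chunk-
rclone copy /mnt/media/ Encrypted-01: --files-from chunk-aa --size-only -P &
rclone copy /mnt/media/ Encrypted-02: --files-from chunk-ab --size-only -P &
rclone copy /mnt/media/ Encrypted-03: --files-from chunk-ac --size-only -P &
wait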
