Assistance needed on transfer performance: From Drive to GCS

What is the problem you are having with rclone?

I'm trying to find a good combination of parameters for rclone. I need to move around 100 TB of data from Google Drive to Google Cloud Storage.

I ran a test with the following scenarios:

1st - Multiple files (9) of 5 GB each, ~45 GB in total - it took 3m0.8s.
2nd - One big file of 500 GB - it took 1h42m10.9s.

I'm requesting your help to check whether there is something wrong with the parameters I'm using. I have tried different combinations with no luck. I really appreciate your guidance.

**My main focus is to ensure that, both for many large files and for one huge file, I'm using the right command with proper parameters.**

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0

  • os/version: Microsoft Windows Server 2016 Datacenter 1607 (64 bit)
  • os/kernel: 10.0.14393.6897 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.22.1
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

From Drive to Google Cloud Storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --progress --buffer-size 256M --drive-chunk-size 512M --transfers 10 --checkers 16 --fast-list --drive-impersonate ADDRESS@domain.com --log-file=c:\rclone\timetest6.txt --log-level INFO --gcs-bucket-policy-only copy Gdrive: Gstorage:

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[Gdrive]
type = drive
scope = drive
service_account_file = C:\temp\serviceacc.json
team_drive =

[Gstorage]
type = google cloud storage
project_number = XXX
service_account_file = C:\temp\serviceacc.json
object_acl = authenticatedRead
bucket_acl = authenticatedRead
location = us
storage_class = MULTI_REGIONAL

A log from the command that you were trying to run with the -vv flag

Paste log here

welcome to the forum,

nothing looks wrong.

Thanks. I'm looking more for a suggestion about the combination I'm using - for example, whether the values make sense or can be improved.

odds are the limiting factor is local internet. not sure the combo matters that much.
can you post the results of an internet speed test?

fwiw, having windows server and local internet in the middle is not a good approach.

rent a cheap cloud vm from google, in the same region as the bucket.
run rclone on that vm.
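a minimal sketch of that setup, assuming the `gcloud` CLI is already authenticated - the instance name, zone, and machine type are placeholders, so pick a zone that matches the bucket's region:

```shell
# Create a small VM in the same region as the GCS bucket
# (name, zone and machine type here are just example values)
gcloud compute instances create rclone-mover \
    --zone=us-central1-a \
    --machine-type=e2-standard-4

# SSH in, install rclone, copy your rclone.conf across, then run
# the same copy command from inside Google's network:
rclone copy Gdrive: Gstorage: \
    --transfers 10 --checkers 16 --fast-list \
    --drive-impersonate ADDRESS@domain.com \
    --gcs-bucket-policy-only --progress
```

that way the data goes Drive -> Google network -> GCS, instead of a round trip through your local link.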

Your 1st transfer is running about 2 Gbit/s and the second about 650 Mbit/s - both very fast transfer rates so I'd say your parameters are pretty good.
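Those figures follow directly from the sizes and times you reported (taking 1 GB = 8×10⁹ bits):

```shell
# Throughput check from the reported test results:
#   test 1: ~45 GB in 3m0.8s  (180.8 s)
#   test 2: 500 GB in 1h42m10.9s (6130.9 s)
awk 'BEGIN {
  printf "test1: %.2f Gbit/s\n", 45 * 8 / 180.8      # ~2 Gbit/s
  printf "test2: %.0f Mbit/s\n", 500 * 8000 / 6130.9 # ~650 Mbit/s
}'
```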

Do you know how big the link is?

Judging from those transfer rates, I guess you are running this on a VM in Google Cloud.

Assuming you have lots of files, increasing --transfers may increase performance, but Google Drive gets a bit sniffy with many operations happening at once. I think you may not do better than 2 Gbit/s as I suspect Google Drive is probably throttled at that.
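A sketch of that tweak, with illustrative values only - `--transfers 16` is just an example, and `--tpslimit` (rclone's global transactions-per-second limiter) can help keep Drive's API pacing happy; tune both to taste:

```shell
# Example only: more parallel transfers, but rate-limit API calls
# so Google Drive doesn't start returning 403 rate-limit errors.
rclone copy Gdrive: Gstorage: \
    --transfers 16 \
    --checkers 16 \
    --fast-list \
    --tpslimit 10 \
    --drive-impersonate ADDRESS@domain.com \
    --gcs-bucket-policy-only \
    --progress
```

If Drive starts pacing you, you'll see "rateLimitExceeded" retries in a `-vv` log, which is the signal to lower `--transfers` or `--tpslimit` again.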

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.