rclone sync from Google Drive to S3 bucket causing OOM errors

What is the problem you are having with rclone?

Hi, I'm trying to sync our Google Drive folder to our S3 bucket. rclone is running inside a Docker container with a memory limit of 5000M; I've also tried running it directly on my system. Either way it crashes with an OOM error every time, anywhere between 30 minutes and an hour and a half after starting.

Edit: Tried running it on the system without Docker and the results are the same: memory keeps growing until the whole system freezes.

System memory is 32GB
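
(For context, the container memory cap was applied with Docker's --memory flag; roughly like this, with the image name and mount path as placeholders rather than my exact invocation:

docker run --rm --memory=5000m -v /path/to/config:/config/rclone rclone/rclone sync ... )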

What is your rclone version (output from rclone version)

rclone v1.54.0
- os/arch: linux/amd64
- go version: go1.15.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04
Linux 5.8.0-41-generic #46~20.04.1-Ubuntu SMP
x86_64

Which cloud storage system are you using? (eg Google Drive)

Google Drive
S3 Bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

./rclone -vv --dump responses --drive-impersonate EMAIL_ID sync -P GoogleDrive: S3Bucket:ABC-gdrive-backup/ABC --config ./rclone.conf --use-json-log --drive-pacer-burst 8 --drive-skip-shortcuts --drive-list-chunk 40 --drive-chunk-size 4M --checkers 4 --transfers 4 --buffer-size 4M  --tpslimit 4 --rc
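
(The --rc flag is there so I can grab heap profiles while the sync runs; assuming the default rc address, the profile further down was captured with something like:

go tool pprof -text http://localhost:5572/debug/pprof/heap )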

The rclone config contents with secrets removed.

[GoogleDrive]
type = drive
scope = drive.readonly
service_account_file = FILE_NAME
client_id = 
root_folder_id = 

[S3Bucket]
type = s3
provider = AWS
env_auth = false
access_key_id =  
secret_access_key = 
region = 
location_constraint = 
use_https = true

A log from the command with the -vv flag

File: rclone
Type: inuse_space
Time: Feb 11, 2021 at 11:31am (NZDT)
Showing nodes accounting for 6GB, 99.83% of 6.01GB total
Dropped 105 nodes (cum <= 0.03GB)
      flat  flat%   sum%        cum   cum%
       6GB 99.83% 99.83%        6GB 99.83%  bytes.makeSlice
         0     0% 99.83%        6GB 99.83%  bytes.(*Buffer).ReadFrom
         0     0% 99.83%        6GB 99.83%  bytes.(*Buffer).grow
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/backend/drive.(*Object).Open
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/backend/drive.(*baseObject).httpResponse
         0     0% 99.83%        6GB 99.83%  github.com/rclone/rclone/backend/drive.(*baseObject).httpResponse.func1
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/backend/drive.(*baseObject).open
         0     0% 99.83%        6GB 99.87%  github.com/rclone/rclone/fs.pacerInvoker
         0     0% 99.83%        6GB 99.83%  github.com/rclone/rclone/fs/fshttp.(*Transport).RoundTrip
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/fs/operations.(*ReOpen).open
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/fs/operations.Copy
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/fs/operations.NewReOpen
         0     0% 99.83%        6GB 99.82%  github.com/rclone/rclone/fs/sync.(*syncCopyMove).pairCopyOrMove
         0     0% 99.83%        6GB 99.86%  github.com/rclone/rclone/lib/pacer.(*Pacer).Call
         0     0% 99.83%        6GB 99.87%  github.com/rclone/rclone/lib/pacer.(*Pacer).call
         0     0% 99.83%        6GB 99.83%  golang.org/x/oauth2.(*Transport).RoundTrip
         0     0% 99.83%        6GB 99.83%  net/http.(*Client).Do (inline)
         0     0% 99.83%        6GB 99.83%  net/http.(*Client).do
         0     0% 99.83%        6GB 99.83%  net/http.(*Client).send
         0     0% 99.83%        6GB 99.83%  net/http.send
         0     0% 99.83%        6GB 99.83%  net/http/httputil.DumpResponse
         0     0% 99.83%        6GB 99.83%  net/http/httputil.drainBody

Link to PPROF SVG -
https://drive.google.com/file/d/1ffAsmSMSG-JLsnyhdkfU7r2pstz1OUPc/view?usp=sharing

Thanks for the profile, very useful.

I'm pretty sure the cause of this is the --dump responses flag - I've seen memory leaks with that flag before.
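
The heap profile backs this up: the growth is all in net/http/httputil.drainBody filling a bytes.Buffer. A minimal sketch of the mechanism (not rclone's code, and the URL is a placeholder):

package main

import (
	"fmt"
	"net/http"
	"net/http/httputil"
)

func main() {
	resp, err := http.Get("https://example.com/large-file") // placeholder URL
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// DumpResponse(resp, true) has to drain the entire response body
	// into a bytes.Buffer (httputil.drainBody) so it can both render
	// the dump and hand back a replacement body to the caller. For a
	// multi-gigabyte Drive download, that buffer is the whole file
	// held in RAM until the transfer finishes.
	dump, err := httputil.DumpResponse(resp, true)
	if err != nil {
		panic(err)
	}
	fmt.Printf("dumped %d bytes\n", len(dump))
}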

Can you remove it?
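
That is, the same command with just that flag dropped:

./rclone -vv --drive-impersonate EMAIL_ID sync -P GoogleDrive: S3Bucket:ABC-gdrive-backup/ABC --config ./rclone.conf --use-json-log --drive-pacer-burst 8 --drive-skip-shortcuts --drive-list-chunk 40 --drive-chunk-size 4M --checkers 4 --transfers 4 --buffer-size 4M --tpslimit 4 --rc

If you still need request/response logging, --dump headers logs the HTTP headers without buffering the bodies.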
