VFS-Cache with large files to upload to Gdrive

What is the problem you are having with rclone?

I'm downloading large files (~30 GB) and then copying them to Google Drive. Unfortunately, the host that downloads these files doesn't have much space: its primary volume has 14 GB to spare, and a mounted volume has 32 GB.
The mounted volume is used mainly to store the download, and the primary volume is where the rclone mount should cache the download as it writes to gdrive.

This is the command that I'm using:

    rclone mount gdrive: /mnt/gdrive --config=/config/rclone/rclone.conf --allow-other --gid ${PGID} --uid ${PUID} --dir-cache-time 1h --buffer-size 256M --log-level ${LOG_LEVEL} --stats 1m --use-mmap --timeout 1h --drive-chunk-size 64M --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1024M --vfs-cache-mode writes --vfs-cache-max-size 14G

Yes, I know that --vfs-cache-mode writes with --vfs-cache-max-size 14G doesn't work here: the files I upload are much bigger than that, and it looks like rclone can't upload incomplete chunks of a file that is still being written to the cache.
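One way to confirm this is to watch how much space a partially written file is already taking in the cache while the copy runs. Note this is a monitoring sketch, not part of the mount command above: ~/.cache/rclone/vfs/gdrive is rclone's *default* VFS cache location for this remote, so adjust it if you set --cache-dir.

```shell
# Watch the on-disk size of the VFS write cache every 5 seconds while a
# large file is being copied through the mount. A single in-progress file
# can grow past --vfs-cache-max-size, which is the problem described above.
watch -n 5 du -sh ~/.cache/rclone/vfs/gdrive
```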

So, what are my alternatives? I get 30 MBytes/sec with direct uploads, but with the cache I can reach 65 MBytes/sec. Obviously I'll have to disable the cache, but what would be the best configuration to optimize throughput when uploading large files?
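For reference, the two alternatives in question could be sketched like this. The flags are real rclone options and the config path matches the mount above, but the local download path is illustrative:

```shell
# Option 1: mount without a write cache, so writes stream straight to Drive.
# Files must then be written sequentially through the mount.
rclone mount gdrive: /mnt/gdrive --config=/config/rclone/rclone.conf \
  --vfs-cache-mode off --drive-chunk-size 64M

# Option 2: skip the mount for uploads and move the finished download
# directly; this needs no local cache space at all. /mnt/downloads is a
# hypothetical path standing in for wherever the download lands.
rclone move /mnt/downloads gdrive:downloads \
  --config=/config/rclone/rclone.conf --drive-chunk-size 64M --transfers 2
```

With option 2, the file is deleted locally once the upload succeeds, which suits a host this short on disk.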

What is your rclone version (output from rclone version)

Rclone 1.53.

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian 10.

Which cloud storage system are you using? (eg Google Drive)

Google Drive.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone mount gdrive: /mnt/gdrive --config=/config/rclone/rclone.conf --allow-other --gid ${PGID} --uid ${PUID} --dir-cache-time 1h --buffer-size 256M --log-level ${LOG_LEVEL} --stats 1m --use-mmap --timeout 1h --drive-chunk-size 64M --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1024M --vfs-cache-mode writes --vfs-cache-max-size 14G

The rclone config contents with secrets removed.

    [gdrive]
    type = drive
    client_id =
    client_secret =
    scope = drive
    token =
    team_drive =
    root_folder_id =

The cached write goes to local storage first, so you can't compare that speed to uploading directly to a cloud remote.

I have 32 GB of memory, so I use a pretty big chunk size for uploads in my rclone.conf:

    chunk_size = 1024M

I'm talking about the speed once the cache starts flushing: it shows 65 MBytes/sec when transferring to the remote. Anyway, I'll disable the cache and start tweaking chunk_size. Unfortunately, I don't have that much RAM to spare.
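The RAM cost of a bigger chunk_size is easy to estimate: the Drive backend buffers roughly one chunk in memory per concurrent transfer, so this is a back-of-the-envelope upper bound, not a measured figure:

```shell
# Rough upper bound on upload buffer RAM: about one chunk in memory per
# concurrent transfer. With rclone's default of 4 transfers and the 1024M
# chunk size quoted above:
transfers=4
chunk_mb=1024
echo "$(( transfers * chunk_mb )) MB"   # prints "4096 MB"
```

On this host, 4 GB just for upload buffers is why a smaller chunk size (say 64M-256M) may be the better trade-off.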

hi,
perhaps i do not understand what you are doing, but if you are just downloading files and then uploading them,
why not use rclone copy /path/to/local/file gdrive:? then there is no need for a mount or a cache.
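Spelled out, that suggestion could look like this (the local path is the placeholder from above; the config path and chunk size mirror the mount command earlier in the thread):

```shell
# Upload a finished download straight to Drive -- no mount, no VFS cache,
# no local cache space needed beyond the download itself.
rclone copy /path/to/local/file gdrive: \
  --config=/config/rclone/rclone.conf --drive-chunk-size 64M --progress
```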

