Upload on drive mount exceeding vfs cache

What is the problem you are having with rclone?

My rclone mount has --vfs-cache-mode writes and --vfs-cache-max-size 50M, yet when copying a 60GB file into the mount it is using 20+ GB of storage in /root/.cache/rclone/..

I have limited storage space and would like the cache to be very low or non-existent, so long as the upload can complete. Can someone please advise on how I can achieve this?

What is your rclone version (output from rclone version)

rclone v1.53.1

  • os/arch: linux/amd64
  • go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04.4 LTS, 64bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv --vfs-cache-mode writes --vfs-cache-max-size 50M --drive-chunk-size 32M --tpslimit 8 mount googledrive:backups/ /mnt/rclone --allow-other --umask 000

cp template-2020-10-01-1300.zip /mnt/rclone

The rclone config contents with secrets removed.

[googledrive]
type = drive
scope = drive
service_account_file = /root/.config/rclone/backups.json
team_drive = REMOVED



A log from the command with the -vv flag




2020/10/01 13:23:25 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36003905536 length 33554432
2020/10/01 13:23:26 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36037459968 length 33554432
2020/10/01 13:23:27 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36071014400 length 33554432
2020/10/01 13:23:27 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36104568832 length 33554432
2020/10/01 13:23:28 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36138123264 length 33554432
2020/10/01 13:23:29 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36171677696 length 33554432
2020/10/01 13:23:30 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36205232128 length 33554432
2020/10/01 13:23:30 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36238786560 length 33554432
2020/10/01 13:23:31 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36272340992 length 33554432
2020/10/01 13:23:32 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36305895424 length 33554432
2020/10/01 13:23:32 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36339449856 length 33554432
2020/10/01 13:23:33 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36373004288 length 33554432
2020/10/01 13:23:34 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36406558720 length 33554432
2020/10/01 13:23:34 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36440113152 length 33554432
2020/10/01 13:23:35 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36473667584 length 33554432
2020/10/01 13:23:36 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36507222016 length 33554432
2020/10/01 13:23:36 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36540776448 length 33554432
2020/10/01 13:23:37 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36574330880 length 33554432
2020/10/01 13:23:38 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36607885312 length 33554432
2020/10/01 13:23:38 DEBUG : scans/template-2020-10-01-1300.zip: Sending chunk 36641439744 length 30752768
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: MD5 = c1a7ba8f5acb7d9824f22328575ebc72 OK
2020/10/01 13:23:40 INFO  : scans/template-2020-10-01-1300.zip: Copied (replaced existing)
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: transferred to remote
2020/10/01 13:23:40 DEBUG : &{scans/template-2020-10-01-1300.zip (rw)}: >Flush: err=<nil>
2020/10/01 13:23:40 DEBUG : &{scans/template-2020-10-01-1300.zip (rw)}: Flush:
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: File.delWriter couldn't find handle
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: Size and modification time the same (differ by -299.272µs, within tolerance 1ms)
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: Unchanged skipping
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: transferred to remote
2020/10/01 13:23:40 DEBUG : &{scans/template-2020-10-01-1300.zip (rw)}: >Flush: err=<nil>
2020/10/01 13:23:40 DEBUG : &{scans/template-2020-10-01-1300.zip (rw)}: Release:
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip(0xc0008ee840): RWFileHandle.Release closing
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip(0xc0008ee840): close:
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: File.delWriter couldn't find handle
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: Size and modification time the same (differ by -299.272µs, within tolerance 1ms)
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: Unchanged skipping
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip: transferred to remote
2020/10/01 13:23:40 DEBUG : scans/template-2020-10-01-1300.zip(0xc0008ee840): >close: err=<nil>
2020/10/01 13:23:40 DEBUG : &{scans/template-2020-10-01-1300.zip (rw)}: >Release: err=<nil>
2020/10/01 13:23:40 DEBUG : /: Attr:
2020/10/01 13:23:40 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2020/10/01 13:23:40 DEBUG : /: Lookup: name="scans"
2020/10/01 13:23:40 DEBUG : : Re-reading directory (15m35.581871486s old)

Then do not use the VFS cache.
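For example, something like this should work (a sketch of your original mount command with the two cache flags dropped, so it falls back to the default --vfs-cache-mode off):

rclone -vv --drive-chunk-size 32M --tpslimit 8 mount googledrive:backups/ /mnt/rclone --allow-other --umask 000

With cache mode off, a sequential write such as cp is streamed straight to Google Drive, so nothing gets staged in /root/.cache/rclone.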

You'd need enough space to hold the biggest file you want to copy: with --vfs-cache-mode writes on, rclone keeps a full local copy of the file before uploading it.

You'd either have to remove writes (unless your workload needs it for writes to work) or have enough free space for the cache.
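If you do want to keep --vfs-cache-mode writes, note that --vfs-cache-max-size can't help here: as far as I know the cache won't evict a file that is still open for writing, so a single 60GB upload will always grow past the 50M limit. In that case the only real option is to point the cache at a disk with enough room via --cache-dir (the /data/rclone-cache path below is just a placeholder for wherever you have the space):

rclone -vv --cache-dir /data/rclone-cache --vfs-cache-mode writes --vfs-cache-max-size 50M --drive-chunk-size 32M --tpslimit 8 mount googledrive:backups/ /mnt/rclone --allow-other --umask 000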

Ah yes, thanks guys. I thought the cache was needed.
