VFS directory cache control

What is the problem you are having with rclone?

Hello, I'd like to take control of my VFS cache directory. Whether I use vfs/forget or unmount the drive, I can still see a huge amount of disk used by cached files.
Please can you give me some advice about this directory?

  • Is it possible to flush this directory without losing data?
  • Is there a magic rclone rc command to flush it? (The vfs/forget call I'm already using is sketched below.)
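For reference, this is a minimal sketch of how I issue vfs/forget against the mount's remote control API, using the same $PORT passed to --rc-addr in the mount command further down. As far as I understand, vfs/forget only drops the in-memory directory listing cache, not the files stored on disk.

# Drop the whole in-memory directory cache of the running mount
rclone rc --url "http://localhost:$PORT" vfs/forget

# Or forget just one directory (path relative to the remote root)
rclone rc --url "http://localhost:$PORT" vfs/forget dir=path/to/dir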

What is your rclone version (output from rclone version)

rclone v1.53.1

  • os/arch: darwin/amd64
  • go version: go1.15.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

macOS Mojave

Which cloud storage system are you using? (eg Google Drive)

Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/local/bin/rclone cmount \
    --dir-cache-time 1000h \
    --log-level INFO \
    --log-file "$LOG_DIR/$CONFIG.log" \
    --poll-interval 15s \
    --umask 002 \
    --user-agent mine \
    --rc \
    --rc-addr :$PORT \
    --rc-no-auth \
    --cache-dir="$CACHE_DIR" \
    --vfs-cache-mode full \
    --vfs-cache-max-size 200G \
    --vfs-cache-max-age 336h \
    --transfers 8 \
    --volname "$VOL_NAME" \
    -o modules=iconv,from_code=UTF-8,to_code=UTF-8-MAC \
    "${CONFIG}:/" "$MOUNT_POINT"

The rclone config contents with secrets removed.

[drive]
type = drive
scope = drive

A log from the command with the -vv flag

Paste log here

Yes, you can flush the cache directory without losing data. I recommend doing it when rclone is stopped.
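In case it helps, here is a minimal sketch of doing that by hand, assuming the cache layout rclone v1.53 uses for --vfs-cache-mode full ($CACHE_DIR/vfs/<remote> for file data and $CACHE_DIR/vfsMeta/<remote> for metadata) and assuming any pending uploads have already completed:

# Unmount first so nothing is holding files open in the cache
umount "$MOUNT_POINT"

# Remove the cached file data and metadata for this remote
# (assumes the vfs/ and vfsMeta/ layout under --cache-dir used by v1.53)
rm -rf "$CACHE_DIR/vfs/$CONFIG" "$CACHE_DIR/vfsMeta/$CONFIG"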

Not yet, and there should be!

Fancy making a new issue on GitHub about that?

If you want to cache less stuff then reduce both --vfs-cache-max-size and --vfs-cache-max-age.
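For example, the cache-related flags from the mount command above with smaller limits (50G and 24h are illustrative values only, other flags omitted):

/usr/local/bin/rclone cmount \
    --cache-dir="$CACHE_DIR" \
    --vfs-cache-mode full \
    --vfs-cache-max-size 50G \
    --vfs-cache-max-age 24h \
    "${CONFIG}:/" "$MOUNT_POINT"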

