Rclone mount cache fills to 100% and then becomes unusable

What is the problem you are having with rclone?

I'm using rclone mount with the VFS cache on disk, and when the cache disk fills up the mount becomes unusable (it stops clearing space). Perhaps I have bad command line options?

What is your rclone version (output from rclone version)

$ rclone version
rclone v1.55.1

- os/type: linux
- os/arch: amd64
- go/version: go1.16.3
- go/linking: static
- go/tags: none

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 21.04

Which cloud storage system are you using? (eg Google Drive)

gdrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount \
  --cache-dir /home/spotter/rclone-cache \
  --allow-other --no-modtime --attr-timeout 8700h \
  --stats 10s --buffer-size 16M \
  --vfs-cache-poll-interval 30s --poll-interval 30s \
  --dir-cache-time 8700h --vfs-cache-max-age 96h \
  --async-read=true --vfs-read-wait 30ms --vfs-write-wait 30s \
  --vfs-cache-mode full \
  --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 1G \
  --local-no-check-updated --drive-chunk-size=64M \
  --multi-thread-streams=4 --multi-thread-cutoff 250M \
  --transfers 8 --drive-disable-http2=true --fast-list \
  --vfs-case-insensitive --vfs-cache-max-size 0.7T \
  --log-level ERROR --log-file=/tmp/rc-mount.log \
  --rc gcrypt: /data/gdrive

The rclone config contents with secrets removed.

I don't think this is config dependent.

A log from the command with the -vv flag

I don't have a -vv log, but I can see from the errors here that it's out of space:

2021/04/30 09:13:59 ERROR : <dir>/<file>: vfs cache: failed to open item: vfs cache item: open mkdir failed: make cache directory failed: mkdir /home/spotter/rclone-cache/vfs/gcrypt/<dir>: no space left on device
2021/04/30 09:13:59 ERROR : <dir>: vfs cache: failed to open item: vfs cache item: open mkdir failed: make cache directory failed: mkdir /home/spotter/rclone-cache/vfs/gcrypt/<dir>: no space left on device

I can see the storage is fully used up:
$ df -h /home/spotter/rclone-cache/
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdc1       734G  734G     0 100% /home/spotter/rclone-cache

Works fine here 🙂

2021/04/30 07:06:59 INFO : vfs cache: cleaned: objects 8468 (was 8468) in use 0, to upload 0, uploading 0, total size 307.211G (was 307.211G)

But I'm on Ubuntu 20.04.

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal

rclone -V
rclone v1.55.1

- os/type: linux
- os/arch: amd64
- go/version: go1.16.3
- go/linking: static
- go/tags: none

Is there anything obvious in my command line that would cause problems? Is my limit of 0.7T not enough for an 800 GB (or 734 GiB, I think) drive?

There's no log file, so I'm not sure what's going on.

If there are other things using that drive, they would also occupy space, so you may want to make the limit smaller, like 600G or something.
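For a rough sense of the headroom, assuming rclone's size suffixes are binary (so 0.7T means 0.7 TiB):

0.7T      = 0.7 × 1024 GiB ≈ 716.8 GiB  (cache cap)
disk size = 734 GiB                     (from df -h above)
headroom  ≈ 17 GiB

That margin can disappear quickly between cleaner passes.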

Nothing else is using the drive. It's an Oracle F80 completely dedicated to being the rclone cache (22+ PBW endurance means it shouldn't wear out when used as a cache).

So nothing obvious in my command line then? OK, I guess I'll have to restart it and see how it works with verbose logging. Any recommendations on what I should change my command line to?

The log file would be key as there isn't any other way to understand what's going on.

Since you have the flag at ERROR, only errors would be in the log.

You can turn on some logging and perhaps make the size a bit smaller, like the 600G I suggested above.

You can see what is using space by running something like du -sh * from /home/yourdirectory:

root@gemini:/home/felix# du -sh *
300K	GetScripts
8.0K	bin
1.5M	logs
8.8M	scripts
4.0K	test

I'm not at the computer now but last I checked it was literally just vfs and vfsMeta, i.e. nothing else.

The max size isn't a hard limit; it can go over for a short period while the cache cleans itself up.

I try to keep a bit of buffer: my disk is 916G and my max size is 750G, to leave some room for any issues.
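For context, the cleaner only checks the cache every --vfs-cache-poll-interval (default 1m), and files that are open can't be evicted, which is why usage can overshoot the cap. A trimmed sketch of the relevant flags (600G is just the value suggested above):

rclone mount \
  --vfs-cache-mode full \
  --vfs-cache-max-size 600G \
  --vfs-cache-poll-interval 1m \
  gcrypt: /data/gdrive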

I'd say in my case, once it hits the limit, it never cleans up. I'll try to run with -vv in the next few days when I'm back at the box and can stop and restart everything.

I don't think that's the case, as it would be strange if it broke just for you 🙂

I think your cache limit is configured a bit too close to your disk size and you ran out of room.

If you have a log with the error, that's all that's needed if you can recreate it.

Hmm, so now running with -vv (and not just ERROR) I can't reproduce it, while it was happening all the time before.

Still not having problems (I removed the -vv but kept logging at the normal level, not just ERROR). With all that said, I think it's problematic if the device fills up and rclone mount begins to choke. If a cache operation fails due to ENOSPC, it should automatically try to clear some space, IMO.

Without seeing any logs on what happened, it's best to not guess.

I have similar issues. I have Plex running off a read-only rclone mount, and many times now I find I have to reboot my whole system (running on AWS); when I log back in, the cache is at 100% (actually 1 GB bigger than my max). I clean the cache by hand (using rm -fr), restart rclone and Plex, and then the same thing happens again and again.

My rclone mount command is:

rclone mount \
  --vfs-cache-max-size 20G --vfs-cache-max-age 720h \
  --read-only --no-checksum --daemon-timeout 2h \
  --allow-other --daemon --no-modtime \
  --vfs-read-chunk-size 5M --vfs-read-chunk-size-limit 0 \
  --buffer-size 30M --vfs-read-ahead 200M \
  --vfs-cache-mode full \
  OneDrive: /mnt/onedrive

So this is a thing.
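For what it's worth, since I don't pass --cache-dir, I believe the cache I'm wiping lives in rclone's default cache location (~/.cache/rclone), so checking its size looks something like:

du -sh ~/.cache/rclone/vfs/OneDrive ~/.cache/rclone/vfsMeta/OneDrive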

Hello,

Perhaps remove --vfs-cache-max-age and use the default value.
If you are limited in free space, have you tried without using a VFS cache?
It is often not needed for streaming media.
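For example, a minimal mount for streaming might look like this (a sketch; with --vfs-cache-mode left unset it defaults to off, so nothing is cached on disk):

rclone mount \
  --read-only --allow-other --daemon \
  --buffer-size 30M \
  OneDrive: /mnt/onedrive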

The max size can go over as it's not a hard limit, and you also included no log file, so it's impossible to tell you why.

The majority of the time, folks have cache sizes set too small for what they are trying to accomplish and do not give the cache the space it needs to function.

If you'd like to share a log file, it's easy to answer.

What is the default value for --vfs-cache-max-age?

Where is the log file located?

It's documented here.

Use a log file: --log-level=DEBUG --log-file=/path/to/log.txt
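For example, added to your mount command (the remote, mount point, and log path here are placeholders):

rclone mount \
  --vfs-cache-mode full \
  --log-level=DEBUG --log-file=/path/to/log.txt \
  OneDrive: /mnt/onedrive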
