Serve webdav: caching problem (setting dir-cache-time not working)

What is the problem you are having with rclone?

I want to back up my Synology NAS with Hyper Backup to my Google Drive account (a teamdrive!). I used the rclone serve webdav command (see below) to start my own WebDAV server for my Google Drive and chose this WebDAV server as the target for my backup. I started the backup and it works, but every ~5 minutes the upload stops and rclone gets busy caching a lot of files from a directory. Caching the folder currently takes about 2 minutes, and it takes longer every time because the backup process adds new files afterwards. This is the message that appears for all ~1500 files of the directory during those 2 minutes:

DEBUG : <folder>: list: cached object: <file1>
DEBUG : <folder>: list: cached object: <file2>
DEBUG : <folder>: list: cached object: <...>
...

I thought "--dir-cache-time" would be the solution, but if I use "--dir-cache-time 12h" it seems to be ignored. It doesn't change anything. Does anyone have an idea that could help me? :slight_smile:

What is your rclone version (output from rclone version)

rclone v1.53.1
os/arch: linux/amd64

Which OS you are using and how many bits (eg Windows 7, 64 bit)

linux (synology os)

Which cloud storage system are you using? (eg Google Drive)

Google Drive shared Teamdrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone serve webdav gcache:hyper --config /volume1/.../rclone.conf --addr localhost:8XXX --user XXX --pass XXX --cache-chunk-path="/volume1/.../CHUNKS" --cache-db-path="/volume1/.../DB" --dir-cache-time 12h --fast-list -vv

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = 
client_secret = 
scope = drive
token = 
team_drive = 

[gcache]
type = cache
remote = gdrive:/gdrive
info_age = 1d
chunk_total_size = 10G

A log from the command with the -vv flag

2020/11/14 17:40:14 DEBUG : Using config file from "/volume1/.../rclone.conf"
2020/11/14 17:40:14 DEBUG : Creating backend with remote "gcache:hyper"
2020/11/14 17:40:14 DEBUG : Creating backend with remote "gdrive:/gdrive/hyper"
2020/11/14 17:40:15 DEBUG : fs cache: renaming cache item "gdrive:/gdrive/hyper" to be canonical "gdrive:gdrive/hyper"
2020/11/14 17:40:15 DEBUG : gcache: wrapped gdrive:gdrive/hyper at root hyper
2020/11/14 17:40:15 INFO  : gcache: Cache DB path: /volume1/.../DB/gcache.db
2020/11/14 17:40:15 INFO  : gcache: Cache chunk path: /volume1/.../CHUNKS/gcache
2020/11/14 17:40:15 INFO  : gcache: Chunk Memory: true
2020/11/14 17:40:15 INFO  : gcache: Chunk Size: 5M
2020/11/14 17:40:15 INFO  : gcache: Chunk Total Size: 10G
2020/11/14 17:40:15 INFO  : gcache: Chunk Clean Interval: 1m0s
2020/11/14 17:40:15 INFO  : gcache: Workers: 4
2020/11/14 17:40:15 INFO  : gcache: File Age: 1d
2020/11/14 17:40:15 DEBUG : Adding path "cache/expire" to remote control registry
2020/11/14 17:40:15 DEBUG : Adding path "cache/stats" to remote control registry
2020/11/14 17:40:15 DEBUG : Adding path "cache/fetch" to remote control registry
2020/11/14 17:40:15 DEBUG : Cache remote gcache:hyper: subscribing to ChangeNotify
2020/11/14 17:40:15 INFO  : Using --user XXX --pass XXXX as authenticated user
2020/11/14 17:40:15 NOTICE: Cache remote gcache:hyper: WebDav Server started on http://127.0.0.1:8XXX
2020/11/14 17:40:16 DEBUG : /: OpenFile: flags=O_RDONLY, perm=----------
2020/11/14 17:40:16 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2020/11/14 17:40:16 DEBUG : /: OpenFile: flags=O_RDONLY, perm=----------
2020/11/14 17:40:16 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2020/11/14 17:40:16 INFO  : /: PROPFIND from 127.0.0.1:36668
2020/11/14 17:40:16 DEBUG : Cache remote gcache:hyper: list ''
2020/11/14 17:40:16 DEBUG : : list: cold listing: 2020-11-13 17:38:14.790661432 +0100 CET

<LATER:>
DEBUG : <folder>: list: cached object: <file1>
DEBUG : <folder>: list: cached object: <file2>
DEBUG : <folder>: list: cached object: <...>

You are using the cache backend, which is deprecated and has its own set of settings.

It uses:

info_age = 1d

and that's the setting you are bumping against. You'd probably want to check out the new --vfs-cache-mode full instead of using the cache backend.
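As a rough sketch, serving the drive remote directly and letting the VFS layer handle caching could look like the command below. It is adapted from the original serve command; the truncated paths, port, and credentials are kept as placeholders, and the VFSCACHE directory name is a hypothetical example.

```shell
# Sketch only: serve the Google Drive remote directly (no cache backend),
# with file data cached on disk by the VFS layer (--vfs-cache-mode full)
# and directory listings kept for 12h (--dir-cache-time).
# Paths, port, and credentials are placeholders from the original command;
# the VFSCACHE directory name is a made-up example.
rclone serve webdav gdrive:gdrive/hyper \
  --config /volume1/.../rclone.conf \
  --addr localhost:8XXX --user XXX --pass XXX \
  --vfs-cache-mode full \
  --cache-dir /volume1/.../VFSCACHE \
  --dir-cache-time 12h \
  -vv
```

With this setup, --dir-cache-time is actually honored, because the directory cache lives in the VFS layer rather than in the deprecated cache backend, whose own info_age setting would otherwise take precedence.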

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.