Testing for new vfs cache mode features

I'll put that on the list!

It seems like the win64 build is breaking for some unrelated reason... Will update when I've worked out what is going on!

Great!

Yes, cache invalidation is one of the two hard problems in computer science, isn't it?

That is a great improvement - thanks for sharing that :slight_smile:

What difference is there in the mount commands before and after?

Thanks for all the work improving this. Will I have issues testing this beta using my old mount command? I seem to have a lot of issues with Plex scans causing all files in the drive to show: ERROR : IO error: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

rclone mount --allow-other --cache-dir "D:\rclone_cache" --dir-cache-time 72h0s --max-read-ahead 512k --no-checksum --no-modtime --read-only --vfs-cache-max-age 6h0s --vfs-cache-mode writes --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 128M --vfs-cache-poll-interval 5m0s --poll-interval 30m0s --log-level INFO --log-file D:\rclone_logs\rclone.log gdrive: C:\ServerApps\teamdrives\me3

No, it should be compatible. If you try the beta you will probably want to change --vfs-cache-mode writes to --vfs-cache-mode full to get the full benefit.

The only change I made to the mount command was changing --vfs-cache-mode writes to --vfs-cache-mode full.

The command I use is like this:
rclone mount gcrypt: /mnt/gcrypt --uid=1000 --gid=1000 --umask=002 --timeout=1h --tpslimit=10 --user-agent=RandomAgent --dir-cache-time=1440m --vfs-cache-mode=full --vfs-cache-max-age=72h --vfs-cache-max-size=200G --vfs-read-chunk-size-limit=off --vfs-read-chunk-size=128M --buffer-size=1024M --rc

I've moved from the previous version to this latest beta and everything is flowing well here. I'm using the flags below.
--allow-other
--allow-non-empty
--poll-interval 1m
--gid=1000
--uid=1000
--umask 002
--user-agent "6xXThq9bnCDN4NMJ"
--tpslimit 10
--dir-cache-time 2m
--vfs-cache-mode full
--vfs-cache-poll-interval 1m
--vfs-cache-max-size 40G
--vfs-cache-max-age 168h
--vfs-read-chunk-size 32M
--vfs-read-chunk-size-limit 1G
--buffer-size 16M
--log-file /opt/cache-rclone.log
--log-level NOTICE
If everything keeps working as it is, I believe it will just be a matter of time before I put this into production. Congratulations on the work!

OK, I finally got the Windows build to build! This is the same as the previous beta but with Windows binaries included. The Windows exe files should have icons now too :slight_smile:

https://beta.rclone.org/branch/v1.52.1-120-gfcc32a46-vfs-beta/

Great!

Thank you :slight_smile:

Can someone confirm that it is normal behaviour that the cache folder should contain a folder for the mount in vfs and vfsMeta, but there are no files contained in either?

You should have files in them. Or do you have very short timeouts, so the cache empties pretty quickly?
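
A quick way to check is to look in the cache directory directly; assuming the default cache location on Linux and a remote called gdrive (both illustrative):

% ls ~/.cache/rclone/vfs/gdrive/
% ls ~/.cache/rclone/vfsMeta/gdrive/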

Nevermind, I just switched cache mode to full and now I see it. I thought I'd already done that, apologies.

I am noticing that there are files that show as their full size (e.g. a 2GB video file inside the cache folder) even though rclone has only accessed the first few MB when scanning into Plex. It seems this will fill up the cache unnecessarily quickly if it thinks that file is 2GB rather than the downloaded 10MB.

Size is "full", but "size on disk" should show the real data downloaded.

2020/06/19 17:15:15 ERROR : Documents/Untitled document.docx: Failed to copy: can't update google document type without --drive-import-formats
2020/06/19 17:15:15 DEBUG : Documents/Untitled document.docx: >_store: err=vfs cache: failed to transfer file from cache to remote: can't update google document type without --drive-import-formats

I'm not sure why the vfs cache is not able to upload a file regardless of the format. A file is a file. It should upload.

Debug Mount Settings:

/usr/bin/rclone mount -vv --progress --stats 30s --allow-other --cache-dir /var/cache/rclone --config /etc/rclone.conf --drive-use-trash --fast-list --uid 65534 --gid 100 --umask 007 --vfs-cache-mode full --vfs-cache-poll-interval 1m --vfs-cache-max-age 8760h --vfs-read-chunk-size 32M GoogleDrive: /srv/GoogleDrive

Had to use:

--drive-skip-gdocs

I certainly think this should be a default. I didn't have any issues with the cache backend prior to this.

Google docs aren't really files. Rclone has to export them, so if you want to save them you need to reimport them.

Skipping them is probably a good plan for the moment.

I'd like them to work but there is a bit more work to do.
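
For anyone who wants to experiment rather than skip them, the error above points at the relevant flag; something like this might work for docx files (an untested sketch, reusing the mount from the debug settings above):

/usr/bin/rclone mount --vfs-cache-mode full --drive-import-formats docx GoogleDrive: /srv/GoogleDrive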

I don't know the details, but they are certainly real files in my drive (cat file.docx shows a bunch of binary data) even if they just end up being a reference to the real Google Docs file.

I'm running in full mode now, and might have hit another corner case, but it will take some work for me to replicate again. During a post-processing operation, a "No such file or directory" error was reported right after moving over the file, before the upload started.

I think the key piece here may be that the cache wouldn't accommodate the moved file without some purging happening. The limit is set to 40GB; here's the log, including the post-processing entries where the move is started, rclone purging some files, the post-processor unable to read the file it just moved, and the eventual completion of the file copy.

2020/06/19 12:29:11 INFO  : Cleaned the cache: objects 107 (was 107) in use 1, to upload 0, uploading 0, total size 39.037G (was 39.020G)
2020/06/19 12:30:11 INFO  : Cleaned the cache: objects 107 (was 107) in use 1, to upload 0, uploading 0, total size 39.055G (was 39.037G)

Jun 19 12:30:52 HOST CouchPotato.py[359]: 06-19 12:30:52 INFO [tato.core.plugins.renamer] Moving "/data/download/process/REDACTED.mkv" to "/data/media/REDACTED.mkv"

2020/06/19 12:31:11 INFO  : REDACTED2.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:31:11 INFO  : REDACTED2.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:31:11 INFO  : Cleaned the cache: objects 107 (was 108) in use 2, to upload 0, uploading 0, total size 39.496G (was 39.055G)
2020/06/19 12:32:11 INFO  : REDACTED3.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:32:11 INFO  : REDACTED3.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:32:11 INFO  : REDACTED4.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:32:11 INFO  : REDACTED4.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:32:11 INFO  : REDACTED5.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:32:11 INFO  : REDACTED5.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:32:11 INFO  : REDACTED6.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:32:11 INFO  : REDACTED6.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:32:11 INFO  : Cleaned the cache: objects 103 (was 107) in use 2, to upload 0, uploading 0, total size 39.714G (was 39.496G)

Jun 19 12:32:38 HOST CouchPotato.py[359]: 06-19 12:32:38 ERROR [tato.core.plugins.renamer] Couldn't move file "/data/download/process/REDACTED.mkv" to "/data/media/REDACTED.mkv": Traceback (most recent call last):
Jun 19 12:32:38 HOST CouchPotato.py[359]: OSError: [Errno 2] No such file or directory: '/data/media/REDACTED.mkv'

2020/06/19 12:33:11 INFO  : REDACTED7.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:33:11 INFO  : REDACTED7.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:33:11 INFO  : REDACTED8.mp4: vfs cache: removed cache file as over quota
2020/06/19 12:33:11 INFO  : REDACTED8.mp4: vfs cache: removed metadata from cache as over quota
2020/06/19 12:33:11 INFO  : Cleaned the cache: objects 101 (was 103) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.714G)
2020/06/19 12:34:11 INFO  : Cleaned the cache: objects 101 (was 101) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.803G)
2020/06/19 12:35:11 INFO  : Cleaned the cache: objects 101 (was 101) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.803G)
2020/06/19 12:36:11 INFO  : Cleaned the cache: objects 101 (was 101) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.803G)
2020/06/19 12:37:11 INFO  : Cleaned the cache: objects 101 (was 101) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.803G)
2020/06/19 12:38:11 INFO  : Cleaned the cache: objects 101 (was 101) in use 1, to upload 0, uploading 1, total size 39.803G (was 39.803G)
2020/06/19 12:38:16 INFO  : REDACTED.mkv: Copied (new)
2020/06/19 12:38:16 INFO  : REDACTED.mkv: vfs cache: upload succeeded try #1

This happened 3 times actually, with all files in question being > 10GB, so extra time for file move/copy operations may have factored into this. Is that enough to go on @ncw? I wasn't running with debug logging on at the time, and there were no errors at the INFO level. This was with build v1.52.1-120-gfcc32a46-vfs-beta and the following mount options:

  --attr-timeout 0 \
  --dir-cache-time 24h \
  --poll-interval 15s \
  --vfs-cache-mode full \
  --vfs-cache-max-age 6h \
  --vfs-cache-max-size 40G \
  --fast-list \
  --drive-skip-gdocs \
  --drive-use-trash=false \
  --drive-disable-http2=false \
  --allow-other \
  --umask 002 \
  --gid 1001 \
  --uid 1000 \
  --rc \
  --use-mmap \
  --log-level INFO

The sequence of events is this:

  • start a copy into the cache
  • a cache expire runs
  • you get a file not found error

Is that right?

How do you move files into the cache? Is there a rename involved?

The cache didn't delete the file you were moving in, did it?

I'm playing with this now, and I noticed funny behavior during a Google "download quota exceeded" error. Reads just hang, and core/stats shows this:

{
        "bytes": 0,
        "checks": 0,
        "deletes": 0,
        "elapsedTime": 25.558658748,
        "errors": 661,
        "fatalError": false,
        "lastError": "vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded",
        "renames": 0,
        "retryError": true,
        "speed": 0,
        "transferring": [
                {
                        "bytes": 0,
                        "eta": null,
                        "group": "global_stats",
                        "name": "<redacted>",
                        "percentage": 0,
                        "size": 4468654463,
                        "speed": 0,
                        "speedAvg": 0
                }
        ],
        "transfers": 0
} 
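
For reference, that output comes from the remote control API; with the mount started with --rc it can be fetched like this:

% rclone rc core/stats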

I guess I would have expected this to return an IO Error, same as if VFS wasn't enabled. I'm not sure if this is new, as I wasn't actually using vfs before trying the new full mode.

Edit: And I have an ffprobe instance hung waiting on disk now.

Edit 2: Now unbanned. ffprobe completed after hanging for the couple-hour wait.

I love sparse files :slight_smile:

% du -hs ~/.cache/rclone/vfs
17G     /home/frank/.cache/rclone/vfs
% du -hs --apparent-size ~/.cache/rclone/vfs
23G     /home/frank/.cache/rclone/vfs

The cache expire could certainly be the culprit here; the file is large enough that it's almost a given that the expire task will run while it's still moving.

This is being done in a Python 2.7 app using shutil.move(), which, looking at the docs, will actually do a copy when moving across filesystems, followed by an attempt to set the same file attributes as the source file. So that could be exacerbating things. But I don't believe any rename is done once the file is on the destination.
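
In other words, when source and destination are on different filesystems, shutil.move() ends up doing roughly the shell equivalent of this (paths taken from the log above):

cp -p /data/download/process/REDACTED.mkv /data/media/REDACTED.mkv
rm /data/download/process/REDACTED.mkv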

Finally, I don't believe that this file was reaped from the cache, as there would have been plenty of files with an older access time, and I do see the removal of this file later in the logs.

I'll try and replicate this with a different cache path and a small max-cache size and a large enough file with debugging on and see what I can find.
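
Something along these lines, with the cache path, size limit and remote name all illustrative:

rclone mount GoogleDrive: /mnt/test --cache-dir /tmp/rclone-vfs-test --vfs-cache-mode full --vfs-cache-max-size 1G --log-level DEBUG --log-file /tmp/rclone-vfs-test.log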

What do you think rclone should do here? At the moment it will keep trying forever and it did succeed eventually! Maybe it should error out after a while? After so many consecutive errors...?

I always thought once you hit that on a mount, pretty much life was over.

Not sure if others have different experiences, but I've never seen that error come out.

I guess it should error out, as it won't work until the quota resets.