VFS cache userRateLimitExceeded handling

I’ve noticed that with the writes vfs option enabled, when you hit a "failed to transfer file from cache to remote: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded" error from going over the daily limit, the filenames still appear in the folder the files were supposed to be moved to, and they get deleted from the writes temp folder. But trying to access one of those files results in an error. Do these files exist on the local hard drive? Or is it an error in the cache database? When I restart rclone, they disappear. Are they supposed to be retried automatically somehow?

What mount command are you using?

I currently use this as my mount setup

rclone mount gmedia: /folder/gmedia \
        --config=/folder/.config/rclone/rclone.conf \
        --bind 1.2.3.4 \
        --drive-chunk-size=512M \
        --cache-dir /folder/.cache/rclone \
        --dir-cache-time 48h \
        --vfs-cache-max-age 1m \
        --vfs-read-chunk-size 64M \
        --vfs-read-chunk-size-limit off \
        --vfs-cache-mode writes \
        --buffer-size 512M \
        --umask 077 \
        --log-level INFO \
        --log-file=/folder/logs/rclone.log

That’s almost identical to mine. I’ve been running a lot of transfers up as I grabbed a few shows I was missing. What does the log look like?

From the looks of it, my uploads were copied over and spread out as they went up:

Jul 12 17:29:17 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E01.mkv.partial~: Copied (new)
Jul 12 17:36:26 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E01.mkv: Copied (new)
Jul 12 17:41:50 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E02.mkv.partial~: Copied (new)
Jul 12 17:50:34 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E03.mkv.partial~: Copied (new)
Jul 12 17:56:54 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E04.mkv.partial~: Copied (new)
Jul 12 18:02:33 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E05.mkv.partial~: Copied (new)
Jul 12 18:08:30 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E06.mkv.partial~: Copied (new)
Jul 12 18:17:12 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E07.mkv.partial~: Copied (new)
Jul 12 18:22:41 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E08.mkv.partial~: Copied (new)
Jul 12 18:27:52 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E09.mkv.partial~: Copied (new)
Jul 12 18:33:09 gemini rclone[2384]: TV/Band.of.Brothers/Band.of.Brothers.S01E10.mkv.partial~: Copied (new)

If you are using just cache-mode writes, it retries until it fails. If it fails, the file is gone from my understanding, as it won’t be retried again, and I’m assuming it’ll leave the cache-dir once the age expires.

--vfs-cache-mode writes
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.

This mode should support all normal file system operations.

If an upload fails it will be retried up to --low-level-retries times.
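I haven’t tried this, but one way to give yourself more headroom would be to raise the retry count and keep failed files in the cache longer so they don’t get purged before you notice (the 20 and 24h below are just example values):

rclone mount gmedia: /folder/gmedia \
        --vfs-cache-mode writes \
        --low-level-retries 20 \
        --vfs-cache-max-age 24h \
        ...

That doesn’t make the upload retry again after the daily quota kicks in, it just keeps the local copy around longer.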

This is what I’ve been seeing it do. What’s weird is the caching part: it caches the filename, file size and date from the original file even though the file failed to upload, so the file still appears in the folder it would have been uploaded to but doesn’t actually exist on Google Drive.
And all of the files were gone from the VFS cache writes folder; nothing was in it. I’m guessing it added them to the database without first checking whether the file was successfully uploaded. Then on restarting rclone, the DB gets reset? Maybe it’s only held in memory, since I can’t find a local db file like with the rclone cache backend.
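One way to double-check whether they actually exist on Google Drive is to compare the folder as seen through the mount with a listing straight off the remote (the folder path here is just a placeholder):

ls /folder/gmedia/some/folder/
rclone lsf gmedia:some/folder/ --config=/folder/.config/rclone/rclone.conf

If the names only show up in the first listing, they’re presumably just entries in rclone’s directory cache rather than real files on Drive.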

: Copied (new)
: Copied (new)
: Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: failed to transfer file from cache to remote: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: RWFileHandle.Flush error: failed to transfer file from cache to remote: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: failed to transfer file from cache to remote: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: RWFileHandle.Flush error: failed to transfer file from cache to remote: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
: Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

So I’m trying to read through:

and if I’m understanding it, it reads through the files in the cache dir and purges the ones older than the cache age.

If you turn the logging up to debug, you should see the cache messages in the logs when the files get removed.
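For example, leaving the rest of the mount command the same and just bumping the log level should surface the cache clean-up messages:

rclone mount gmedia: /folder/gmedia \
        --vfs-cache-mode writes \
        --log-level DEBUG \
        --log-file=/folder/logs/rclone.log \
        ...

Adding -vv instead of --log-level DEBUG does the same thing.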

The uploads are separate and relate back to the backend that’s configured. The reason the file gets deleted is that once the upload fails, the cache above removes the file, since the two pieces aren’t really synced.

@ncw - keep me honest if I’m reading the code right :slight_smile:

That is correct.

If the upload fails it will just sit in the cache until the cache age expires.

This isn’t ideal, I agree, and it is the issue discussed here: https://github.com/ncw/rclone/issues/2382
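In the meantime, a rough workaround (untested sketch; the path below is a placeholder and assumes the vfs cache mirrors the remote’s directory layout under --cache-dir) is to push the cached copy up by hand once the quota resets:

# the failed upload should still be in the vfs cache until the age expires
ls /folder/.cache/rclone/vfs/gmedia/some/folder/

# upload it manually after the daily quota resets
rclone copy /folder/.cache/rclone/vfs/gmedia/some/folder/file.mkv gmedia:some/folder/ \
        --config=/folder/.config/rclone/rclone.conf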