VFS refresh causes Drive list errors

I am running a crypt+VFS rclone mount of my Google Drive, and I think some files exist on Drive that don’t appear in the mount. I mixed up two commands: I meant to run size but ran info against my VFS remote, and it ended up creating files on Drive that aren’t visible in the mount (which makes me think they weren’t encrypted and are corrupted). I deleted them manually since they were in the root directory, but it makes me wonder whether there are more buried in subdirectories, e.g. gdrive:folder1/folder2/folder3.

Is it possible to find files that exist on Google Drive but aren’t in the crypt? Or empty files, either on the crypt or on the raw gdrive remote? And is there a way to list files that come back with a bad-password or bad-blocks error?
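
For the bad-password part, would something like this work? List the crypt with debug output and grep for decrypt complaints (I’m guessing at the exact wording of the message, and gcrypt: is my crypt remote):

rclone lsf -R gcrypt: -vv 2>&1 | grep -i decrypt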

Can you share a specific example or two?

I haven’t seen an issue where a file goes missing, but I tend to deal with Plex-related files rather than small things. The longest delay you should see before something appears is one minute, since that’s the polling interval.

I wouldn’t recommend doing it, but I ran that (incorrect) info instead of the correct size command, and it created empty files with random-character filenames in the root of the crypt remote. They appear on Google Drive but not in the mount, even after clearing the cache and remounting. If I can search through the regular gdrive remote underneath the crypt and find empty files, maybe that would fix the problem?
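
Something like this is what I had in mind to spot the empty ones, since rclone lsl prints the size first (gdrive: being the unencrypted remote):

rclone lsl gdrive: | awk '$1 == 0'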

I’m not following: you said you weren’t using the cache backend, so why would you ‘clear the cache’?

Can you provide an example of what you are seeing?

My bad, I meant refreshing the VFS cache, not clearing it. I run a VFS refresh over the rc, and it would still show the files. Now that they are deleted, I get API errors on listing with every refresh, which makes me think residual files may be left over from that command mistake.
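
For reference, the refresh I run is just this against the mount’s remote control (the mount is started with --rc on the default address):

rclone rc vfs/refresh recursive=true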

Polling would pick up the change though, so you shouldn’t need to refresh.

Can you share logs/examples of what you are seeing?

Unfortunately I don’t have a log of it. I remember that when I ran it, it came back with some debug info, so I quickly Ctrl-C’d out of it and ended up with the files. I’ll try ls and see if any empty files appear, or possibly delete all files made on the day of the mistake.
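
For the delete-by-day idea, I’d dry-run it first with the age filters, something like this if the mistake was roughly a day or two ago (the 24h–48h window is just a guess):

rclone delete gdrive: --min-age 24h --max-age 48h --dry-run -v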

Here is a post that talks about finding files that aren’t in a crypt, but I’m not sure that’s your issue.
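
A quick sanity check is to compare object counts between the folder that backs the crypt and the crypt itself; if the raw side reports more objects than the decrypted side, something in there isn’t part of the crypt (assuming gdrive:encrypted is the path gcrypt: points at):

rclone size gdrive:encrypted
rclone size gcrypt: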

If you can capture something to look at, we can help to find the problem.

Oddly enough, no errors when executing that command. Before adding a tpslimit, though, I did get many Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. I have the default quota settings. I’m wondering if that’s the cause of the errors: going over the API limit could also make other things, like playback, slow to a crawl. I also use analyze in Radarr and Sonarr, which is nice to have, but not if it adds enough transactions per second to push me over the limit.
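
For reference, this is roughly what I ran after adding the limit; --fast-list also cuts down the number of listing calls on Drive at the cost of some memory (the 5 is just a guess at a safe rate):

rclone size gcrypt: --tpslimit 5 --fast-list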

403 rate limits are very common. Are you using your own API key? If you don’t limit the request rate a bit with Google, you get those. Rclone just retries, though.

I am using my own API key, but with the default quotas it goes over easily with rclone size.

What do you mean by this? If I decrease the limit, wouldn’t that negatively affect it?

You can only ever do about 10 transactions per second with Google, so you need to make sure whatever you are running stays within that.

As an example, I run a very low number of transfers/checkers for my upload each night:

/usr/bin/rclone move /data/local/ gcrypt: -P --checkers 3 --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

If you go over that, you get rate limited, which tells rclone to back off and slows things down. You have to find the sweet spot of just enough but not too many. More isn’t always better.

Ah, I see. I thought you meant adjusting the quotas in the Drive API console, not on the rclone side.