Google Drive change polling and cache invalidation

https://beta.rclone.org/v1.40-031-ge42cee5e/

This beta should fix the issue reproduced here: the parent folder was not expired when an object came through as changed, even though that expiry was already being done in the VFS layer.


The new beta seems to be working great! Thanks for following up on this, remus!! Much appreciated :slight_smile:

I’m running v1.40-061-gd5b2ec32 with a crypt/cache-wrapped drive remote mounted, and I’m still seeing issues: newly created directories don’t show in the mount until I do a full expire with a HUP (I didn’t try targeting the parent folder through the rc expire command, but I assume that would work as well).

For a cached remote, can dir-cache-time be safely lowered, as the mount shouldn’t incur much of a penalty if it’s just hitting the bolt cache? Either way, it seems like a notification from drive should still bubble all the way up through the crypt and cache to the mount, no?

This is a read-only mount merged with a local dir via unionfs, so I have vfs turned off, and I’m wondering if that may be related. Thoughts?

I’ve switched to debug logging now as well so I’ll have some more details if/when this happens again.

How are the new files getting there? I’d say to use the cache-tmp-upload and ditch the unionfs as it just adds an unneeded layer of complexity.

If you capture the polling change via the debug log, that should point out if there’s an issue.

I’m doing a nightly cronjob that does a combination of rclone copy and rclone move with different exclusion lists and file age limits. I’d still prefer to keep this to control when specifically uploads happen, and to be able to keep some files local all the time. Worst-case scenario, I can have the script trigger rc cache/expire for files/directories that changed.
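That nightly job could be sketched roughly like this. Everything here is an assumption for illustration (the `crypt:` remote name, paths, age limits, exclude file, and which directory to expire); `RUN="echo"` makes it a dry run that prints the commands instead of executing them:

```shell
#!/bin/sh
# Hypothetical sketch of the nightly copy/move job described above.
# RUN="echo" prints the rclone commands (dry run); set RUN= to run them.
RUN="echo"

LOCAL=/data/local   # assumed local staging dir
REMOTE=crypt:       # assumed non-cache crypt remote used for uploads

nightly_sync() {
  # Copy recent files so a local copy is kept for a while...
  $RUN rclone copy "$LOCAL" "$REMOTE" --max-age 7d --exclude-from /etc/rclone/keep-local.txt
  # ...and move older files off local storage entirely.
  $RUN rclone move "$LOCAL" "$REMOTE" --min-age 7d --exclude-from /etc/rclone/keep-local.txt
  # Worst case: tell the mounted cache (started with --rc) to expire a
  # directory that changed outside the mount.
  $RUN rclone rc cache/expire remote=TV/
}

nightly_sync
```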

are you getting a .unionfs in your local dir that’s causing something to be hidden?

If you run the job by hand, you should see the expiration in the debug too if it’s not working properly.

Nope, nothing related to files masked anywhere with a .unionfs folder.

I’ve just manually done a rclone copy of a folder that didn’t exist on the remote when I mounted it, and the next check for changes on drive didn’t seem to find anything that needed doing:

rclone[26169]: 2018/04/10 21:28:01 DEBUG : Google drive root 'Plex': Checking for changes on remote
rclone[26169]: 2018/04/10 21:28:01 DEBUG : Google drive root 'Plex': All changes were processed. Waiting for more.
rclone[26169]: 2018/04/10 21:28:02 DEBUG : Google drive root 'Plex': Checking for changes on remote
rclone[26169]: 2018/04/10 21:28:02 DEBUG : Google drive root 'Plex': All changes were processed. Waiting for more.

rclone rc vfs/forget did nothing as expected since I don’t have VFS enabled.

rclone rc cache/expire remote="existingDirectory/newDirectory" confirmed that this wasn’t in the cache:

2018/04/10 21:33:37 Failed to rc: operation "cache/expire" failed: remote control command failed: 4pd8m573jua2rm9g0iscn82tac/0cr7kkd898l3o6lnmm7lmlvvk8 doesn't exist in cache

The only thing that worked was rclone rc cache/expire remote="existingDirectory/"
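Since expiring the new directory itself fails (it isn’t in the cache yet) but expiring the existing parent works, a small helper can always target the parent of whatever path changed. This is a hypothetical sketch, not part of rclone; the example path is an assumption:

```shell
# Hypothetical helper: given a path that changed outside the mount,
# build the rc call that expires its *parent* directory, since a brand
# new entry doesn't exist in the cache and can't be expired directly.
expire_parent_cmd() {
  parent=$(dirname "$1")
  printf 'rclone rc cache/expire remote=%s/\n' "$parent"
}

# Print the command rather than running it, so this works without a mount:
expire_parent_cmd "existingDirectory/newDirectory"
# -> rclone rc cache/expire remote=existingDirectory/
```

Piping the output to `sh` (or dropping the `printf` for a direct call) would run it against a live mount started with `--rc`.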

My current config for reference:

[gdrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2018-04-10T22:07:58.232237126-04:00"}

[gdrive-cache]
type = cache
remote = gdrive:Plex
plex_url = http://127.0.0.1:32400
plex_token = REDACTED
chunk_total_size = 50G

[gdrive-cache-crypt]
type = crypt
remote = gdrive-cache:
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

# Used for copy/move operations
[crypt]
type = crypt
remote = gdrive:Plex
filename_encryption = standard
password = REDACTED
password2 = REDACTED

And my current mount command (I dropped dir-cache-time to 6h to avoid things going missing for a week if I didn’t notice; it was 168h before):

/usr/bin/rclone mount gdrive-cache-crypt: /data/.gdrive --read-only --allow-other --dir-cache-time=6h --cache-chunk-size=5M --cache-info-age=168h --cache-workers=8 --buffer-size 0M --attr-timeout=1s --umask 002 --rc --log-level DEBUG --gid 1001 --uid 1000

When you copy the file in, you don’t see any cache expiry notifications?

I just tested and I get:

Apr 10 22:16:02 gemini rclone[4198]: hosts: received cache expiry notification

I’m running:

[felix@gemini gmedia]$ rclone -V
rclone v1.40-040-g6e11a25dβ

I just did a rclone copy of /etc/hosts into my GD following your same config, with a crypt secondary config.

Nope, no cache expiry notifications for a new directory. I had notifications from last night’s copy operation when files that were already on the remote were overwritten/updated, but nothing for the copy operation of this new directory and file.

The only thing I’m doing that seems slightly non-standard is having the mount read-only?

Any thoughts on where to look next @ncw?

New directories not showing up is a known issue as mentioned by seuffert, you can follow the issue on github here: https://github.com/ncw/rclone/issues/2155

@talisto - thanks. I was trying with a file and that was working. I forgot that issue with directories was still open. Thanks for pointing that out!

Thanks, I was unclear what the exact issue was that was fixed in the beta 12 days ago.

Does anyone have any tips on mitigation for this issue? Is the only solution to manually expire the parent directory when a new subdirectory is added outside the mount? Or is it possible to lower some of the other mount cache times and not be largely impacted when a cache-wrapped backend is mounted?

I don’t hit the issue as I stopped the rclone copy piece and just use the cache-tmp-upload to handle all my items moving to my GD.

How do you manually expire a dir?

If you are running with the --rc option, you can use something like this:

rclone rc cache/expire remote=/TV/Someshow


If you are using it for plex there is a plex_autoscan script that can help and handle the cache expiration when a new show/movie is downloaded from sonarr/radarr. That is what I use to get around the issue for tv show/movies for now. For stuff not from radarr/sonarr I just manually expire the parent cache or manually create the directory.

This morning’s beta has a fix for this. Just loaded it and so far it seems to be working. Thanks to all who worked on this. Testing some more today.

You have @B4dM4n to thank for that :smile:

Thanks @B4dM4n. Seems to be working very well.
