I copied a folder about 40G in size over to /tmp on Google Drive, using an rclone sync command pointed at the crypt remote. After verifying the files were present, I moved it from /tmp to its final location on GDrive, at which point only the folder itself appeared, with none of the files inside it. The rclone logs show nothing from this time, except the following line:
2019/12/01 18:19:38 INFO : TV Shows/Warehouse 13: received cache expiry notification
The reason I set up two sets of caches/crypts is so that one can be used by Plex for reading, while the other is used for syncing data. When I had tried just one cache and copied directly, I ran into similar issues with files being copied incorrectly.
I'd like you to still confirm that re-mounting makes the files show up...
EDIT: I see you just did - thanks.
But in any case, you can't have your cache set up like that.
You are using both a VFS layer and a cache-backend layer here. You cannot then set the VFS timeouts (--dir-cache-time 1000h --attr-timeout 1000h) to be higher than the cache-backend's info-age.
That is quite sure to cause some wonky cache glitches.
The VFS timeouts should be left at their defaults (at most).
That will cause the VFS to ask the cache backend for an update as needed - which can happen locally.
Control your caching time with info-age as long as you still use the cache-backend.
If you fix this I suspect it will solve your problem.
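As a concrete sketch of the fix (the remote name and mount path are placeholders; 5m and 1s are, as far as I recall, the VFS defaults, while the long retention moves to the cache-backend):

```
# VFS timeouts at (or below) their defaults, letting the
# cache-backend's info-age do the long-term caching instead.
rclone mount gcache-crypt: /mnt/media \
  --dir-cache-time 5m \
  --attr-timeout 1s \
  --cache-info-age 1000h
```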
info_age in the config is the same as --cache-info-age as a flag on the command line.
All backend flags (ie. flags that belong to a specific remote type rather than to rclone generally) have both a config variant and a flag variant. If you use both (which you should normally avoid, to prevent confusion) the flag will override the config. Note that the format is a little different. The docs always tell you what both variants are if you look closely.
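To illustrate both variants side by side (the remote name here is made up):

```
# Config variant, in rclone.conf:
[gcache]
type = cache
remote = gdrive:
info_age = 48h

# Flag variant on the command line - note the cache- prefix
# and dashes instead of underscores:
rclone mount gcache: /mnt/media --cache-info-age 48h
```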
General rclone flags like say... --fast-list or --include currently have no way of being set via config (yet... this may change at some point according to NCW, although probably not very soon).
The gist of this advice is to understand that there are two caching layers because you use two cache systems. The VFS is used any time you have a mount, but you have also added the cache-backend. Because the cache-backend is the layer closest to the cloud here, it will be getting updates, and thus it is the one that should be used pretty much exclusively (ie. the VFS's expiry timers should be low, and at least no higher than the defaults). There is no benefit to using two caching layers. Because your VFS layer was holding on to info for a long time without asking the cache-backend for updates, it was becoming out-of-date and not picking up the changes - thus rendering files "invisible" to you.
The info-age of the cache-backend does the same job as both the --attr-timeout and --dir-cache-time flags in the VFS.
Well, you don't need it to fix this specific problem if you just fix the settings I mentioned.
But do I recommend the cache-backend? It really depends on your specific use-case(s) - which I don't think you have elaborated on.
The biggest benefit of the cache-backend is that it is (currently, until further VFS improvements) the only way to get a read-cache with rclone (VFS only does write-cache).
However it also brings with it several downsides.
I can't answer if that tradeoff is worth it or not without knowing what you are trying to optimize for.
All I can say is that the cache-backend is a particular tool for a particular goal - not a "makes everything better, so I should add it because it exists" tool. A lot of people make that mistake.
Currently I do not use it (for very general "everything" storage). Nor does Animosity (for Plex).
I will be very glad to elaborate (in excruciating length lol) on the details if you wish - but it would be kinda pointless without knowing your goals.
Haha fair enough...and the more detail the better. My primary use would be as storage for Plex, so the idea would be to have Plex read from it, and Sonarr/Radarr/Bazarr write to it.
I currently use a NAS for storage, hence the rclone sync to move existing data off it. My approach so far has been to download to the NAS, then copy over, but I'm trying to see how feasible direct downloads are. It sounds like rclone + Gdrive should be usable for that purpose.
For Plex - well, it was sort of designed with Plex in mind. It does give you some advantages in that regard, in that it makes a weaker connection a bit more robust against stutter by prefetching data well ahead of the read-point. I imagine it could also be useful for alleviating some of the load from Plex's more aggressive metadata scanning if you used a certain RC command to prefetch the first chunk of all files (as that's where a lot of metadata typically resides).
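I believe the RC command in question is cache/fetch, which can pre-warm chosen chunks of a file - a sketch, with a placeholder file path, assuming the mount is running with --rc enabled:

```
# Fetch the first chunk (chunk 0) of a file into the cache-backend,
# where a lot of media metadata typically lives.
rclone rc cache/fetch chunks=0 file=TV\ Shows/Warehouse\ 13/S01E01.mkv
```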
That said - Animosity is the local Linux/Plex guru here, and he decided against using it in the end, so it is not actually needed if you configure Plex correctly.
I do not use Plex myself yet so I have limited knowledge of the specifics there - but here is a whole thread dedicated to explaining Animosity's personal setup for rclone+Plex, and it's bound to be a goldmine of useful info:
I think ultimately it boils down to "do you really need a read-cache?". If you have a good connection speed (ie. more than sufficient to carry as many streams as you need) you probably don't. You have 10TB download/day and nearly a million API calls in that same timeframe to use. If you can make the software behave nicely this is usually more than enough - and it allows rclone direct access to efficiently read exactly what is needed at any given time rather than involving another abstracted layer of complexity (and potential inefficiencies).
Unless you are trying to make your Plex-setup a large-scale operation to serve dozens of people at the same time... at that point a massive local cache for all the most requested reads starts to make more and more sense...
I'm still seeing the same issue... moving a completed folder from /tmp to /TV Shows on the remote, through the mountpoint (with just the mv command), results in /TV Shows/Roseanne containing just the season folders, with no files inside them.
Usually on Gdrive - the reason you can keep timeouts high is because polling will update the info for you (rather than the timeouts forcing a re-fetch of the listings).
So this must be failing somehow... It would be very useful if you could use debug logging (-vv) and look for "changenotify" lines, which should tell us what polling picks up.
But also - doing a vfs/refresh should definitely pick it up even if polling did not.
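Something along these lines (the remote name, mount path, and log path are placeholders, and the vfs/refresh call assumes the mount was started with --rc):

```
# Mount with debug logging so polling activity gets recorded:
rclone mount gcache-crypt: /mnt/media -vv --log-file=/var/log/rclone.log --rc

# After moving files, force the VFS to re-read the directory tree:
rclone rc vfs/refresh recursive=true

# Then look for what polling noticed:
grep -i changenotify /var/log/rclone.log
```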
I am sure that this is a cache problem, but it does not make sense that you should get different results from a dismount/remount than from a vfs/refresh.
You could - just for testing - try disabling the long timeouts and verify that this actually fixes the problem before we dig deeper.
Beyond this I think we need to have a look at the debug log to find out what is happening here.
Please check what version you are on.
To check easily, run: rclone version
I know for a fact that there have been several optimizations and bugfixes for polling recently. If you are not on the latest stable we should check that the issue exists there before anything else.
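For example (I believe newer builds also accept --check, which compares your installed version against the latest release):

```
# Print the installed rclone version:
rclone version

# On newer builds, also compare against the latest released version:
rclone version --check
```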