Data copied via rclone sync disappearing when moved within cloud

I've removed those attributes. You mean info-age in the rclone config, as opposed to another flag in the mount command, correct?

info_age in the config is the same as
--cache-info-age as a flag in the command

All backend flags (i.e. flags that are part of a specific remote and not just part of rclone generally) have both a config variant and a flag variant. If you use both (which you should normally avoid, to prevent confusion) the flag will override the config. Note that the format is a little different. The docs always tell you what both variants are if you look closely.
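For example, the cache-backend's info age can be set either way; note the underscore vs. dash difference (a sketch; `gcache` is a hypothetical cache remote name):

```ini
# In rclone.conf -- backend options use underscores:
[gcache]
type = cache
remote = gdrive:Media
info_age = 48h

# On the command line the same option becomes a dashed flag
# prefixed with the backend name, and overrides the config value:
#   rclone mount gcache: /mnt/gdrive --cache-info-age 48h
```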

General rclone flags like say... --fast-list or --include currently have no way of being set via config (yet... this may change at some point according to NCW, although probably not very soon).

The gist of this advice is to understand that you have two caching layers because you use 2 cache systems. The VFS is used any time you have a mount, but you have also added the cache-backend. Because the cache-backend is the layer closest to the cloud here, it will be getting the updates, and thus it is the one that should be used pretty much exclusively (i.e. the VFS's expiry timers should be low, and at least no higher than the defaults). There is no benefit to using 2 caching layers. Because your VFS layer here was holding on to info for a long time without asking for updates from the cache-backend, it was becoming out-of-date and not picking up the changes - thus rendering files "invisible" to you.

The info-age of the cache-backend does the same job as both the attr-timeout and dir-cache-time flags in the VFS.
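Concretely, the advice above might look something like this (a sketch; the remote name is an assumption):

```shell
# Let the cache-backend own the caching: give it a long info age,
# and keep the VFS expiry timers at (or below) their defaults
# so the VFS keeps asking the cache-backend for fresh listings.
rclone mount gcache: /mnt/gdrive \
  --cache-info-age 48h \
  --dir-cache-time 5m \
  --attr-timeout 1s
```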

Ask for further clarifications as needed :slight_smile:

Thank you. That helps clarify this. I've researched and found some differing advice on this, so would you recommend using just the VFS cache?

Well, you don't need it to fix this specific problem if you just fix the settings I mentioned.

But do I recommend the cache-backend? It really depends on your specific use-case(s) - which I don't think you have elaborated upon.

The biggest benefit of the cache-backend is that it is (currently, until further VFS improvements) the only way to get a read-cache with rclone (VFS only does write-cache).
However it also brings with it several downsides.
I can't answer if that tradeoff is worth it or not without knowing what you are trying to optimize for.
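For reference, the VFS write cache mentioned here is enabled with a flag like this (a sketch; remote and mountpoint are assumptions):

```shell
# VFS caching in "writes" mode buffers uploads on local disk first;
# it does not keep a persistent read cache of downloaded data.
rclone mount gdrive: /mnt/gdrive --vfs-cache-mode writes
```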

All I can say is that the cache-backend is a particular tool for a particular goal - not a "makes everything better, so I should add it because it exists". A lot of people make that mistake.
Currently I do not use it (for very general "everything" storage). Nor does Animosity (for Plex).

I will be very glad to elaborate (in excruciating length lol) on the details if you wish - but it would be kinda pointless without knowing your goals.

Haha fair enough...and the more detail the better. My primary use would be as storage for Plex, so the idea would be to have Plex read from it, and Sonarr/Radarr/Bazarr write to it.

I currently use a NAS for storage, hence the rclone sync to move existing data off it. My approach so far has been to download to the NAS, then copy over, but I'm trying to see how feasible direct downloads are. It sounds like rclone + Gdrive should be usable for that purpose.

cache-backend will only affect reads.

For Plex - well, it was sort of designed with Plex in mind. It does give you some advantages in that regard - in that it makes a weaker connection a bit more robust against stutter by prefetching data well ahead of the read-point. I imagine it could also be useful for alleviating some of the load from some of Plex's more aggressive metadata scanning if you used a certain RC command to prefetch the first chunk of all files (as that's where a lot of metadata typically resides).
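The RC command alluded to is presumably the cache-backend's cache/fetch, which can warm specific chunks; a sketch (the file path is hypothetical):

```shell
# Prefetch chunk 0 of a file into the cache-backend, so Plex's
# metadata scans hit the local cache instead of the cloud.
rclone rc cache/fetch chunks=0 file=example/episode.mkv
```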

That said - Animosity is the local Linux/Plex guru here, and he decided against using it in the end, so it is not actually needed if you configure Plex correctly.
I do not use Plex myself yet so I have limited knowledge of the specifics there - but here is a whole thread dedicated to explaining Animosity's personal setup for rclone+Plex, and it's bound to be a goldmine of useful info:

I think ultimately it boils down to "do you really need a read-cache?". If you have a good connection speed (i.e. more than sufficient to carry as many streams as you need) you probably don't. You have 10TB download/day and nearly a million API calls in that same timeframe to use. If you can make the software behave nicely this is usually more than enough - and it allows rclone direct access to efficiently read exactly what is needed at any given time rather than involving another abstracted layer of complexity (and potential inefficiencies).

Unless you are trying to make your Plex-setup a large-scale operation to serve dozens of people at the same time... at that point a massive local cache for all the most requested reads starts to make more and more sense...

I made the suggested changes. Here's the new rclone.conf:

type = drive
client_id = (removed)
scope = (removed)
token = {"access_token":(removed),"token_type":"Bearer","refresh_token":(removed),"expiry":"2019-12-04T11:26:49.719313765-08:00"}

type = crypt
remote = abhiplex:/Media/crypt
filename_encryption = standard
directory_name_encryption = true
password = (removed)
password2 = (removed)

I have the old cache configs in there, but they shouldn't be in use if I have everything pointing correctly

Mount command:

rclone mount gcryptdirect:/ /mnt/gdrive --umask 000 --allow-other --attr-timeout 1000h --buffer-size 32M --dir-cache-time 1000h --poll-interval 15s --timeout 1h --rc --rc-addr --log-level INFO --log-file /srv/rclone/logs/rclone.log

Moving files using rclone sync:

rclone sync /mnt/media/TV\ Shows/Roseanne/ gcryptdirect:/tmp/Roseanne --drive-chunk-size 32M --transfers 1 --checkers 1 --create-empty-src-dirs -v --log-file /mnt/media/roseannerclonesync.txt &

I'm still seeing the same issue...moving a completed folder from /tmp to /TV Shows on the remote, through the mountpoint (with just the mv command), results in /TV Shows/Roseanne being empty, with just the season folders.

Remounting fixes it, but a vfs/refresh doesn't:

rclone rc vfs/refresh recursive=true --rc-addr

Am I still missing something?

Can you peel it back, use 1 file, and walk through the example? So far you only seem to be sharing that something doesn't work, without any steps.

Take 1 file.

ls local path
rclone move it to your remote
rclone ls remotepath
ls on the mounted drive
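The steps above might look like this (paths and remote name are assumptions based on this thread):

```shell
ls /mnt/media/test/                          # 1. confirm the file exists locally
rclone move /mnt/media/test/ gcryptdirect:/tmp/test -v   # 2. move it to the remote
rclone ls gcryptdirect:/tmp/test             # 3. confirm the remote sees it
ls /mnt/gdrive/tmp/test                      # 4. confirm the mount sees it
```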

Good point, thanks for the pointer.

Created a new directory+subdirectory on my NAS, with one file in it, to mirror the level of nesting, to move.

abhishek@dragon:/mnt/media$ ls test/test2


rclone move /mnt/media/test/ gcryptdirect:/tmp/test --create-empty-src-dirs -v --log-file /mnt/media/rclonetestmove.txt

rclone sees it:

abhishek@dragon:/mnt/media$ rclone ls gcryptdirect:/tmp
   11 test/test2/test.txt

Visible on the mountpoint:

abhishek@dragon:/mnt/gdrive/tmp$ ls test/test2

Moved to the new folder, where I've been seeing this issue:

abhishek@dragon:/mnt/gdrive/tmp$ mv test/ ../TV\ Shows/

File visible:

abhishek@dragon:/mnt/gdrive/TV Shows/test/test2$ ls -al
total 0

This is where the issue occurs

 abhishek@dragon:/mnt/media$ rclone ls gcryptdirect:/TV\ Shows/test
   11 test2/test.txt

The data exists, but isn't visible at the mountpoint

But, when repeated with the original file being just


With one fewer level of nesting, I don't see the issue.

Can you put the mount in debug and share the log with the issue?


Usually on Gdrive - the reason you can keep timeouts high is that polling will update the info for you (rather than the timeouts forcing a re-fetch of the listings).
So this must be failing somehow... It would be very useful if you could use debug logging (-vv) and look for "ChangeNotify" lines, which should tell us what polling picks up.
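To pull out those polling lines, something like this would do (log path taken from the mount command above):

```shell
# ChangeNotify lines show what Gdrive polling reported to the mount
grep -i changenotify /srv/rclone/logs/rclone.log
```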

But also - doing a vfs/refresh should definitely pick it up even if polling did not.
I am sure that this is a cache problem, but it does not make sense that you should see different results from a dismount/remount versus a vfs/refresh.

You could - just for testing - try just disabling the long timeouts and verify that this actually fixes the problem before we dig deeper.
Beyond this I think we need to have a look at the debug log to find out what is happening here.

Please check what version you are on; to check easily, run rclone version.
I know for a fact that there have been several optimizations and bugfixes for polling recently. If you are not on the latest stable, we should check that the issue exists there before anything else.

is the log file with the issue, with debug logging enabled.

2019/12/04 14:31:08 is mostly it scanning every TV show directory, so can be skipped I think, but I've included it for completeness.

abhishek@dragon:/srv/rclone/logs$ rclone version
rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10

Running an rclone refresh:

rclone rc vfs/refresh recursive=true --rc-addr

did not fix this. Here's what's in the logfile:

2019/12/04 14:47:52 DEBUG : rc: "vfs/refresh": with parameters map[recursive:true]
2019/12/04 14:47:52 DEBUG : : Reading directory tree
2019/12/04 14:47:59 DEBUG : Google drive root 'Media/crypt': Checking for changes on remote
2019/12/04 14:48:00 DEBUG : : Reading directory tree done in 8.142485578s
2019/12/04 14:48:00 DEBUG : rc: "vfs/refresh": reply map[result:map[:OK]]: <nil>

I will test disabling the long timeouts now and report back ASAP

Seems to be just part of the log. Can you share the whole log?

You should also update to the latest version so we can test on the same version.

Here's the log:

I will update to the same version, test the same mount command, then if it persists, remove the timeouts and try again

I'd be curious on the new version.

I can see the rename in the log and that looks right as well pointing to the new location.

I can't reproduce the nesting issue on rclone v1.50.2 on my setup anyway.

Interesting results:

abhishek@dragon:/mnt/gdrive/TV Shows/test/test2$ ls -al
total 0
abhishek@dragon:/mnt/gdrive/TV Shows/test/test2$ less test3.txt

So it's the directory listings missing, but the file is showing up.

abhishek@dragon:/mnt/gdrive/TV Shows/test/test2$ rclone version
rclone v1.50.2
 - os/arch: linux/amd64
 - go version: go1.13.4


And, as expected, removing the dir-cache-time and attr-timeout solves it

The default for dir-cache-time is 5 minutes, did you wait 5 minutes or did it just work?

I'd think it's probably the attr timeout.

It just worked.