New feature: CACHE

Yes, when that happened the crypt_gdrive remote (connected to the cache remote, cache_gdrive) was mounted, and I tried to move files over to the crypt_gdrive remote.
So, this is a “normal” reaction?

What would be the workaround to upload (encrypted) files without having to unmount?


Yes.

You could, as you suggested, make a new mount without the cache (so crypt -> gdrive) and use that.

Alternatively you could just move files directly into the mount.
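
A minimal sketch of the first option, assuming a second crypt remote (here called crypt_direct) configured straight against gdrive and a spare mount point (both names hypothetical):

```
# Hypothetical second crypt remote ("crypt_direct") configured against gdrive
# directly, i.e. without cache_gdrive in the middle, mounted on a spare path.
rclone mount crypt_direct: /mnt/upload --allow-other &

# Files moved here are encrypted and uploaded without going through the cache.
mv /path/to/local/files/* /mnt/upload/
```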

You could, as you suggested, make a new mount without the cache (so crypt -> gdrive) and use that.

Tried that and it uploads, but the files don’t show up in my encrypted mount that goes through the cache (at least not so far). I assume the mount will be updated as soon as the cache info age expires, right?

Alternatively you could just move files directly into the mount.

That doesn’t really work for me; actually, it never did.
I am using WinSCP, and whenever I try to move anything to the mount I get the following error:

General failure (server should provide error description).
Error code: 4
Error message from server: Failure
Common reasons for the Error code 4 are:

  • Renaming a file to a name of already existing file.
  • Creating a directory that already exists.
  • Moving a remote file to a different filesystem (HDD).
  • Uploading a file to a full filesystem (HDD).
  • Exceeding a user disk quota.

Never found a solution to this issue. Do you have a suggestion?

Btw, I just looked into the log file of my mount and saw that this shows up from time to time.

2018/01/20 11:05:07 DEBUG : : Statfs:
2018/01/20 11:05:07 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2018/01/20 11:05:07 DEBUG : : Statfs:
2018/01/20 11:05:07 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2018/01/20 11:13:24 DEBUG : /: Attr:
2018/01/20 11:13:24 DEBUG : /: >Attr: attr=valid=1m0s ino=0 size=0 mode=drwxr-xr-x, err=<nil>

Is this anything to worry about?

Yes you’ll have to either wait for the expiry or refresh the cache.

Adding --vfs-cache-mode writes to the mount will make it more “compatible” - you could try that.
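
For illustration, a mount line with that flag added; the remote name and mount point are the ones from this thread, the rest is assumed:

```
# Same cached crypt mount, with writes buffered through the local VFS cache so
# renames/moves from tools like WinSCP behave more like a normal filesystem.
rclone mount crypt_gdrive: /media/cry --vfs-cache-mode writes --allow-other &
```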

That looks normal; just your OS polling the FUSE mount to see if it is still alive, I expect.

That does not improve compatibility in my case.
Tried it with these flags:

  1. --cache-writes
  2. --vfs-cache-mode writes
  3. --cache-writes & --vfs-cache-mode writes

Every time, the same error.

It has been 24h now and my cache mount still doesn’t show the new files. File age is set to 6h, so it should already have updated the cache, right? How can I refresh the cache manually?

Remounting does not help either.

An excerpt from my log:

2018/01/21 10:49:52 DEBUG : cache_gdrive: wrapped gdrive: at root
2018/01/21 10:49:52 INFO : cache_gdrive: Cache DB path: /root/.cache/rclone/cache-backend/cache_gdrive.db
2018/01/21 10:49:52 INFO : cache_gdrive: Cache chunk path: /root/.cache/rclone/cache-backend/cache_gdrive
2018/01/21 10:49:52 INFO : cache_gdrive: Chunk Memory: true
2018/01/21 10:49:52 INFO : cache_gdrive: Chunk Size: 5M
2018/01/21 10:49:52 INFO : cache_gdrive: Chunk Total Size: 1G
2018/01/21 10:49:52 INFO : cache_gdrive: Chunk Clean Interval: 1m0s
2018/01/21 10:49:52 INFO : cache_gdrive: Workers: 4
2018/01/21 10:49:52 INFO : cache_gdrive: File Age: 6h0m0s
2018/01/21 10:49:52 INFO : cache_gdrive: Cache Writes: true
2018/01/21 10:49:52 INFO : Encrypted drive 'crypt_gdrive:': Modify window is 1ms
2018/01/21 10:49:52 DEBUG : Encrypted drive 'crypt_gdrive:': Mounting on "/media/cry"
2018/01/21 10:49:52 INFO : cache: deleted (0) chunks
2018/01/21 10:49:52 NOTICE: Encrypted drive 'crypt_gdrive:': poll-interval is not supported by this remote

I wonder why it says “poll-interval is not supported by this remote”? Is that normal?

🙁

There should be a corresponding error in the mount log file - can you post that? That will help debugging. This is probably to do with file system incompatibilities with Windows rather than anything to do with the cache.

Hmm not sure what is happening there. You can always use --cache-db-purge to purge the entire cache metadata.
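
A rough sketch of a manual refresh, assuming the mount point used earlier in this thread:

```
# Unmount, then remount with the cache metadata purged so listings are rebuilt
# from the remote on first access.
fusermount -u /media/cry
rclone mount crypt_gdrive: /media/cry --cache-db-purge --allow-other &
```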

Yes that is normal - not all remotes support all features.

This is what the debug shows:

2018/01/22 13:56:48 DEBUG : upload/: Lookup: name="log.txt"
2018/01/22 13:56:48 DEBUG : upload/: >Lookup: node=<nil>, err=no such file or directory
2018/01/22 13:56:48 DEBUG : upload/: Lookup: name="log.txt"
2018/01/22 13:56:48 DEBUG : upload/: >Lookup: node=<nil>, err=no such file or directory
2018/01/22 13:58:52 DEBUG : /: Lookup: name="log.txt"
2018/01/22 13:58:52 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2018/01/22 13:58:52 DEBUG : /: Lookup: name="log.txt"
2018/01/22 13:58:52 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory

I don’t know why it says “no such file or directory” there. On the first try it should be moving the file log.txt to the upload folder in the remote, and on the second try it should have moved the file to the root of the remote.

Regarding the second issue: purging the cache did actually solve the problem. 🙂

Can you reproduce the problem for me with a sequence of commands in a .bat script? If you can do that, then please make a new issue on GitHub with it in and we can have a go at fixing it.

Thanks!

Are there any commands I can try to speed up start times? It feels 2-3 times slower than plexdrive.


I will have a look at it and report back. Will probably take me till next weekend though.

Seems to be normal with cache.
See Rclone 1.39 cache mount vs. plexdrive

Would be nice to speed up.

Would it be possible to add API polling results to the regular -v log? It’d be nice to see timestamps of when it’s picking up changes on the remote, for debugging. It’ll be harder to read with the encrypted paths, but unless there’s a way to wrap crypt decoding around the log output, it’ll do!

As an aside, rclone v1.39-259-g4924ac2fβ isn’t showing any polling activity, so new files never show up; this behaviour was present in v210 too.

1.4 is almost frozen, but I will gladly add that in a new beta after the release. @spicypixel, please open an issue on GitHub so we don’t forget about it.

Sure, done: https://github.com/ncw/rclone/issues/2150

Am I to assume your internal testing shows Google Drive changes are being polled correctly? If it’s a known bug, I hope it’s not too difficult to figure out the regression from how it was working in v175.

Yep. And my own personal mount should be using it too.
Isn’t it working for you? It isn’t obvious from this thread that it would be an issue with that functionality. What are your symptoms, or what isn’t working?


Added a file to Google Drive from the web UI.

The log claims it’s been detected and purges the information from the cache; running ls on the cache mount repeatedly via watch for an hour, no file shows.

The log below seems to indicate the changes are detected, and timestamp-wise it was within a minute of the upload completing; it just never shows on the filesystem. I have dir-cache-time=1m set to try to rule that out.
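
For reference, roughly how the mount is invoked; the remote name is from the log below, the mount point and extra flags are assumed:

```
# Cached Google Drive remote with a short kernel directory cache, so stale
# directory listings should age out of the FUSE layer within a minute.
rclone mount media-cached: /mnt/media --dir-cache-time 1m --allow-other -v &
```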

File is placed at the root of the mount.

rclone v1.39-270-g38d9475aβ

  • os/arch: linux/amd64
  • go version: go1.10
2018/03/17 22:10:13 DEBUG : Cache remote media-cached:: ignoring change notification for non cached entry co65k72tnie4tivg55un33iv2nfd82uf0fl91lrb4fsqkfbp5as0
2018/03/17 22:10:13 DEBUG : Google drive root 'Media': All changes were processed. Waiting for more.
2018/03/17 22:10:14 DEBUG : Google drive root 'Media': Checking for changes on remote
2018/03/17 22:10:14 DEBUG : : Statfs: 
2018/03/17 22:10:14 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>
2018/03/17 22:10:14 DEBUG : Cache remote media-cached:: notify: expiring cache for 'co65k72tnie4tivg55un33iv2nfd82uf0fl91lrb4fsqkfbp5as0'
2018/03/17 22:10:14 DEBUG : Backups: forgetting directory cache
2018/03/17 22:10:14 DEBUG : import: forgetting directory cache
2018/03/17 22:10:14 DEBUG : ISOs: forgetting directory cache
2018/03/17 22:10:14 DEBUG : Movies: forgetting directory cache
2018/03/17 22:10:14 DEBUG : System Volume Information: forgetting directory cache
2018/03/17 22:10:14 DEBUG : TV: forgetting directory cache
2018/03/17 22:10:14 DEBUG : 4K: forgetting directory cache
2018/03/17 22:10:14 DEBUG : Fitness: forgetting directory cache
2018/03/17 22:10:14 DEBUG : : forgetting directory cache
2018/03/17 22:10:14 DEBUG : Cache remote media-cached:: ignoring change notification for non cached entry co65k72tnie4tivg55un33iv2nfd82uf0fl91lrb4fsqkfbp5as0
2018/03/17 22:10:14 DEBUG : Google drive root 'Media': All changes were processed. Waiting for more.

Remus, awesome job on this backend. It was much needed for Google Drive Plex users.

One question:

With offline uploading enabled, if we clear the cache via RC (i.e. rclone rc cache/expire remote=/), will this clear the upload queue as well?
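
For context, a sketch of that rc call, assuming the mount was started with --rc so the remote control API is reachable (withData shown only as an example):

```
# Expire the whole cached tree; withData=true also drops the cached chunks.
rclone rc cache/expire remote=/ withData=true
```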

I see that --cache-db-purge does (which seems dangerous if there are, say, 100GB+ of files that now have their names obfuscated in the temp folder).

What is the best practice here for someone who might bring in 50+ GB at a time for upload?