Testing for new vfs cache mode features

There is a debug log at this point. It could be an INFO log

What I really need is a method of reproduction...

That would be helpful, thanks.

I will see if adding the fuse debug statements reveals anything. The only other option I can think of is to dive into the source of the application that's using the mount, and I would like to avoid that for what is basically a non-issue apart from the error log.

I made an info level log

fs.Infof(item.name, "vfs cache: queuing for upload in %v", item.c.opt.WriteBack)

I also tidied the logging and removed a lot of debugs!

v1.52.2-203-gc1780f8d-vfs-beta on branch vfs (uploaded in 15-30 mins)

Works perfectly. Thanks.

You can probably go ahead and merge the fixes to the master. I will just create a separate thread for the chtimes issue once I have more logs since that didn't get introduced with these changes anyway.

I have done that! All fixes now on their way to the beta

Great

@ncw How well have you tested rename scenarios with this branch?

We have run into an issue where data goes missing when running rsync transfers to an rclone mount using a temp directory:

rsync -e "ssh -p 8022 -i /var/lib/rsync/.ssh/id_rsa -T -o Compression=no -o StrictHostKeyChecking=no -x" --verbose --recursive --human-readable --numeric-ids --progress --whole-file --no-compress --remove-source-files --temp-dir="../.tmp" /tmp/456 "rsync@${RSYNC_DST}:/mnt/rclone" 

With rclone mount options:

--allow-other --dir-cache-time 8760h --drive-use-trash --drive-skip-gdocs --fast-list --uid 65534 --gid 100 --umask 007 --vfs-cache-mode minimal --vfs-cache-poll-interval 1m --vfs-read-chunk-size 32M --vfs-cache-max-age 8760h --vfs-cache-max-size 4G

This is with:

v1.52.1-145-g4d9ad98a-vfs-beta

The files appear in the rclone mount at the final rsync destination, but they do not exist on the remote. The vfs_cache directory shows files in the ".tmp" folder (not the destination folder where they show up in the mount).

I'm working on updating to the latest and getting debug logs to confirm.

Confirmed. When transferring to an rclone mount (using the VFS cache) with the rsync --temp-dir option (which renames files into place once they are complete), the files never actually make it to the remote, but are presented via the mounted directory.

This ultimately creates data loss for us when an rclone mount gets restarted.

Thanks for making an issue. I'll look at it and respond there :slight_smile:

Hello,

I'm not sure I understand, but does the latest beta include the new vfs code? Can we safely download the latest regular beta to benefit from the latest commits on the new vfs cache feature, or should we still use the "vfs branch"?

Thx.

It's in the beta branch

Guys, I found a "problem" here.
With vfs-cache-mode full, are files in the cache directory really created with permission 700?
I have the cache pointed at /mnt/vfs and applied permission 755 recursively, but when rclone creates a new file it ignores my permissions and creates everything as 700.
This bothers me a little, because I use Zabbix to collect the size of this folder and it can't read anything since it doesn't have permission.

Is it possible to change the permissions rclone uses to some other option (775 or 777) when creating files in the cache folder?

My current mount is this:

--allow-other
--poll-interval 1m
--drive-skip-gdocs
--allow-non-empty
--gid=1000
--uid=1000
--umask 002
--fast-list
--tpslimit 10
--cache-dir=/mnt/vfs
--vfs-cache-mode full
--vfs-cache-max-size=300G
--vfs-cache-max-age=720h
--vfs-read-chunk-size 32M
--drive-chunk-size 16M
--vfs-read-chunk-size-limit 1G
--log-file /opt/rclone-beta.log
--buffer-size=16M
--use-mmap
--log-level NOTICE

Yes it is

$ git grep OpenFile vfs/vfscache/
vfs/vfscache/item.go:           fd, err = file.OpenFile(osPath, os.O_CREATE|os.O_WRONLY, 0600)
vfs/vfscache/item.go:   fd, err := file.OpenFile(osPath, os.O_RDWR, 0600)

$ git grep Mkdir vfs/vfscache/
vfs/vfscache/cache.go:  err := os.MkdirAll(parentPath, 0700)
vfs/vfscache/cache.go:  err = os.MkdirAll(parentPathMeta, 0700)
vfs/vfscache/cache.go:          err = os.MkdirAll(parent, 0700)
vfs/vfscache/cache_test.go:func TestCacheOpenMkdir(t *testing.T) {

That would need a source code change, which wouldn't be hard. I'm not sure it is the right thing to do, though. Maybe u+rw g+r, so 0640 for files and 0750 for directories - would that be enough?

Those are for the local files created? Having 700 would break many things, as it should respect the umask of the user running it.

Plex normally runs as a plex user and rclone as a different one, so that's a must-fix for most installations.

I believe that if the user and the group at least had read access, it would already help a lot.
In this case, I had to enable AllowRoot in the Zabbix config to be able to collect information from the folders and files.
The only other workaround I can see is a crontab running chmod every so often, which would not be ideal.

I believe changing the cache creation mode to 755 would solve most future problems.

That causes problems with most Plex setups, as it should just respect the umask setting it's launched with. It should not be hard-coded to anything.

These modes are just for the files in the cache. When they are read via the mount they will have the usual permissions, which are 777 or 666 modified by the umask.

My thinking is that files in the cache are private and shouldn't be modified by other users but group reading is probably OK.

I'm good with that from the mount perspective, but on shared systems, setting it to 755 could allow access to things that were perhaps unintended? I tend to err on the side of being restrictive and let people open things up if they choose, so maybe 700 by default, with the option to change it.

I'm starting to test this on my main mount, since it seems to be in a pretty good spot in terms of stability and my 1TB cache disk arrived from Amazon yesterday evening :slight_smile:

@ncw - are you trying to release this with 1.53?

My mount is primarily read-only, with a delete only happening when media is updated. Testing things out, it seems to be good. I'll have to let it run for a few days and see how things progress.

Great :smiley:

That is my plan unless we find something dreadful. I may push out a 1.52.3 before then - haven't decided yet.

Let me know what happens!

I'll have to spend a little time on it, but scanning a show seems to take insanely long: 45-60 seconds per TV episode, when it's normally maybe 5-10 seconds.

I'm on a pretty default mount at this point:

felix        825       1 30 07:35 ?        00:18:25 /usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 1000h --log-level INFO --log-file /opt/rclone/logs/rclone.log --poll-interval 15s --umask 002 --user-agent animosityapp --rc --rc-addr :5572 --cache-dir=/cache --vfs-cache-mode full --vfs-cache-max-size 500G --vfs-cache-max-age 336h

I'll see if I can test later with some debug logging to see what it is doing. Normally Plex opens a file 3 times and runs mediainfo against the file to get the container info.