Union with vfs write cache ignores uid/gid/umask/perms

What is the problem you are having with rclone?

An rclone union mount between a gdrive mount (to utilize --vfs-cache-mode full for the remote) and a local fs, using --uid, --gid, --umask, --file-perms and --dir-perms, correctly displays all of that in the FUSE mount, but writes back to the local fs as root with 777 perms. Is this expected behavior?

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.0-beta.6696.a6f6a9dcd

  • os/version: unknown
  • os/kernel: 4.4.180+ (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20rc3
  • go/linking: static
  • go/tags: none

(this also happened on 1.60)

Which cloud storage system are you using? (eg Google Drive)

google drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

gdrive mount:

sudo /usr/bin/rclone mount gsuite-crypt: /volume3/SMB/gsuite_crypt --config /var/services/homes/vena/.config/rclone/rclone.conf --uid 1026 --gid 100 --umask 2 --file-perms 664 --dir-perms 774 --allow-other --temp-dir /volume3/rclone//tmp --log-file /var/log/rclone/rc.log --log-level NOTICE --copy-links --timeout 1h --max-backlog 2000000 --stats-file-name-length 0 --filter-from /var/services/homes/vena/rclone-filter-mounts.txt --union-action-policy all --union-create-policy epff --user-agent rcvideo/v1.0 --cache-dir /volume3/rclone/ --cache-chunk-total-size 8G --cache-chunk-size 8M --cache-workers 12 --drive-stop-on-upload-limit --drive-chunk-size 512M --drive-pacer-min-sleep 20ms --drive-pacer-burst 200 --drive-v2-download-min-size 100M --attr-timeout 1m --daemon --log-file /var/log/rclone/rc-mounts.log --log-level DEBUG --vfs-cache-mode full --vfs-cache-max-size 78G --vfs-cache-max-age 43200s --vfs-read-chunk-size 10M --vfs-read-chunk-size-limit 600M --vfs-write-back 1m --vfs-read-ahead 1G --vfs-cache-poll-interval 10m

union mount:

sudo /usr/bin/rclone mount video-union2: /volume3/SMB/union --config /var/services/homes/vena/.config/rclone/rclone.conf --uid 1026 --gid 100 --umask 2 --file-perms 664 --dir-perms 774 --allow-other --temp-dir /volume3/rclone/tmp --log-file /var/log/rclon/rc.log --log-level NOTICE --copy-links --timeout 1h --max-backlog 2000000 --stats-file-name-length 0 --filter-from /var/services/homes/vena/rclone-filter-mounts.txt --union-action-policy all --union-create-policy epff --user-agent rcvideo/v1.0 --cache-dir /volume3/rclone/ --cache-chunk-total-size 8G --cache-chunk-size 8M --cache-workers 12 --drive-stop-on-upload-limit --drive-chunk-size 512M --drive-pacer-min-sleep 20ms --drive-pacer-burst 200 --drive-v2-download-min-size 100M --attr-timeout 1m --daemon --vfs-cache-mode writes --vfs-cache-max-size 10G

issue:

$ touch /volume3/SMB/union/test
$ ls -al /volume3/SMB/union/
total 0
drwxrwxr--  1 vena users  0 Jan 20 22:39 .
drwxrwxrwx+ 1 root root  72 Nov 15 14:35 ..
[...]
-rw-rw-r--  1 vena users  0 Jan 20 22:44 test

[...wait a few seconds]
$ ls -al /volume2/Videos/
total 0
drwxrwxrwx+ 1 vena users   116 Jan 20 22:45 .
drwxr-xr-x  1 root root    190 Jan 20 18:40 ..
[...]
-rwxrwxrwx+ 1 root root      0 Jan 20 22:44 test

The rclone config contents with secrets removed.

[gsuite]
type = drive
client_id = 
client_secret = 
scope = drive.file
root_folder_id = 
token = 
team_drive = 
chunk_size = 8M
transfers = 2
checkers = 4
tpslimit = 10
bwlimit = off:off
pacer_burst = 200
pacer_min_sleep = 20ms
chunk_total_size = 8G
chunk_no_memory = false
workers = 12

[gsuite-crypt]
type = crypt
remote = gsuite:/crypt
password = 
password2 = 
bwlimit = off:off
pacer_burst = 200
pacer_min_sleep = 20ms
chunk_total_size = 8G
chunk_size = 8M
chunk_no_memory = false
workers = 12

[video-union2]
type = union
upstreams = /volume2/Videos /volume3/SMB/gsuite_crypt/volume2/Videos
create_policy = epff
chunk_total_size = 8G
chunk_size = 8M
chunk_no_memory = false
workers = 12
action_policy = epff
search_policy = epff

A log from the command with the -vv flag

https://gist.githubusercontent.com/vena/d9b6af0d1d6f047fbb8c7a3bbee4ba4f/raw/98c91186ddef9a66c0420ed6e191e08ec988ed1d/rclone-mount.log

You have an odd rclone.conf as you have a bunch of cache backend stuff mixed in with your drive remotes. Did you manually edit that/add those lines?

That's just a local disk?

That cache and drive tuning stuff in the conf is mostly overridden by the mount switches and not used. And yes that is the first upstream in the union, purely local. The second upstream in the union is a local mount of the gsuite-crypt definition.

So I'd imagine the issue is that you are using a local file system, and generally cloud remotes have no umask concept.

It's easy to test, as rclone is really just setting the umask visually; cloud remotes have no concept of it.

rclone mount /home/felix/blah /home/felix/test --umask 222 -vvv

And my user umask is:

[felix@gemini test]$ umask
0002

So if I make a file on that mount, it will show the umask you've set on the mount, as it's a visual thing.

[felix@gemini test]$ touch testing
[felix@gemini test]$ ls -al /home/felix/test
total 5
dr--r----x   1 felix felix    0 Jan 21 10:57 .
drwx------. 11 felix felix 4096 Jan 21 10:57 ..
-r--r-----   1 felix felix  601 Jan 21 10:57 hosts
-r--r-----   1 felix felix    0 Jan 21 10:58 testing

But the actual write to the backing filesystem is done by my user, which is an OS-level thing, so it'll use my user's umask.

[felix@gemini test]$ ls -al /home/felix/blah/testing
-rw-rw-r-- 1 felix felix 0 Jan 21 10:58 /home/felix/blah/testing

It's going to use the umask 002.
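As a quick sanity check of the arithmetic, plain shell with no rclone involved will show the same thing: new files start from a base mode of 0666 and the process umask masks bits off (the filename below is just illustrative, and `stat -c` assumes GNU coreutils):

```shell
# Demonstrate how the process umask shapes new files on a local fs:
# files start from mode 0666, and the umask clears the masked bits.
umask 002                  # clear the other-write bit for this shell
touch umask_demo
stat -c '%a' umask_demo    # 0666 & ~0002 = 664
rm umask_demo
```

The printed mode is 664, matching the `-rw-rw-r--` lines in the listings above.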

Hope that helps as the union doesn't really impact it.

Those lines do nothing in your config as you aren't using the cache backend, so you should just remove them.

That seems strange. The create policy has only the purely local upstream getting the write; it's naturally the first to respond. The only way new files get to the drive upstream is with a manual sync.

# [create file in the union mount]
$ touch /volume3/SMB/union/test2

# [check for test2 on the mount of the remote]
$ ls -al /volume3/SMB/gsuite_crypt/volume2/Videos/
total 0
drwxrwxr-- 1 vena users 0 Jan 20  2022 .
drwxrwxr-- 1 vena users 0 Jan 10  2022 ..
-rw-rw-r-- 1 vena users 0 Oct 27 15:59 .anchor-remote

# [check for file on purely local fs]
vena@forest:~$ ls -al /volume2/Videos/
total 0
drwxrwxrwx+ 1 vena users   118 Jan 21 11:07 .
drwxr-xr-x  1 root root    190 Jan 20 18:40 ..
-rw-rw-r--  1 vena users     0 Feb  7  2022 .anchor-local
[…]
-rwxrwxrwx+ 1 root root      0 Jan 21 11:07 test2

Since the union is a mount of a local fs and a local mount of a remote that also uses the uid/gid switches (not the actual remote), I'd think rclone always sees owners and permissions in the two union upstreams. It writes as the mounting user, root, to the VFS cache, which doesn't seem too unusual; but when the VFS writes back to persistent storage, the mount flags for uid/gid/etc. are lost.

The mount commands for umask/UID/GID are visual things and nothing more.

The umask/GID/UID of the user is the one that is controlling what is being written on the local file system.

Rclone is designed for cloud remotes and cloud remotes have no concept of those Linux/Unix like things so when you mount, rclone gives you the option to visually alter them/change them as they don't exist in cloud remotes.
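Ownership works the same way: the kernel assigns a new file's owner from the writing process's effective uid, regardless of any rclone flag. You can check this with plain shell, no rclone involved (filename illustrative, GNU `stat` assumed):

```shell
# The kernel records the creating process's effective uid as the
# file's owner; no mount flag changes what lands on a local disk.
touch owner_demo
stat -c '%U' owner_demo    # prints the user running this shell
rm owner_demo
```

Since your mounts run under sudo, the VFS write-back process is root, which is why the files land on /volume2/Videos owned by root.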

Ignore the union and just test on a locally mounted file system, as I shared that output to show you the detail.

Wow, so yeah, I'm way off here. Somehow I got the impression the flags would affect writes, at least with local upstreams in a union.

Maybe I need to think about using something else for the union of rclone's drive mount and the local fs, but in the meantime, reworking this scheme so the mounts run as the user I want new files to be owned by solves the problem. Thanks!
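One way to pin the writing user is to run the mount as that user instead of root, e.g. via a systemd unit. A minimal sketch, assuming the paths and remote names from the commands above (the unit name and trimmed flag set are hypothetical; note that `--allow-other` for a non-root process needs `user_allow_other` in /etc/fuse.conf):

```ini
# /etc/systemd/system/rclone-union.service (hypothetical unit)
[Service]
# Run as the user/group that new files should be owned by
User=vena
Group=users
ExecStart=/usr/bin/rclone mount video-union2: /volume3/SMB/union \
    --config /var/services/homes/vena/.config/rclone/rclone.conf \
    --allow-other --vfs-cache-mode writes --vfs-cache-max-size 10G
```

With the mount process itself running as `vena`, write-back to the local upstream is performed by that uid, so no flag juggling is needed.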

