(workaround exists) rclone mount for AWS S3 on MS Windows fails for second concurrent user

What is the problem you are having with rclone?

When using rclone mount to mount an AWS S3 bucket on Windows Server 2019, the first Windows user to mount the bucket succeeds. Any subsequent user fails with an error message that, to me, is non-obvious.

Subsequent users can mount other buckets. The problem only happens when a second user logs into the same Windows box and tries to mount the same S3 bucket that the first user has already mounted. I am not using the WinFsp.Launcher infrastructure, so the first user's mount of the bucket is not visible to the second user. Neither having one of the users drop the --network-mode flag nor changing the drive letter changes the result.

Perhaps I missed something in the documentation telling me what I did wrong with my original setup. If you see a mistake other than what my workaround addresses, please tell me. Otherwise, since my searching did not find any mention of this issue, I wanted to document the workaround I discovered. Hopefully the documentation and/or code will be changed so future users won't have to search forum posts.

To have the second user's mount succeed, just change the text after the --volname flag to be unique. So, apparently, whatever namespace the --volname flag uses is scoped to the entire computer rather than per user. I do not know whether this is something within rclone, within WinFsp, or within MS Windows itself.
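For example, in my setup the second mount succeeded once each user's volume name included something unique to that user, such as the Windows user name. A sketch of the idea (assuming the command is launched from cmd.exe, where %USERNAME% expands to the logged-in user's name):

rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name-%USERNAME% --vfs-cache-mode full --cache-dir C:\rclone-caches\my-bucket-name-%USERNAME%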

What is your rclone version (output from rclone version)

rclone v1.55.1
- os/type: windows
- os/arch: amd64
- go/version: go1.16.3
- go/linking: dynamic
- go/tags: cmount

I have had this happen with WinFsp 1.8.20304 and 1.9.21096.

Which OS you are using and how many bits

Windows Server 2019 64-bit on an AWS EC2 instance using an IAM role that allows S3 access.

Which cloud storage system are you using?

Amazon AWS S3 accessed through an AWS EC2 instance with an assigned IAM role that allows S3 access.

The command you were trying to run

rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name --buffer-size 1048576 --vfs-cache-mode full --cache-dir C:\rclone-caches\bucket-name-user-name --vfs-cache-max-size 104857600 --no-modtime

Please note that in the command above, one user (the standard built-in Administrator account) used a --cache-dir of C:\rclone-caches\my-bucket-name-administrator and another user (Bob, who was granted Administrator rights) used a --cache-dir of C:\rclone-caches\my-bucket-name-bob. If the documentation specifies whether cache directories can be shared among multiple mounts, I did not fully understand when that was allowed, so I made sure they were never shared. However, I must admit that I did not pay close attention to that portion of the docs because empty directories are free :slight_smile:
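For clarity, the two commands differed only in --cache-dir; both users passed the identical --volname, which is what made the second mount fail (abbreviated, with the remaining flags as shown above):

rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name --cache-dir C:\rclone-caches\my-bucket-name-administrator ...
rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name --cache-dir C:\rclone-caches\my-bucket-name-bob ...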

The rclone config contents with secrets removed.

[s3]
type = s3
provider = AWS
env_auth = true
region = <redacted>
location_constraint = <redactedButExactlyMatchesRegionValue>
acl = bucket-owner-full-control

A log from the command with the -vv flag

2021/05/12 18:49:15 DEBUG : Using config file from "C:\\Users\\bob\\.config\\rclone\\rclone.conf"
2021/05/12 18:49:15 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "mount" "s3:my-bucket-name" "s:" "--network-mode" "--volname" "\\\\s3\\my-bucket-name" "--buffer-size" "1048576" "--vfs-cache-mode" "full" "--cache-dir" "C:\\rclone-caches\\my-bucket-name-bob" "--vfs-cache-max-size" "104857600" "--no-modtime" "-vv"]
2021/05/12 18:49:15 DEBUG : Creating backend with remote "s3:my-bucket-name"
2021/05/12 18:49:15 INFO  : S3 bucket my-bucket-name: poll-interval is not supported by this remote
2021/05/12 18:49:15 DEBUG : vfs cache: root is "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : vfs cache: metadata root is "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : Creating backend with remote "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : fs cache: renaming cache item "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name" to be canonical "//?/C:/rclone-caches/my-bucket-name-bob/vfs/s3/my-bucket-name"
2021/05/12 18:49:15 DEBUG : fs cache: switching user supplied name "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name" for canonical name "//?/C:/rclone-caches/my-bucket-name-bob/vfs/s3/my-bucket-name"
2021/05/12 18:49:15 DEBUG : Network mode mounting is enabled
2021/05/12 18:49:15 DEBUG : Mounting on "s:" ("\\s3\\my-bucket-name")
2021/05/12 18:49:15 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Mounting with options: ["-o" "attr_timeout=1" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "--VolumePrefix=\\s3\\my-bucket-name"]
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Init:
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: >Init:
2021/05/12 18:49:15 DEBUG : /: Statfs:
2021/05/12 18:49:15 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2021/05/12 18:49:15 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/05/12 18:49:15 DEBUG : /: >Getattr: errc=0
2021/05/12 18:49:15 DEBUG : /: Readlink:
2021/05/12 18:49:15 DEBUG : /: >Readlink: linkPath="", errc=-40
Cannot create WinFsp-FUSE file system.
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Destroy:
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: >Destroy:
The service rclone has failed to start (Status=80070050).
2021/05/12 18:49:15 ERROR : S3 bucket my-bucket-name: Mount failed
2021/05/12 18:49:32 ERROR : mountpoint "s:" didn't became available on mount - continuing anyway
2021/05/12 18:49:32 DEBUG : Not calling host.Unmount as mount already Destroyed
2021/05/12 18:49:32 DEBUG : Unmounted successfully
2021/05/12 18:49:32 DEBUG : vfs cache: cleaner exiting
2021/05/12 18:49:32 Fatal error: failed to umount FUSE fs: mount failed

I didn't know that. I can't think of a sensible workaround other than putting it in the docs, can you?

I'm not sure the docs do either.

Two rclone mounts mounting different things can safely share the cache, however...

It isn't recommended to share the same cache if you have two rclone mounts pointed at the same thing.
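A quick sketch with hypothetical remote names: two mounts like these can share a cache directory because they point at different buckets, and the VFS cache stores files under the remote name and path inside the cache directory (as visible in your log):

rclone mount s3:bucket-a x: --vfs-cache-mode full --cache-dir C:\rclone-cache
rclone mount s3:bucket-b y: --vfs-cache-mode full --cache-dir C:\rclone-cache

But two mounts of s3:my-bucket-name should each have their own --cache-dir, as you have done.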

hello and welcome to the forum,

when i use rclone to access a rclone mount or a VSS snapshot, i always add a timestamp.
that way i can spawn as many rclone commands as needed, none of them interacting with the others.
and i rclone mount to a folder.

for a given remote,
rclone mount remote: b:\rclone\mount\remote\20210513.0826 --log-level=DEBUG --log-file=C:\data\rclone\logs\remote\20210513.0826\rclone.log

The only other thing I could think of would be whether it is easy (or even possible) to tell if the volname given on the command line is already in use, perhaps based on the return value of some API call. Then rclone could return an error message telling the user to change the volume name. But if the volume belongs to another user, security probably prevents finding that out.

Thank you for the information on the cache directories!

Even if it is possible to change the code, that will probably take longer than adding to the documentation. I don't mind making a first cut at the docs. I looked at the contributing guidelines and it looks like I should just follow the instructions but ignore the part about code. Does that sound right? I don't want to step on anyone's toes.

That's a great idea. Thank you!

Unfortunately, the feedback rclone gets from WinFsp about that is pretty minimal.

The easiest way is to go to the source code page and then click the pencil icon at the top right. I suggest a sentence around line 264 would be the right place. You can then submit a pull request entirely in the browser.
