What is the problem you are having with rclone?
When using rclone mount to mount an AWS S3 bucket on Windows Server 2019, the first Windows user to mount the bucket succeeds. Subsequent users fail with an error message that, to me, is non-obvious.
Subsequent users can mount other buckets. The problem only happens when a second user logs into the same Windows box and tries to mount the same S3 bucket that the first user has already mounted. I am not using the WinFsp.Launcher infrastructure, so the first user's mount of the bucket is not visible to the second user. Having one of the users remove the --network-mode flag does not change the results. Changing the drive letter does not change the results.
Perhaps I missed something in the documentation telling me what I did wrong with my original setup. If you can see a mistake other than what my workaround addresses, please tell me. Otherwise, since my searching did not turn up any mention of this issue, this post documents the workaround I discovered. Hopefully the documentation and/or code will be changed so future users won't have to search forum posts.
To have the second user's mount succeed, just change the text after the --volname flag to something unique. So, apparently, whatever namespace is used by the --volname flag is scoped to the entire computer rather than per user. I do not know whether this behavior lives in rclone, in WinFsp, or in MS Windows itself.
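To illustrate the workaround, here is a sketch of the two users' mount commands with unique --volname values. The -administrator and -bob suffixes are my own naming choice, not anything required by rclone; any per-user unique string after --volname should do. The remaining flags are unchanged from the command shown below.

```
rem Administrator's session: volume name with a unique suffix
rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name-administrator --buffer-size 1048576 --vfs-cache-mode full --cache-dir C:\rclone-caches\my-bucket-name-administrator --vfs-cache-max-size 104857600 --no-modtime

rem Bob's session, same bucket and drive letter: different unique suffix
rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name-bob --buffer-size 1048576 --vfs-cache-mode full --cache-dir C:\rclone-caches\my-bucket-name-bob --vfs-cache-max-size 104857600 --no-modtime
```

With both volume names unique across the machine, the second mount no longer fails with "Cannot create WinFsp-FUSE file system."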
What is your rclone version (output from rclone version)?
rclone v1.55.1
- os/type: windows
- os/arch: amd64
- go/version: go1.16.3
- go/linking: dynamic
- go/tags: cmount
I have had this happen with WinFsp 1.8.20304 and 1.9.21096.
Which OS you are using and how many bits
Windows Server 2019 64-bit on an AWS EC2 instance using an IAM role that allows S3 access.
Which cloud storage system are you using?
Amazon AWS S3 accessed through an AWS EC2 instance with an assigned IAM role that allows S3 access.
The command you were trying to run
rclone mount s3:my-bucket-name s: --network-mode --volname \\s3\my-bucket-name --buffer-size 1048576 --vfs-cache-mode full --cache-dir C:\rclone-caches\bucket-name-user-name --vfs-cache-max-size 104857600 --no-modtime
Please note that in the command above, one user (the standard built-in Administrator account) used a --cache-dir of C:\rclone-caches\my-bucket-name-administrator and another user (Bob, who was granted Administrator rights) used a --cache-dir of C:\rclone-caches\my-bucket-name-bob. If the documentation specifies whether cache directories can be shared among multiple mounts, I did not fully understand when that is allowed, so I made sure they were never shared. I must admit I did not pay close attention to that portion of the docs, because empty directories are free.
The rclone config contents with secrets removed.
[s3]
type = s3
provider = AWS
env_auth = true
region = <redacted>
location_constraint = <redactedButExactlyMatchesRegionValue>
acl = bucket-owner-full-control
A log from the command with the -vv flag
2021/05/12 18:49:15 DEBUG : Using config file from "C:\\Users\\bob\\.config\\rclone\\rclone.conf"
2021/05/12 18:49:15 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "mount" "s3:my-bucket-name" "s:" "--network-mode" "--volname" "\\\\s3\\my-bucket-name" "--buffer-size" "1048576" "--vfs-cache-mode" "full" "--cache-dir" "C:\\rclone-caches\\my-bucket-name-bob" "--vfs-cache-max-size" "104857600" "--no-modtime" "-vv"]
2021/05/12 18:49:15 DEBUG : Creating backend with remote "s3:my-bucket-name"
2021/05/12 18:49:15 INFO : S3 bucket my-bucket-name: poll-interval is not supported by this remote
2021/05/12 18:49:15 DEBUG : vfs cache: root is "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : vfs cache: metadata root is "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : Creating backend with remote "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name"
2021/05/12 18:49:15 DEBUG : fs cache: renaming cache item "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name" to be canonical "//?/C:/rclone-caches/my-bucket-name-bob/vfs/s3/my-bucket-name"
2021/05/12 18:49:15 DEBUG : fs cache: switching user supplied name "\\\\?\\C:\\rclone-caches\\my-bucket-name-bob\\vfs\\s3\\my-bucket-name" for canonical name "//?/C:/rclone-caches/my-bucket-name-bob/vfs/s3/my-bucket-name"
2021/05/12 18:49:15 DEBUG : Network mode mounting is enabled
2021/05/12 18:49:15 DEBUG : Mounting on "s:" ("\\s3\\my-bucket-name")
2021/05/12 18:49:15 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Mounting with options: ["-o" "attr_timeout=1" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "--VolumePrefix=\\s3\\my-bucket-name"]
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Init:
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: >Init:
2021/05/12 18:49:15 DEBUG : /: Statfs:
2021/05/12 18:49:15 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2021/05/12 18:49:15 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/05/12 18:49:15 DEBUG : /: >Getattr: errc=0
2021/05/12 18:49:15 DEBUG : /: Readlink:
2021/05/12 18:49:15 DEBUG : /: >Readlink: linkPath="", errc=-40
Cannot create WinFsp-FUSE file system.
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: Destroy:
2021/05/12 18:49:15 DEBUG : S3 bucket my-bucket-name: >Destroy:
The service rclone has failed to start (Status=80070050).
2021/05/12 18:49:15 ERROR : S3 bucket my-bucket-name: Mount failed
2021/05/12 18:49:32 ERROR : mountpoint "s:" didn't became available on mount - continuing anyway
2021/05/12 18:49:32 DEBUG : Not calling host.Unmount as mount already Destroyed
2021/05/12 18:49:32 DEBUG : Unmounted successfully
2021/05/12 18:49:32 DEBUG : vfs cache: cleaner exiting
2021/05/12 18:49:32 Fatal error: failed to umount FUSE fs: mount failed