Trying to understand the difference between rclone mount vs s3fs in terms of stability


Let me thank you in advance for any help you can give me. I have (at least I believe I have) searched extensively for a comparison between rclone mount and s3fs, and have come up short.

My question is what is the difference between using rclone mount vs using s3fs-fuse for an Ubuntu 20.04 LTS Linux VPS. I've been using s3fs for quite a few months, but from time to time (and without notice, nor entries in syslog) the mount would disappear and would require my intervention to unmount and remount.

The S3 bucket is a Wasabi S3 bucket with 15TB of both large and small files (the larger files are multimedia files), a total of 727,394 objects.

The way I am mounting the volume is as follows:
s3fs [bucket name] /mnt/[mountpoint] -o passwd_file=[path]/[to]/passwd-s3fs -o allow_other -o use_path_request_style -o url=

I have read that rclone mount can be faster, which would be nice, but stability is the #1 need. What do I gain/lose from each of the options? The mount is used as a general purpose mount. The amount of data is too much to buy a block volume!

Again, thank you so much. I hope my question will help others.

i know nothing about s3fs but rclone is very stable, as can be seen by the small number of posts with rclone mount issues.
and almost all of those have nothing to do with stability, instead about newbies getting confused with the vfs cache.

the vfs cache is a major, optional, feature of rclone.
--- chunked reading: rclone only downloads the parts of a file that are requested by an app, such as plex.
--- rclone caches those chunks of downloaded data,
so if you need to access them again, rclone will not have to reach out to wasabi to re-download them.
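to make that concrete, here is a minimal sketch of a mount using the vfs cache. the remote name `wasabi:`, the bucket name, and the mount point are hypothetical placeholders; the flags are real rclone flags.

```shell
# Hypothetical example: a remote named "wasabi" already defined in rclone.conf.
# --vfs-cache-mode full caches downloaded chunks on local disk, so repeated
# reads of the same byte ranges do not re-download from Wasabi.
rclone mount wasabi:my-bucket /mnt/wasabi \
  --vfs-cache-mode full \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 1G \
  --vfs-cache-max-size 20G \
  --allow-other \
  --daemon
```

`--vfs-read-chunk-size` starts each read with a small request and `--vfs-read-chunk-size-limit` lets the chunk size grow for sequential reads, which suits large media files; `--vfs-cache-max-size` caps how much local disk the cache may use.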

rclone can encrypt s3 data, at rest, in at least three ways.
--- rclone crypt remote, which is client side encryption, encrypts both data and dir/file names.
--- aws s3 client side encryption, encrypts data but NOT dir/file names.
--- aws s3 server-side encryption, tho wasabi does not support that.
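a crypt remote is just a second remote layered over the s3 remote. a hypothetical rclone.conf fragment (remote names, bucket, and the XXX placeholders are all assumptions; the password values must be the obscured form produced by `rclone obscure`):

```shell
# Hypothetical config: "secret:" wraps "wasabi:my-bucket/encrypted" and
# encrypts both file contents and dir/file names client-side, so Wasabi
# only ever sees ciphertext.
cat >> ~/.config/rclone/rclone.conf <<'EOF'
[wasabi]
type = s3
provider = Wasabi
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.wasabisys.com

[secret]
type = crypt
remote = wasabi:my-bucket/encrypted
filename_encryption = standard
directory_name_encryption = true
password = XXX
EOF
```

you would then mount `secret:` instead of `wasabi:my-bucket` and rclone handles the encryption transparently.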

for uploading files using the mount,
--- there are many s3 flags to optimize the transfers.
--- if a chunk fails to upload, rclone will re-upload just that chunk, not the entire file.
--- verifies the file transfer using an md5 checksum.
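a hypothetical sketch of tuning those upload flags on a mount (remote and bucket names are placeholders; the flags are real):

```shell
# Hypothetical tuning for large multipart uploads through the mount.
# Files are split into 64M chunks uploaded 4 at a time; if one chunk
# fails, only that chunk is retried, and the md5 checksum is verified.
rclone mount wasabi:my-bucket /mnt/wasabi \
  --vfs-cache-mode writes \
  --s3-chunk-size 64M \
  --s3-upload-concurrency 4 \
  --allow-other
```

`--vfs-cache-mode writes` buffers writes to local disk first, which is what makes reliable multipart upload and retry possible through a mount.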

rclone supports over three dozen commands, over a dozen backends, and hundreds of providers.
so learn rclone once and use it again and again.

Thanks! This is great info. But what I really need to understand is the differences. I have every confidence that rclone mount is very stable, however, I am trying to also understand whether I lose any functionality, etc. with rclone mount vs s3fs.

well, you also asked about what you gain.
i have listed a set of features that rclone and rclone mount has, as related to s3.
does s3fs have any of those features?

I have used s3fs only briefly, but I think it is better at preserving Unix metadata, eg file ownership, permissions etc. If you are just serving a pile of media you won't care about this.

From the s3fs page

large subset of POSIX including reading/writing files, directories, symlinks, mode, uid/gid, and extended attributes

Rclone is very limited here. You get one user and one set of perms, with no support for symlinks or extended attributes.

compatible with Amazon S3, and other S3-based object stores


allows random writes and appends

With --vfs-cache-mode writes or full

large files via multi-part upload
renames via server-side copy
optional server-side encryption
data integrity via MD5 hashes
in-memory metadata caching
local disk data caching
user-specified regions, including Amazon GovCloud
authenticate via v2 or v4 signatures

Rclone does all of these.

Thank you for your additions. Given that I serve both media and nextcloud data, may I interpret your recommendation as sticking with s3fs for now? I had hoped the caching offered by rclone might be worth the change.

Thank you for highlighting the "gains" in rclone.

Why not just test it out and see? It's hard to say things for certain as everyone has a very unique setup/use case.

The amount of things I try and test to see if it works better is what keeps all this stuff fun :slight_smile:

Which owners own the data in your s3fs? If it is all owned by one user then rclone will do fine.

If not, you'll have to work harder. Eg split into two mounts or make a common group.

I have 3 folders:

  1. archive (all owned by me)
  2. media (all owned by me)
  3. nextcloud (all owned by nextcloud)

So it looks like if I split this into 2 buckets and 2 mounts, it might work. Thank you for that. I'm in the process of moving my nextcloud data out of that bucket into another for backup, so I'll give it a try after I have a backup. Thank you. Good info.

Can you point me to the docs on why one versus multi-user. I would like to learn more about this. Thanks again.

Everything in an rclone mount is owned by a single user/group. You can set which user with the --uid flag and which group with the --gid flag.

So you could make everything served by the nextcloud user, but add yourself to the nextcloud group so you can still read and write the files.

Or you could make a special data user/group (say) which you add both yourself and the nextcloud user to.

You'd want to adjust the permissions rclone reports so the files are group writable with --file-perms and --dir-perms.
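Putting those flags together, a hypothetical mount serving the nextcloud data (the bucket name, mount point, and user name are assumptions):

```shell
# Hypothetical: everything in the mount appears owned by the nextcloud
# user and group, with group-writable perms so other members of the
# nextcloud group (e.g. you) can read and write the files too.
rclone mount wasabi:nextcloud-bucket /mnt/nextcloud \
  --uid "$(id -u nextcloud)" \
  --gid "$(id -g nextcloud)" \
  --file-perms 0664 \
  --dir-perms 0775 \
  --allow-other
```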

If you are up to speed with UNIX permissions then this should all make sense hopefully!

Just remember: for each mount, all files and dirs are owned by one user and one group, and all the files have the same permissions and all the dirs have the same permissions.

For the kind of things people use cloud storage for, this turns out not to be too limiting.

Thank you. It seems like for what I am looking for, it is well worth the effort to try. Luckily, my application of the S3 backend (or specifically the Wasabi backend) is such that each mount is owned by one user. Thank you again.

