I started using rclone recently and I am amazed by the work of the creators and the community; it is an awesome piece of software!
The only thing I'm missing (and have come up with an ugly workaround hack for) is serving a remote over S3. I saw this issue came up a while back here, but it was closed without further comments.
My simple solution is to mount the remote as a filesystem using FUSE, then start a minio instance using the FUSE mountpoint as its data path.
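The workaround described above could look roughly like this (remote name, mountpoint, and flags are just examples, not the exact setup from the original post):

```shell
# Mount the remote with FUSE; --vfs-cache-mode writes helps minio,
# which expects normal file semantics on its data path.
rclone mount onedrive: /mnt/onedrive --daemon --vfs-cache-mode writes

# Point a minio server at the mountpoint so it serves the remote over S3.
minio server /mnt/onedrive --address :9000
```

This works, but every object passes through the kernel FUSE layer twice, which is part of what makes native support attractive.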
I think this functionality could be relatively easily ported natively into rclone.
And while I know many things "could be done", the reason I think this one also *should* be done is that S3 is one of the most widely used object storage protocols, supported by many applications in many different areas.
The goal I am (and I'm pretty sure many others are) after is to serve e.g. a OneDrive backend and have rclone act as an S3-compatible server.
An S3 client could then connect to the rclone server, create buckets, upload objects, etc., where the underlying storage is a OneDrive (or any other rclone-supported) backend.
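To make the goal concrete, here is a hedged sketch of how a standard S3 client might talk to such a server (the endpoint, port, bucket, and file names are all hypothetical):

```shell
# Any S3 client that supports a custom endpoint would do; the AWS CLI
# accepts one via --endpoint-url, so it could target a local rclone server.
aws --endpoint-url http://localhost:9000 s3 mb s3://my-bucket
aws --endpoint-url http://localhost:9000 s3 cp ./photo.jpg s3://my-bucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://my-bucket
```

The point is that existing S3 tooling would work unmodified while the bytes actually land on the OneDrive backend.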
Unfortunately the links in my original post were broken; I'll try to embed them again:
There should just be some sanity checks, e.g. when issuing the command:
```
rclone serve s3 remote:/path/to/s3/root
```
We should check that there are only folders in /path/to/s3/root (there can only be buckets at the root) and only files inside those buckets (S3 has no real nested directory structure; "folders" are just key prefixes). If these sanity checks pass, the S3 server can be started; the server implementation itself has been done excellently by the minio team.
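The two sanity checks above can be sketched as a small shell function (the function name and layout rules are my own illustration of the idea, not anything rclone ships):

```shell
# Hypothetical sketch: verify a directory tree can be exposed as an S3 root,
# i.e. only buckets (directories) at the top level and only objects (files)
# inside them.
check_s3_root() {
  root="$1"
  # Check 1: the root may contain only directories - each becomes a bucket.
  [ "$(find "$root" -maxdepth 1 -type f | wc -l)" -eq 0 ] || return 1
  # Check 2: buckets may contain only files - no nested directories.
  [ "$(find "$root" -mindepth 2 -type d | wc -l)" -eq 0 ] || return 1
  return 0
}
```

Usage would be something like `check_s3_root /mnt/onedrive && minio server /mnt/onedrive`. A native implementation could of course list the remote directly instead of walking a mountpoint.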
I think this is a feature I really would like to have for rclone.
What I'd really like, though, is for someone else to make a Go library I could import into rclone and use, as the S3 protocol is quite tricky!
There are two possible ideas on that page. I don't think minio has moved to a library format, and it is probably unlikely to, so that leaves the gofakes3 library. It might be fun to build up a test with that library...