Failed to create filesystem

Hi all,

I'm having the following issue with rclone sync SFTP:/foo/ S3:/bar/foo/. If S3:/bar/foo/ doesn't exist, the first run works fine, but the second run fails with the error below. I'm using the rclone/rclone:latest docker image for this.
Any ideas?

type = sftp
host =
user = user
pass = hashedpass

type = s3
secret_access_key = secret
region = us-east-1
endpoint =
env_auth = false
location_constraint = us-east-1
acl = private
provider = Minio
access_key_id = access_key

Executing rclone -v --sftp-use-insecure-cipher sync SFTP:/foo/ S3:/bar/foo/
Thu Oct 10 02:05:09 2019 2019/10/09 23:05:09 Failed to create file system for "S3:/bar/foo/": is a file not a directory

What is the result of:

rclone ls S3:/bar/foo/ ?

The error claims this is a file and not a folder - which does seem weird - but what does a manual inspection show?

I suspect the error is due to rclone being confused because you are not inputting the expected format.
A Google Drive is fine with being referenced as MyGdrive:
But bucket-based remotes need the name of the bucket as part of the syntax. Something like this:

S3:bucketname/path/to/filename

More simplified/generalized:

remotename:bucketname/path/to/filename

(the filename part is omitted if you are referencing a folder rather than a file obviously, which is usually the case when using rclone for typical tasks)

I suspect you've been trying to do

S3:/bar/foo/

The correct way to reference bucket-based remotes is not super obvious as it's different to other remotes, so I just want to check that the problem is not as simple as that...
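To illustrate the difference in plain Python (just a conceptual sketch of the path logic, not rclone's actual code - the names are made up):

```python
def split_bucket_path(path: str):
    """Split a bucket-based remote path into (bucket, key).

    For a bucket-based remote like S3, the first path segment after
    the colon is the bucket name; the rest is the object key prefix.
    """
    path = path.strip("/")
    bucket, _, key = path.partition("/")
    return bucket, key

print(split_bucket_path("bucketname/bar/foo"))  # -> ('bucketname', 'bar/foo')
print(split_bucket_path("/bar/foo/"))           # -> ('bar', 'foo') - "bar" silently becomes the bucket!
```

So with S3:/bar/foo/ the first segment ("bar") gets treated as the bucket, which may or may not be what you intended.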

EDIT: I see in your config you have
endpoint =
and this may render the above point moot. I just don't know as I haven't used that setting. I don't have access to S3 and my bucket-based remote experience comes from Google Cloud.

Maybe @asdffdsa can help here as he is an S3 user with more direct experience.
(the mention will notify him of the topic)

hey, stop twisting my arm, ok, ok, i will say it,
i agree with @thestigma, :worried:
but i would run this command instead

rclone ls S3:/bar/

and post the output here and we can discuss it.

Well, I've tried all kinds of combinations: with a trailing /, without one, the bucket name in the config, the bucket name as part of the sync command, and so on. For Minio this seems to be the only setup that works. Everything else told me "directory not found".

If I do

rclone ls S3:/bar/

or

rclone ls S3:/bar/foo/

it basically returns nothing (by "nothing" I don't mean an error - the exit code is 0, but no contents are shown). The weird thing here is that the first execution works if /bar/foo/ is not present but /bar/ is.

your s3 provider is a minio, which i am not familiar with.
so there is no file named foo in the sftp?

for your s3 provider, that endpoint might not be correct.
in my limited experience, the endpoint is a domain name, without a bucket.
perhaps try
endpoint =

I've tried all of that.
Again, first run works - I can always get a working run if I delete /bar/foo/. But the 2nd one fails again.

/foo/ in sftp is a directory

Syncing to

/bar/foo/ in Minio

Have you inspected the contents you sync on whatever interface your provider offers - just to check that it reflects what you think you did in rclone? I think that would be worth checking.

I agree with asdffdsa that it seems weird to have an endpoint with a bucket at the end. I've never seen that, and an endpoint is usually a contact point for the network where you access the service, not a place in your storage. However, since I've never had hands-on experience with this provider, I can't say it's definitely wrong. It just looks very unusual.

to be clear, you tried
endpoint =
instead of
endpoint =

i have no experience with an endpoint containing a bucket name, never seen that before.
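just to illustrate the shape i mean (a quick python sketch with made-up hostnames, not anything rclone does):

```python
from urllib.parse import urlparse

def endpoint_has_bucket(endpoint: str) -> bool:
    """Return True if the endpoint URL carries a path component,
    which would suggest a bucket name baked into the endpoint."""
    u = urlparse(endpoint)
    return u.path not in ("", "/")

print(endpoint_has_bucket("https://minio.example.com:9000"))      # False - plain scheme://host:port
print(endpoint_has_bucket("https://minio.example.com:9000/bar"))  # True - bucket baked into the URL
```

the endpoint should be just scheme://host:port - the bucket belongs in the remote path, not in the endpoint url.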

have you looked at
it has an endpoint of
endpoint =

I could not find any examples of people showing their configs for minio in previous forum posts but...

I did at least see another minio user (who seemed to get a working setup) using the remotename:bucketname format in their command

rclone.exe mount Minio:bucket_name X: --s3-upload-cutoff 0 --no-check-certificate

That address could obviously vary, but judging by that example, this does appear to be the correct format it expects, yes.

Definitely read through those instructions if you haven't already. That access point seems to be on the local network, so rclone presumably interfaces with the local server here and transmits via that - or something along those lines (again, I am very unfamiliar with this setup).

yes, that is a local network, but that is the example given.
the main point is that there is no bucket name in the endpoint.
and about the mount command you reference, for sure, there could/should be a bucket name there.

i think that we both agree that having a bucket name in the endpoint is something we have never seen before.

Yes, I've inspected it. It 100% works as expected on the first run. It creates the folder foo/ as a subfolder of /bar/, and all the contents from SFTP:/foo/ end up in S3:/bar/foo/ on Minio. 100% what I want and what I've inspected. Given that the first run is correct, I assume that this is not a config issue but maybe an issue with the Minio provider itself.

@asdffdsa - Yes, I've tried that. I'm also rebuilding the image now to check whether I can reflect the config from the example 100% (apart from server_side_encryption; Minio is behind HTTPS in my case).

i am not getting a clear answer to my question
have you tried to have an endpoint WITHOUT a bucket name?

thestigma and myself, both of us have NEVER seen an endpoint with a bucket name.

Yes, I've tried it without. I get 100% the same behaviour without the bucket name in the URL. The first run works, the second one gets the same error.

Referencing what is in the guide - could you perhaps share the stuff under:

When it configures itself Minio will print something like this: ?

Redact your secret keys before you post obviously.

I suppose it is possible that Minio has had further support added since that documentation was written, and that it is now possible to interface directly with the cloud server - but I can't speak to that. It is quite strange that anything would work at all otherwise, but I also wouldn't automatically assume from that that the configuration is correct either.

there is a website named, is that your endpoint?

if you look at,
it seems clear that the endpoint does not contain a bucket name.
and in each example i have seen, the endpoint has both an http/https prefix and a port number

aws --endpoint-url s3 ls

I can do the same easily without a bucketname in the URL.
The behaviour is the same. If

/bar/foo/

doesn't exist,

rclone sync SFTP:/foo/ S3:bucket/bar/foo/

Works. If /bar/foo/ exists, it fails with the error above. No, that's not my endpoint. I can't post the correct endpoint here, same as with the output of Minio. I just can't get it into my head that this should be a config issue, given that the first run works. Somehow rclone or Minio seems to recognize

/bar/foo/

as a file instead of a directory - even though my file browser shows that this is a directory.
A sync happened in the first place. But the whole point of sync is to execute it multiple times and keep the 2 sources in sync.
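For illustration, here is a rough Python sketch (not rclone's actual code) of how listings on S3-compatible stores have to be interpreted: there are no real directories, only object keys, so if some tool created a zero-byte marker object at the folder's own key, the path can classify as a file even though objects exist underneath it:

```python
def classify(keys, path):
    """Classify a path against a set of object keys, roughly the way a
    bucket-based backend must: S3-style stores have no real directories."""
    path = path.strip("/")
    if path in keys:
        return "file"       # an object exists with exactly this key
    if any(k.startswith(path + "/") for k in keys):
        return "directory"  # only objects *under* this prefix exist
    return "missing"

# With a zero-byte marker object at "bar/foo", the exact-key match wins:
print(classify({"bar/foo", "bar/foo/a.txt"}, "bar/foo"))  # -> 'file'
print(classify({"bar/foo/a.txt"}, "bar/foo"))             # -> 'directory'
```

Whether that is what is happening here is an assumption on my part, but it would match the symptom: the first sync creates the structure, and the second run then sees /bar/foo/ as a file.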

To add here:

rclone ls S3:bucket/

and

rclone ls S3:bucket/foo/

give no output.

I understand your point of view - but if you aren't at least willing to try setting up a second test-remote (you wouldn't need to remove your first one) that follows the pattern in the documentation then I'm just not sure how much more we (or at least I) can help you.

The first step of any troubleshooting is typically to make sure the setup is correct. If we can't verify that, then everything after that point becomes speculation, as I have no idea how rclone would react to it.

Well, this is a production setup at scale that has been in use for around 6 months. What exactly are you referring to? The port 9000? The loadbalancer maps port 443 with SSL offloading to 9000. The bucket name? I've removed that from the endpoint and put it in the command - what did I miss?