Remote works but unable to access any file over SFTP

Hi,

What is the problem you are having with rclone?

The remote is working and accessible: files can be listed and accessed directly, but not through the exposed SFTP endpoint.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1
- os/version: alpine 3.15.4 (64 bit)
- os/kernel: 4.18.0-372.19.1.el8_6.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Ceph-s3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone serve sftp ceph-s3: --user admin --pass ... --addr :2022 -vv --poll-interval 10s

The rclone config contents with secrets removed.

    [ceph-s3]
    type = s3
    provider = Ceph
    endpoint = <ceph s3 endpoint, removed>
    env_auth = false
    region =
    acl =
    location_constraint =
    server_side_encryption =
    storage_class =

A log from the command with the -vv flag

2022/10/10 12:07:36 DEBUG : Setting --config "/etc/rclone.conf" from environment variable RCLONE_CONFIG="/etc/rclone.conf"
2022/10/10 12:07:36 DEBUG : Setting default for s3-access-key-id="..." from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/10/10 12:07:36 DEBUG : Setting default for s3-secret-access-key="..." from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/10/10 12:07:36 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "serve" "sftp" "ceph-s3:" "--user" "admin" "--pass" "..." "--addr" ":2022" "-vv" "--poll-interval" "10s"]
2022/10/10 12:07:36 DEBUG : Creating backend with remote "ceph-s3:"
2022/10/10 12:07:36 DEBUG : Using config file from "/etc/rclone.conf"
2022/10/10 12:07:36 DEBUG : Setting s3_access_key_id="..." from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/10/10 12:07:36 DEBUG : Setting s3_secret_access_key="..." from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/10/10 12:07:36 DEBUG : ceph-s3: detected overridden config - adding "{Qclov}" suffix to name
2022/10/10 12:07:36 DEBUG : Setting s3_access_key_id="..." from environment variable RCLONE_S3_ACCESS_KEY_ID
2022/10/10 12:07:36 DEBUG : Setting s3_secret_access_key="..." from environment variable RCLONE_S3_SECRET_ACCESS_KEY
2022/10/10 12:07:36 DEBUG : fs cache: renaming cache item "ceph-s3:" to be canonical "ceph-s3{Qclov}:"
2022/10/10 12:07:36 INFO  : S3 root: poll-interval is not supported by this remote
2022/10/10 12:07:36 NOTICE: Loaded 0 authorized keys from "/.ssh/authorized_keys"
2022/10/10 12:07:36 DEBUG : Failed to load "/.cache/rclone/serve-sftp/id_rsa": failed to load private key: open /.cache/rclone/serve-sftp/id_rsa: no such file or directory
2022/10/10 12:07:36 NOTICE: Generating 2048 bit key pair at "/.cache/rclone/serve-sftp/id_rsa"
2022/10/10 12:07:36 DEBUG : Loaded private key from "/.cache/rclone/serve-sftp/id_rsa"
2022/10/10 12:07:36 DEBUG : Failed to load "/.cache/rclone/serve-sftp/id_ecdsa": failed to load private key: open /.cache/rclone/serve-sftp/id_ecdsa: no such file or directory
2022/10/10 12:07:36 NOTICE: Generating ECDSA p256 key pair at "/.cache/rclone/serve-sftp/id_ecdsa"
2022/10/10 12:07:36 DEBUG : Loaded private key from "/.cache/rclone/serve-sftp/id_ecdsa"
2022/10/10 12:07:36 DEBUG : Failed to load "/.cache/rclone/serve-sftp/id_ed25519": failed to load private key: open /.cache/rclone/serve-sftp/id_ed25519: no such file or directory
2022/10/10 12:07:36 NOTICE: Generating Ed25519 key pair at "/.cache/rclone/serve-sftp/id_ed25519"
2022/10/10 12:07:36 DEBUG : Loaded private key from "/.cache/rclone/serve-sftp/id_ed25519"
2022/10/10 12:07:36 NOTICE: SFTP server listening on [::]:2022
2022/10/10 12:08:36 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:       1m0.0s

2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: ssh auth "none" from "SSH-2.0-OpenSSH_9.0": ssh: no auth passed yet
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Password login attempt for admin
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: ssh auth "password" from "SSH-2.0-OpenSSH_9.0": OK
2022/10/10 12:08:46 INFO  : serve sftp 25.5.14.93:48122->25.5.247.208:2022: SSH login from admin using SSH-2.0-OpenSSH_9.0
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Incoming channel: session
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Channel accepted
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Request: subsystem
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Subsystem: sftp
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022:  - accepted: true
2022/10/10 12:08:46 DEBUG : serve sftp 25.5.14.93:48122->25.5.247.208:2022: Starting SFTP server

When connecting through the SFTP endpoint, none of the files or buckets are accessible.

~ $ lftp -e "set sftp:auto-confirm yes ; open -u admin sftp://rclone:2022;"
Password:
lftp admin@rclone:/> ls
lftp admin@rclone:~> ls test-s3-bucket/
ls: ls test-s3-bucket/: Access failed: file does not exist (test-s3-bucket/)
lftp admin@rclone:/> cat test-s3-bucket/test_file.txt
cat: Access failed: file does not exist (test-s3-bucket/test_file.txt)
lftp admin@rclone:/> 

See the output of rclone ls on the same remote:

rclone ls ceph-s3://test-s3-bucket
       12 test_file.txt
       12 test_folder/files/test_file.txt

Any idea what could be the issue?
Thanks

Hi Naralas,

I have no immediate ideas and suggest you try simplifying to narrow in on the issue.

You could e.g. try serving from a local folder like this:

rclone serve sftp ./ --user admin --pass ... --addr :2022 -vv

and then list it with the native sftp command.

If that doesn't work, then try using an explicit IP address, the default SFTP port (22), etc.

When the simplified command works, you can try adding your S3 config, --poll-interval, and lftp one thing at a time.
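As a rough sketch of that progression (reusing the flags from your original command; adjust as needed):

    # 1. Baseline: serve a local folder
    rclone serve sftp ./ --user admin --pass ... --addr :2022 -vv

    # 2. Swap in the S3 remote
    rclone serve sftp ceph-s3: --user admin --pass ... --addr :2022 -vv

    # 3. Re-add --poll-interval last
    rclone serve sftp ceph-s3: --user admin --pass ... --addr :2022 -vv --poll-interval 10s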

hello and welcome to the forum,

You might try IPv4, for example:
--addr=127.0.0.1:2022 or --addr=192.168.1.2:2022

Thank you both for your replies. I know exposing and accessing the SFTP endpoint itself works; I tried it again just to make sure.

rclone serve sftp / --user admin --pass ... --addr :2022 -vv

From the client :

lftp -e "set sftp:auto-confirm yes ; open -u admin sftp://rclone:2022; ls"
Password:
drwxr-xr-x    1 0        0               0 Oct 11 08:28 .cache
drwxr-xr-x    1 0        0               0 Apr 29 11:58 bin
drwxr-xr-x    1 0        0               0 Apr 29 12:03 data
...

The issue seems to be between the remote and the SFTP layer. Which bucket policies are required to serve the remote over SFTP?

Thanks, good to know.

I have never tried S3, so I can't do detailed S3 troubleshooting; that will have to wait until it becomes daytime in @asdffdsa's time zone.

I did, however, note that you are serving the top level (ceph-s3:). What happens if you serve a bucket or a (sub)folder in a bucket, e.g. ceph-s3:test-s3-bucket?

This is really interesting: it actually worked when serving the bucket directly:
rclone serve sftp ceph-s3:test-s3-bucket ...

I guess this might be an issue related to the policies on the Ceph cluster, which might not allow listing the buckets.

Great, I agree, it sounds like the policies could be the issue.

Perhaps we can get a bit closer. I guess you will also see the issue with this very simple command:

rclone lsd ceph-s3:

If so, then we might be able to see how the Ceph cluster responds by using this command:

rclone lsd ceph-s3: --dump headers

or the even more detailed:

rclone lsd ceph-s3: --dump responses

Note: --dump responses most likely contains sensitive info, so please redact if posting.

This is really interesting information, thank you. It seems that for a single bucket the request sent is a ListBucket, whereas for the top level it is a ListAllMyBuckets, which fails. This further supports the hypothesis of a bucket policy issue. Here are the payloads of both rclone lsd ceph-s3:test-s3-bucket --dump responses and rclone lsd ceph-s3: --dump responses. Both return HTTP 200 OK.

<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult
    xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Name>test-s3-bucket</Name>
    <Prefix></Prefix>
    <MaxKeys>1000</MaxKeys>
    <Delimiter>/</Delimiter>
    <IsTruncated>false</IsTruncated>
    <CommonPrefixes>
        <Prefix>test_folder/</Prefix>
    </CommonPrefixes>
    <Contents>
        <Key>test_file.txt</Key>
        <LastModified>2022-10-07T06:43:26.617Z</LastModified>
        <ETag>&quot;c865cc004a5d11909ef62ed025c14447&quot;</ETag>
        <Size>12</Size>
        <StorageClass>STANDARD</StorageClass>
        <Owner>
            <ID>redacted</ID>
            <DisplayName>redacted</DisplayName>
        </Owner>
        <Type>Normal</Type>
    </Contents>
    <Marker></Marker>
</ListBucketResult>
<ListAllMyBucketsResult
	xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
	<Owner>
		<ID>redacted</ID>
		<DisplayName>redacted</DisplayName>
	</Owner>
	<Buckets></Buckets>
</ListAllMyBucketsResult>

Perhaps you can find the right setting in this section; it has an example:
https://rclone.org/s3/#s3-permissions

I noted this:

When using the lsd subcommand, the ListAllMyBuckets permission is required.
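For reference, a minimal sketch of the statement that grants that permission, assuming your Ceph cluster accepts AWS-style policy JSON (how user-level permissions are actually granted on Ceph RGW may differ):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:ListAllMyBuckets",
                "Resource": "arn:aws:s3:::*"
            }
        ]
    }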

Not sure this applies to your case, but compared to the S3 policy in the rclone docs,
I use a much more locked-down policy and do not use ListAllMyBuckets.
As a result, I need to use --s3-no-check-bucket.
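Applied to the remote in this thread, that might look something like this (just a sketch, serving a single bucket instead of the root):

    rclone serve sftp ceph-s3:test-s3-bucket --s3-no-check-bucket --user admin --pass ... --addr :2022 -vv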

As a convenience, you can use an alias remote to make it easier to access S3 buckets.

For example,

[vbo365]
type = alias
remote = wasabi_vbo365_remote:vbo365

[wasabi_vbo365_remote]
type = s3
provider = Wasabi
access_key_id = 
secret_access_key = 
endpoint = s3.us-east-2.wasabisys.com

So rclone ls vbo365: is equivalent to rclone ls wasabi_vbo365_remote:vbo365.
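Adapted to the remote in this thread, that could look something like this (just a sketch; the alias name ceph-bucket is made up):

    [ceph-bucket]
    type = alias
    remote = ceph-s3:test-s3-bucket

so rclone ls ceph-bucket: would be equivalent to rclone ls ceph-s3:test-s3-bucket.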

Hi all,
Sorry for the late reply. I did some further investigation, and it really does seem that the ListAllMyBuckets permission is needed for the rclone serve sftp command to work.
I also tried your suggestion @asdffdsa with --s3-no-check-bucket, but it seems I can't access a file even when providing the direct path to it or to the bucket (on which I have sufficient permissions, just not ListAllMyBuckets). One solution would be to expose one bucket directly as a "landing zone" and then move data around as needed.
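For example, something like this (the bucket names landing-zone and final-bucket are just placeholders):

    # Expose only the landing-zone bucket over SFTP
    rclone serve sftp ceph-s3:landing-zone --user admin --pass ... --addr :2022 -vv

    # Then move uploaded data on to its destination with a separate job
    rclone move ceph-s3:landing-zone ceph-s3:final-bucket -v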

Thanks for your comments.

If you serve the root, then rclone needs to list all the buckets to create the directory listing of the root.

You could also use the combine backend to share just the buckets you need.

Something like this, which shares only the two buckets bucket1 and bucket2, shouldn't need the ListAllMyBuckets permission:

[ceph-s3]
type = s3
# as before

[all]
type = combine
upstreams = bucket1=ceph-s3:bucket1 bucket2=ceph-s3:bucket2
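
Then you would serve the combine remote instead of ceph-s3: directly, something like this (a sketch, using the [all] name from the config above and the flags from earlier in the thread; note that the combine backend may need a newer rclone than v1.58.1):

    rclone serve sftp all: --user admin --pass ... --addr :2022 -vv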
