Can't ls a subdirectory? Sorry, total newbie here

This might sound silly, but I can ls my root folder, yet when I go down one level I get nothing. I must be doing something dumb here; any pointers are appreciated.

C:\R_Clone>rclone --no-check-certificate lsl mar://nonprod-short/
740471 2019-11-11 08:50:28.647573000 Test2/R_Clone/ZZ_rclone_upload_test todays date.txt
740460 2019-10-17 11:24:34.131737500 Test2/R_Clone/ZZ_rclone_upload_test.txt
866385 2019-10-17 11:24:32.662718500 Test2/R_Clone/rclone.1

C:\R_Clone>rclone --no-check-certificate lsl mar://nonprod-short/Test2/

returns nothing... thanks for any help, -newbie

It looks like you are doing the right thing...

Which backend are you using?

Can you paste what happens when you do

rclone --no-check-certificate -vv lsl mar:nonprod-short/Test2

Note that you don't need the // unless you are using FTP/SFTP, in which case you should use just a single /.
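For reference, the remote path forms look roughly like this (a sketch using the `mar:` remote from this thread; `mysftp:` is a made-up SFTP remote name for illustration):

```shell
# Most backends (S3 etc.): nothing between the colon and the bucket/path
rclone lsl mar:nonprod-short/Test2

# FTP/SFTP: a single leading / makes the path absolute on the server
rclone lsl mysftp:/home/user/Test2

# A double // after the colon is never needed on any backend
```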

Thanks, it doesn't tell me a whole lot, but here it is:

C:\R_Clone>rclone --no-check-certificate -vv lsl mar:nonprod-short/Test2
2019/11/11 10:39:13 DEBUG : rclone: Version "v1.49.5" starting with parameters ["rclone" "--no-check-certificate" "-vv" "lsl" "mar:nonprod-short/Test2"]
2019/11/11 10:39:13 DEBUG : Using config file from "C:\Users\pem9013\.config\rclone\rclone.conf"
2019/11/11 10:39:13 DEBUG : 5 go routines active
2019/11/11 10:39:13 DEBUG : rclone: Version "v1.49.5" finishing with parameters ["rclone" "--no-check-certificate" "-vv" "lsl" "mar:nonprod-short/Test2"]

I appreciate the extra set of eyes, not sure why it isn't working for me...


Sorry, I forgot to mention it is an on-premises S3.
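For context, an on-premises S3-compatible remote is typically set up in rclone.conf along these lines (the remote name matches this thread, but the keys and endpoint here are placeholders; `provider = Other` is the usual choice for non-AWS S3-compatible storage):

```
[mar]
type = s3
provider = Other
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
endpoint = https://s3.internal.example.com
```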

I'm trying this from Windows, but I also tried it from a Docker container on Linux and can't get it to work there either; ls at the root works there too. Not a big problem since I can see all my files/folders, but it would be nice to have this working.

thank you,

C:\R_Clone>rclone version
rclone v1.49.5

  • os/arch: windows/amd64
  • go version: go1.12.10


  1. What is the name of the on-premises S3?
  2. You are using an old version; please update to the latest version.

thank you, trying to get an answer for #1, I'll update to the latest version too.

I have a project to migrate 2 petabytes to our S3. Super happy that rclone keeps the timestamps; that's a life saver.

Hello, it is a NetApp StorageGRID. Since it's object storage, I think I'm not thinking about this the right way; I'm used to dealing with normal file system directories. I can view my files from the root, so I'm OK for now. Thanks for the help!

There is something a little fishy going on... perhaps a not-quite-100%-compliant S3 interface?

It might also be worth trying --fast-list and/or --disable ListR to see if those do anything different!

Thanks Nick, --disable ListR did the trick. Not too sure how compliant our storage is, but for whatever reason this worked!

much appreciated.


That is good! You can use --disable ListR with all the rclone commands, so hopefully it will be a good enough workaround.
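For anyone else who lands here, the workaround looks like this (the bucket and paths are the ones from this thread; the sync line is just an illustration of using the same flag during a migration):

```shell
# Disable the server-side recursive listing (ListR) feature when listing
rclone --no-check-certificate --disable ListR lsl mar:nonprod-short/Test2

# The same flag can be passed to copy/sync as well
rclone --no-check-certificate --disable ListR sync C:\data mar:nonprod-short/Test2
```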
