This might sound silly: I can ls my root folder, but when I go down one level I get nothing. I must be doing something dumb here; any pointers are appreciated.
C:\R_Clone>rclone --no-check-certificate lsl mar://nonprod-short/
740471 2019-11-11 08:50:28.647573000 Test2/R_Clone/ZZ_rclone_upload_test todays date.txt
740460 2019-10-17 11:24:34.131737500 Test2/R_Clone/ZZ_rclone_upload_test.txt
866385 2019-10-17 11:24:32.662718500 Test2/R_Clone/rclone.1
C:\R_Clone>rclone --no-check-certificate lsl mar://nonprod-short/Test2/
It returns nothing... thanks for any help, -newbie
ncw
(Nick Craig-Wood)
November 11, 2019, 3:19pm
2
It looks like you are doing the right thing...
Which backend are you using?
Can you paste what happens when you do
rclone --no-check-certificate -vv lsl mar:nonprod-short/Test2
Note that you don't need the //, unless you are using FTP/SFTP, in which case you should be using just a single /.
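For example (the SFTP remote name below is only for illustration, it isn't from your config):

rclone lsl mar:nonprod-short/Test2
rclone lsl my-sftp:/home/user/Test2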
Thanks, it doesn't tell me a whole lot, but here it is:
C:\R_Clone>rclone --no-check-certificate -vv lsl mar:nonprod-short/Test2
2019/11/11 10:39:13 DEBUG : rclone: Version "v1.49.5" starting with parameters ["rclone" "--no-check-certificate" "-vv" "lsl" "mar:nonprod-short/Test2"]
2019/11/11 10:39:13 DEBUG : Using config file from "C:\Users\pem9013\.config\rclone\rclone.conf"
2019/11/11 10:39:13 DEBUG : 5 go routines active
2019/11/11 10:39:13 DEBUG : rclone: Version "v1.49.5" finishing with parameters ["rclone" "--no-check-certificate" "-vv" "lsl" "mar:nonprod-short/Test2"]
I appreciate the extra set of eyes; not sure why it isn't working for me...
Thanks
Sorry, forgot to mention it is an on-premises S3.
I'm trying this from Windows, but I also tried it from a Docker container on Linux and can't get it to work there either; ls at the root works there too. It's not a big problem since I can see all my files/folders, but it would be nice to have this working.
thank you,
C:\R_Clone>rclone version
rclone v1.49.5
os/arch: windows/amd64
go version: go1.12.10
Thank you, trying to get an answer for #1; I'll update to the latest version too.
I have a project to migrate 2 petabytes to our S3, and I'm super happy that rclone keeps the timestamps; that's a lifesaver.
Hello, it is a NetApp StorageGRID. Since it is object storage, I think I'm not approaching this the right way; I'm used to dealing with normal file system directories. I can view my files from the root, so I'm OK for now, thanks for the help!
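In case it helps, the remote is set up as a plain S3-compatible backend in rclone.conf, roughly like this (the keys and endpoint below are placeholders, not real values):

[mar]
type = s3
provider = Other
env_auth = false
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
endpoint = https://s3.example.internal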
ncw
(Nick Craig-Wood)
November 11, 2019, 9:46pm
9
There is something a little fishy going on... perhaps a not-100%-compliant S3 interface?
It might also be worth trying --fast-list and/or --disable ListR to see if those do anything different!
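For example, the same listing command with each flag added:

rclone --no-check-certificate -vv --fast-list lsl mar:nonprod-short/Test2
rclone --no-check-certificate -vv --disable ListR lsl mar:nonprod-short/Test2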
Thanks Nick, --disable ListR did the trick. Not too sure how compliant our storage is, but for whatever reason this worked!
much appreciated.
-Mark
ncw
(Nick Craig-Wood)
November 12, 2019, 3:49pm
11
That is good! You can use --disable ListR with all the rclone commands, so hopefully it will be a good enough workaround.
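For example, a copy for the migration would look something like this (the source path here is just a placeholder):

rclone --no-check-certificate --disable ListR copy C:\data mar:nonprod-short/Test2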
system
(system)
Closed
November 15, 2019, 3:49pm
12
This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.