Improper handling of "/" in S3

What is the problem you are having with rclone?

Improper handling of "/". Basically we have a folder (label) "/" within our bucket, and within that we have a folder name that I'll refer to as bleh. Although unintuitive, this was done for a specific compatibility reason. It looks like rclone ls was fixed at some point, since the original version I had couldn't ls it properly either, but on the latest version copy results in the following:

2023/10/09 17:28:10 INFO : There was nothing to transfer

This does work correctly using s3cmd.

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0

  • os/version: debian 11.5 (64 bit)
  • os/kernel: 5.10.0-18-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.1
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Note the double "//":

./rclone -v copy s3:/XXX//bleh/bleh/bleh.file /root/bleh.file

The rclone config contents with secrets removed.

[s3]
type = s3
provider = AWS
(secrets/access removed)

A log from the command with the -vv flag

2023/10/09 17:53:12 DEBUG : rclone: Version "v1.64.0" starting with parameters ["./rclone" "-vv" "copy" "s3:/XXX//bleh/bleh/bleh.file" "/root/bleh.file"]
2023/10/09 17:53:12 DEBUG : Creating backend with remote "s3:/XXX//bleh/bleh/bleh.file"
2023/10/09 17:53:12 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/10/09 17:53:12 DEBUG : fs cache: renaming cache item "s3:/XXX//bleh/bleh/bleh.file" to be canonical "s3:/XXX//bleh/bleh/bleh.file"
2023/10/09 17:53:12 DEBUG : Creating backend with remote "/root/bleh.file"
2023/10/09 17:53:12 DEBUG : Local file system at /root/bleh.file: Waiting for checks to finish
2023/10/09 17:53:12 DEBUG : Local file system at /root/bleh.file: Waiting for transfers to finish
2023/10/09 17:53:12 INFO  : There was nothing to transfer
2023/10/09 17:53:12 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:         0.1s

2023/10/09 17:53:12 DEBUG : 6 go routines active

If the special character is part of your file and/or directory name, you'd have to escape it:

[felix@gemini test]$ mkdir "dir\dir"
[felix@gemini test]$ mv blah\\test.txt dir\\dir/
[felix@gemini test]$ ls
'dir\dir'
[felix@gemini test]$ rclone ls /home/felix/test
        0 dir\dir/blah\test.txt

[felix@gemini test]$ rclone ls /home/felix/test/dir\\dir/
        0 blah\test.txt

I'm not sure how you'd even create a name containing a forward slash on Linux, though, since "/" is the path separator. You'd have to get creative and use a Unicode look-alike or something; I've never seen it done.
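As an illustration of the "get creative with Unicode" idea (this is my own example, assuming a UTF-8-capable filesystem): U+2215 DIVISION SLASH renders almost identically to "/" but is an ordinary character as far as the filesystem is concerned, so it can appear in a directory name:

```shell
# U+2215 DIVISION SLASH looks like "/" but is not a path separator,
# so mkdir treats the whole string as a single directory name.
cd "$(mktemp -d)"
mkdir 'dir∕dir'
ls
```

Of course that only gets you a look-alike locally; it doesn't help address an S3 key whose actual prefix contains "//".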

This isn't supported at the moment, as you can't make a valid path from XXX//bleh/bleh/bleh.file. There is at least one issue about this with some ideas for fixes, but I can't find it at the moment!
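For what it's worth, the behaviour matches ordinary path cleaning, which collapses the empty segment created by "//". A quick sketch, using GNU realpath purely as a stand-in for whatever normalization rclone applies internally (that internal detail is an assumption on my part):

```shell
# Standard path cleaning drops the empty segment between the two
# slashes, so "XXX//bleh/..." collapses to "XXX/bleh/..." and the
# original object key can no longer be expressed as a path.
realpath -m --relative-to=. 'XXX//bleh/bleh/bleh.file'
```

Since S3 keys are opaque strings rather than filesystem paths, the "//" is perfectly legal server-side, which is why s3cmd and the AWS tools handle it fine.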

That's unfortunate. I was hopeful given that "ls" began working in one of the newer versions, where previously it returned a similar error. Given that this is supported via the API and all the AWS command-line tools, I was surprised to see it not working here. Based on your feedback, I take it this would not be a straightforward fix for rclone?

Thank you!

Maybe this is the issue?
S3: Prefixes with '//' causes rclone commands to fail / look for the wrong objects · Issue #5858 · rclone/rclone · GitHub


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.