[Solved] IDrive E2 lists bucket as empty

What is the problem you are having with rclone?

Listing the contents of an S3 bucket hosted on IDrive E2 always returns an empty root folder, even though the bucket has contents.

I am currently running rclone copy gdrive: idrivee2: in parallel, in case that is what is causing this behavior, although I don't think it is the culprit.

The bucket is being populated correctly; I can see the files showing up in the IDrive web interface.

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0

- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-191-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

IDrive E2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone lsd idrivee2:

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[idrivee2]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = stuff.b7j4.par.idrivee2-21.com
no_check_bucket = true

A log from the command that you were trying to run with the -vv flag

Since the -vv flag gave no real help, I skipped straight to --dump bodies, as recommended in a similar thread.
I also took the liberty of beautifying the dump and truncating part of it so it would fit on Pastebin.

https://pastebin.com/fFF0uhG8

If you need anything else, I'll be more than happy to provide it.
Thanks in advance, and thanks for such great software!

You don't want to put the bucket name in the endpoint. Remove the bucket name from the endpoint and put it on the end of the rclone command instead: idrivee2:stuff
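For reference, a sketch of what the corrected setup would look like, based on the redacted config above (assuming stuff is the bucket name and b7j4.par.idrivee2-21.com is the base endpoint):

[idrivee2]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = b7j4.par.idrivee2-21.com
no_check_bucket = true

Then list the bucket with the bucket name on the command line:

rclone lsd idrivee2:stuff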

yup... that worked.

But then shouldn't the copy command also have failed?
It feels a bit inconsistent that copying works fine but lsd doesn't.
Why is having the bucket in the endpoint OK for copy but not for lsd?

The APIs nearly work with your original config, and I think that's because of the way the S3 protocol evolved: S3 supports virtual-hosted-style requests, where the bucket name is part of the hostname, so bucket-scoped operations can still resolve with the bucket baked into the endpoint. However, not everything in the S3 SDK works like that, as you've seen.
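Roughly speaking (a sketch of the request shapes involved; the exact URLs rclone builds may differ):

# Virtual-hosted style: bucket in the hostname. With the bucket baked into
# the endpoint, bucket-scoped calls such as object uploads still land in the
# right place, which is why the copy kept working:
PUT https://stuff.b7j4.par.idrivee2-21.com/path/to/object

# Path style: bucket in the path. This is what rclone builds from the
# corrected endpoint plus idrivee2:stuff on the command line:
PUT https://b7j4.par.idrivee2-21.com/stuff/path/to/object

# But rclone lsd idrivee2: at the root issues a list-buckets style request,
# and against a bucket-scoped hostname the response parses as an empty root:
GET https://stuff.b7j4.par.idrivee2-21.com/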

Thanks for answering my follow-up questions.

One final question though: since I'm already running rclone copy gdrive: idrivee2:, I don't need to stop it and start it again, because the running process already has the config loaded in memory.
But if I do stop the process and start it again, should I now run rclone copy gdrive: idrivee2:stuff/ with a trailing /, or without?

In that case, rclone will remove the /. A debug log would show:

DEBUG : fs cache: renaming cache item "remote:bucket/" to be canonical "remote:bucket"
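So either form works the same way; for example:

rclone copy gdrive: idrivee2:stuff
rclone copy gdrive: idrivee2:stuff/   # the trailing / is stripped and the path canonicalized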

