Why does the `s3` backend ignore HTTP 301 errors?


What is the problem you are having with rclone?

When I use the fs API to list files under a bucket in another region, I get an empty directory instead of an error (I expected a BucketRegionError).
The following code ignores 301 errors. Why should 301 be treated specially?
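This is roughly the check I expected to be able to write. A sketch: `isBucketRegionError` is my own hypothetical helper, and "BucketRegionError" is the error code aws-sdk-go v1 attaches to a 301 response with an empty body:

```go
import (
	"errors"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

// isBucketRegionError is a hypothetical helper: it reports whether err wraps
// the "BucketRegionError" code that aws-sdk-go v1 produces for a bare 301,
// assuming rclone surfaces the underlying SDK error.
func isBucketRegionError(err error) bool {
	var aerr awserr.Error
	if errors.As(err, &aerr) {
		return aerr.Code() == "BucketRegionError"
	}
	return false
}
```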

Run the command 'rclone version' and share the full output of the command.

None.

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

None.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

None.

A log from the command that you were trying to run with the -vv flag

None.

What is supposed to happen is that rclone follows the redirect to the correct region.

I can't remember exactly how this is supposed to work, but if you want to see what is going on then use `-vv --dump headers`.
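For example, something like this against the problem bucket (remote and bucket names are placeholders):

```sh
rclone lsf -vv --dump headers s3:bucket-in-other-region
```

The dumped responses should show the 301 and the region S3 is redirecting to.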

Thanks for your reply. I don't use the rclone command line. I develop my own application using the file system api provided by rclone.

The key code looks like this:

f, err := fs.NewFs(ctx, "s3:") // remote rooted at "s3:", i.e. no bucket in the path
checkErr(err)
v := vfs.New(f, nil)
dirs, err := v.ReadDir("/bucket-in-other-region")
assertIsBucketRegionError(err) // but here err is nil and `dirs` is empty (in fact there are files in `bucket-in-other-region`)
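
For completeness, here is a minimal self-contained version of what I am doing. This is a sketch: it assumes a remote named `s3` is defined in rclone.conf, and the bucket name is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"

	_ "github.com/rclone/rclone/backend/s3" // register the s3 backend
	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configfile"
	"github.com/rclone/rclone/vfs"
)

func main() {
	ctx := context.Background()
	configfile.Install() // load the default rclone.conf, where the "s3" remote is assumed to be defined

	// Root the Fs at "s3:" with no bucket, so reading a bucket's directory
	// goes through the list-all-buckets path.
	f, err := fs.NewFs(ctx, "s3:")
	if err != nil {
		log.Fatal(err)
	}

	v := vfs.New(f, nil) // nil = default VFS options
	entries, err := v.ReadDir("/bucket-in-other-region")
	fmt.Printf("entries=%d err=%v\n", len(entries), err) // observed: entries=0 err=<nil>
}
```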

That is the reason it doesn't follow the redirect: you are listing all buckets.
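Rooting the Fs at the bucket itself, rather than at `s3:`, should give the backend a chance to follow the redirect for that bucket. An untested sketch, reusing the names from the snippet above:

```go
// Untested sketch: make the bucket part of the remote path so listings are
// issued against that bucket directly, not via the all-buckets root.
f, err := fs.NewFs(ctx, "s3:bucket-in-other-region")
```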

Rclone could do it, but it would make the code more complicated.
