Size command sometimes fails


What is the problem you are having with rclone?

When trying to get the size of a bucket (expected size is approx. 280 GB), I sometimes get an error and sometimes the correct response.

I ran the command to capture the log included below.

Running it again just now, I got the expected output:

Total objects: 780 (780)
Total size: 238.124 GiB (255684029473 Byte)
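
For reference, rclone's GiB figure is just the byte count divided by 2^30, so the two numbers above are consistent; a quick sanity check (awk used here only for the floating-point math):

```shell
# 1 GiB = 2^30 bytes; verify rclone's conversion of the byte count above.
awk 'BEGIN { printf "%.3f GiB\n", 255684029473 / (1024 * 1024 * 1024) }'
# prints: 238.124 GiB
```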

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.0-35-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

s3 / Minio

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone size s3:MediaTest03Nov --low-level-retries 1 --log-file ~/size.log --log-level DEBUG

(I was told to include --low-level-retries by my provider)

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[remote]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = https://drive.remote.tld
acl = bucket-owner-full-control
upload_cutoff = 100Mi
chunk_size = 50Mi

[s3]
type = alias
remote = remote:data/personal-files
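
For readers unfamiliar with alias remotes: the alias simply prefixes its target onto whatever path you pass, which is why the debug log below shows `s3:MediaTest03Nov` becoming `remote:data/personal-files/MediaTest03Nov`. A sketch of that substitution (variable names are illustrative; rclone does this internally when creating the backend):

```shell
# How the [s3] alias resolves a path (illustrative sketch).
alias_target="remote:data/personal-files"  # from: remote = remote:data/personal-files
path="MediaTest03Nov"                      # from: rclone size s3:MediaTest03Nov
resolved="${alias_target}/${path}"
echo "$resolved"
# prints: remote:data/personal-files/MediaTest03Nov
```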

A log from the command that you were trying to run with the -vv flag

2023/11/03 15:54:52 DEBUG : rclone: Version "v1.64.2" starting with parameters ["rclone" "size" "s3:MediaTest03Nov" "--low-level-retries" "1" "--log-file" "/home/user/size.log" "--log-level" "DEBUG"]
2023/11/03 15:54:52 DEBUG : Creating backend with remote "s3:MediaTest03Nov"
2023/11/03 15:54:52 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
2023/11/03 15:54:52 DEBUG : Creating backend with remote "remote:data/personal-files/MediaTest03Nov"
2023/11/03 15:54:52 DEBUG : Resolving service "s3" region "us-east-1"
2023/11/03 15:54:53 DEBUG : pacer: low level retry 1/2 (error InternalServerError: Internal Server Error
	status code: 500, request id: 1794350DFB795F57, host id: )
2023/11/03 15:54:53 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023/11/03 15:54:54 DEBUG : pacer: low level retry 2/2 (error InternalServerError: Internal Server Error
	status code: 500, request id: 1794350E20CCB23D, host id: )
2023/11/03 15:54:54 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/11/03 15:54:54 DEBUG : fs cache: renaming cache item "s3:MediaTest03Nov" to be canonical "remote:data/personal-files/MediaTest03Nov"
2023/11/03 15:56:34 DEBUG : pacer: Reducing sleep to 15ms
2023/11/03 15:56:34 DEBUG : 5 go routines active
2023/11/03 15:56:34 Failed to size: : 
	status code: 524, request id: , host id: 

It is normal to get pacer messages; that is why --low-level-retries defaults to 10.

It is strange that a provider would tell you to use --low-level-retries 1 and then rate-limit you after that.

Yeah, the pacer messages aren't really the issue so much as the:

Failed to size: : 
	status code: 524, request id: , host id: 

It's intermittent though, so I was wondering if there was some troubleshooting or other settings I could look at to avoid the problem in the future.
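
Since the failure is intermittent, one workaround while the root cause is investigated would be to wrap the command in a retry loop. A minimal sketch (the attempt limit and backoff values are illustrative choices, not rclone defaults; rclone's own retry behaviour is controlled by --retries and --low-level-retries):

```shell
#!/bin/sh
# Retry wrapper for commands that fail intermittently (e.g. with 500/524).
retry() {
  attempt=1
  max_attempts=5
  while ! "$@"; do
    [ "$attempt" -ge "$max_attempts" ] && return 1  # give up after max_attempts
    sleep "$attempt"                                # linear backoff: 1s, 2s, ...
    attempt=$((attempt + 1))
  done
}

# Usage with the command from this thread (requires a configured rclone):
#   retry rclone size s3:MediaTest03Nov --low-level-retries 1
```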

And what exactly is the provider?

Are you using Cloudflare R2? Status code 524 is a Cloudflare-specific timeout error.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.