Failed to copy, status code 520

What is the problem you are having with rclone?

When uploading a local directory to a remote store, most files transfer, but a few do not. When I try to isolate the issue and upload a single one of the failed files, I get the following error message:

Failed to copy: failed to upload chunk 1 with 52428800 bytes: :
        status code: 520, request id: , host id:

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.0-35-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3 compatible / Minio

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copyto "/mnt/local/Media/TV/Show Title/Season 10/Show Title (2015) - S10E05 - Episode Title [WEBDL-1080p h264 EAC3 2.0 8 bit] [Release Group].mkv" "s3crypt:TestMedia/TV/Show Title/Season 10/Show Title (2015) - S10E05 - Episode Title [WEBDL-1080p h264 EAC3 2.0 8 bit] [Release Group].mkv" --low-level-retries 1 --s3-chunk-size 50M --s3-upload-cutoff 100M --no-update-modtime --s3-no-head --checksum --log-file /home/username/28Oct1518.log --log-level DEBUG

My service provider told me to add --low-level-retries 1 --s3-chunk-size 50M --s3-upload-cutoff 100M. (The 52428800 bytes in the error above is exactly the 50M chunk size.)

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = https://drive.redacted.tld
acl = bucket-owner-full-control

[s3crypt]
type = crypt
remote = s3:data/personal-files/encrypt
password = XXX
password2 = XXX
filename_encoding = base32768

A log from the command that you were trying to run with the -vv flag

[Debug log](https://pastebin.com/88vCAayx)

More info:

I did some more investigating and I believe it's an encoding problem. Here's why.

When I run the command:
rclone backend decode s3crypt: 昬䎸䭛镒肻苖ሇ堷㜿/沫䭺紇ꌤ袁叽覇鑟ဟ/冩剸ᦌꇘ䇶傺ҷ赦ꎟ/赝迎㣓躋鼔䙎ꄐꌏꌟ/漫銡蘒鲝䋉佢ꗕ巿䓞巁棂泺釃䫙拺㲖ꃠ鸊苴ᣆⰎ䭸敺氓㙭㽢墜䫬罫䟠㴻ᛂ㑘䑚畤䨞沉瘜狔噼肌厐梢幛嘔駭Ⲝ☽䩝䰚�鵜梾㣈描䬒ሣ㝧脑業鬁䬩ᑆ䴹犃蕜ꂊዿɟ -vv --low-level-retries 1

I get the following output:

2023/10/28 15:55:31 DEBUG : rclone: Version "v1.64.2" starting with parameters ["rclone" "backend" "decode" "s3crypt:" "昬䎸䭛镒肻苖ሇ堷㜿/沫䭺紇ꌤ袁叽覇鑟ဟ/冩剸ᦌꇘ䇶傺ҷ赦ꎟ/赝迎㣓躋鼔䙎ꄐꌏꌟ/漫銡蘒鲝䋉佢ꗕ巿䓞巁棂泺釃䫙拺㲖ꃠ鸊苴ᣆⰎ䭸敺氓㙭㽢墜䫬罫䟠㴻ᛂ㑘䑚畤䨞沉瘜狔噼肌厐梢幛嘔駭Ⲝ☽䩝䰚�鵜梾㣈描䬒ሣ㝧脑業鬁䬩ᑆ䴹犃蕜ꂊዿɟ" "-vv" "--low-level-retries" "1"]
2023/10/28 15:55:31 DEBUG : Using config file from "/home/username/.config/rclone/rclone.conf"
2023/10/28 15:55:31 DEBUG : Creating backend with remote "s3:data/personal-files/encrypt"
2023/10/28 15:55:31 DEBUG : Resolving service "s3" region "us-east-1"
2023/10/28 15:55:32 DEBUG : pacer: low level retry 1/2 (error InternalServerError: Internal Server Error
        status code: 500, request id: 17925D9B984E9CE8, host id: )
2023/10/28 15:55:32 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023/10/28 15:55:32 DEBUG : pacer: low level retry 2/2 (error InternalServerError: Internal Server Error
        status code: 500, request id: 17925D9BA89B76CD, host id: )
2023/10/28 15:55:32 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/10/28 15:55:32 DEBUG : 5 go routines active
2023/10/28 15:55:32 Failed to backend: command "decode" failed: failed to decrypt: 昬䎸䭛镒肻苖ሇ堷㜿/沫䭺紇ꌤ袁叽覇鑟ဟ/冩剸ᦌꇘ䇶傺ҷ赦ꎟ/赝迎㣓躋鼔䙎ꄐꌏꌟ/漫銡蘒鲝䋉佢ꗕ巿䓞巁棂泺釃䫙拺㲖ꃠ鸊苴ᣆⰎ䭸敺氓㙭㽢墜䫬罫䟠㴻ᛂ㑘䑚畤䨞沉瘜狔噼肌厐梢幛嘔駭Ⲝ☽䩝䰚�鵜梾㣈描䬒ሣ㝧脑業鬁䬩ᑆ䴹犃蕜ꂊዿɟ: illegal base32768 data at input byte 100

Can you run base32768 validation on your remote?

rclone test info --check-length --check-base32768 s3:test-info

and share the results? It will check every possible base32768 character and the maximum file name length.

Afterwards you can delete the s3:test-info folder.

Sorry, it should include the bucket as well, e.g.:

rclone test info --check-length --check-base32768 s3:data/test-info

Sure, thanks for checking in. I don't have root-level permissions on this host, so I slightly modified the command:

rclone test info --check-length --check-base32768 s3:data/personal-files/test-info --low-level-retries 1

Many errors, but the gist of it is that base32768 does not seem to be compatible. I can paste the whole log if needed, but the end summary is:

// s3
maxFileLength = 254 // for 1 byte unicode characters
maxFileLength = 254 // for 2 byte unicode characters
maxFileLength = 254 // for 3 byte unicode characters
maxFileLength = 254 // for 4 byte unicode characters
base32768isOK = false // make sure maxFileLength for 2 byte unicode chars is the same as for 1 byte characters
2023/10/29 12:40:20 Failed to info with 19 errors: last error was: InternalError: We encountered an internal error, please try again.: cause({"Id":"","Code":0,"Detail":"could not retrieve node /user/test-info/test-base32768/0946-鰠鰡鰢鰣鰤鰥鰦鰧鰨鰩鰪鰫鰬鰭鰮鰯鰰鰱鰲鰳鰴鰵鰶鰷鰸鰹鰺鰻鰼鰽鰾鰿.txt","Status":""})
        status code: 500, request id: 1792A1827C6D17BD, host id:

The answer is simple: your remote does not support all characters used by the base32768 encoding. This Minio S3 does not allow some Unicode characters in file names.

Usually an S3-compatible store should, but in the case of Minio it is up to its operator to configure everything correctly.
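
One way to demonstrate it to the operator is to create a single object whose name contains one of the failing characters and watch it fail (the path below is just an example, and 鰠 is taken from the failing test file above):

rclone touch "s3:data/personal-files/probe-鰠.txt" -vv --low-level-retries 1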

Interesting. Bummer, but that answers the question at least.

Thanks for your time, hopefully base64 will be enough, LOL

Contact your provider - it might be a trivial configuration mistake on their end. Unicode characters should be supported without issues in 2023, especially the ones used by base32768 - they were chosen very conservatively.

To be safe with the S3 backend I suggest using base64, but you will have to make sure that your file names are not too long:

Roughly it means that with base64 your maximum file name length will be about 160 characters. It is best to test to get the exact value.
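
Switching means setting filename_encoding = base64 in your [s3crypt] section (note that, as far as I know, names already uploaded with base32768 will no longer decode after the change, so existing data would have to be re-uploaded). As a rough sketch of where the 160 figure comes from - assuming, as the crypt docs describe, that names are encrypted in 16-byte blocks with PKCS#7 padding and then encoded - a few lines of Python give an estimate:

import math

def encoded_len_base64(plain_len):
    # Rough size of a crypt file name after encryption and base64:
    # PKCS#7 pads to the next 16-byte block (always adding 1-16 bytes),
    # then unpadded base64 turns every 3 bytes into 4 characters.
    # plain_len counts bytes - same as characters for ASCII names.
    padded = (plain_len // 16 + 1) * 16
    return math.ceil(padded * 4 / 3)

# Longest plaintext name whose encoded form fits the 254-character
# limit reported by `rclone test info` above.
print(max(n for n in range(1, 256) if encoded_len_base64(n) <= 254))
# -> 175 by this estimate; 160 leaves a comfortable safety margin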

If you or your provider want to investigate further which character(s) are problematic, use:

https://pub.rclone.org/base32768.zip

It contains files with all base32768 characters in their names. It should make it easy to identify culprits.
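
For example (local and remote paths below are just placeholders - adjust them to your setup), extract the archive and copy its contents to the remote; every file whose upload fails points at an offending character:

unzip base32768.zip -d base32768-test
rclone copy base32768-test s3:data/base32768-test -vv --low-level-retries 1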

Thank you, I'll open a ticket and see if there's anything that can be done.

Any good tools to decode a base32768 string into bytes?

i.e. I have a string: ⍯茨╏檗秕傮ꎚ侭㔿/䧓牐麐䦂哃㢏䊙魞ꊟ/㺤⚁泛顬ሕ䤷䱩韺㕍‛␇㐜ꂈ嘫脳豧算糩ʟ/ޛ癨㮗嶴銩㪆㣐䊀ꉍ槚圣擏神⤔⋉侎ᘪݱ睰ꖐ陮侶酵䠭䀉豞㠽黰梫诔圯碹峾‛␌碼悖㺞璭ᗍ鯆敪嗦㶓䶛櫥㣔䫙㤂僘綨┶Ɵ and know that the error is in byte 18?

I am afraid there are no tools at all :) But every base32768 character encodes 15 bits and is exactly 2 bytes in UTF-16; in UTF-8, as the hexdump below shows, most of them take 3 bytes.

You can use some Linux command-line tools:

$ echo '⍯茨╏檗秕傮ꎚ侭㔿' | hexdump -C
00000000  e2 8d af e8 8c a8 e2 95  8f e6 aa 97 e7 a7 95 e5  |................|
00000010  82 ae ea 8e 9a e4 be ad  e3 94 bf 0a              |............|
0000001c
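
And if you want to map a byte offset from an error message back to the character it falls in, a few lines of Python will do (assuming the offset counts UTF-8 bytes - the string and offset below are taken from your example):

name = "⍯茨╏檗秕傮ꎚ侭㔿"
offset = 18  # byte offset from the error message

pos = 0
for i, ch in enumerate(name):
    width = len(ch.encode("utf-8"))  # 2 or 3 bytes per base32768 character
    if pos + width > offset:
        print("byte %d is inside character %d: %s (U+%04X)" % (offset, i, ch, ord(ch)))
        break
    pos += width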
Great info, thanks so much!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.