S3's "NoSuchUpload" error

What is the problem you are having with rclone?

Certain files won't upload with the S3 backend (MinIO) wrapped with rclone crypt.

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.0-beta.8363.4db09331c
- os/version: Microsoft Windows 11 Pro 23H2 (64 bit)
- os/kernel: 10.0.22631.4169 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.23.2
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

S3 (MinIO)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --config rclone-hot-storage.conf copy "c:\Users\Araki\Desktop\temp\Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" -P -vvv

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = XXX
location_constraint = XXX
acl = private
region = XXX

[s3_crypt]
type = crypt
remote = s3:rclone-data/encrypted
password = XXX
password2 = XXX
filename_encoding = base32768

Despite the forum template's guidance, I'd rather not post the endpoint publicly in plain text.

A log from the command that you were trying to run with the -vv flag

2024/10/04 19:48:50 DEBUG : rclone: Version "v1.69.0-beta.8363.4db09331c" starting with parameters ["rclone" "--config" "rclone-hot-storage.conf" "copy" "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" "-P" "-vvv"]
2024/10/04 19:48:50 DEBUG : Creating backend with remote "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz"
2024/10/04 19:48:50 DEBUG : Using config file from "V:\\rclone\\rclone-hot-storage.conf"
2024/10/04 19:48:50 DEBUG : fs cache: renaming child cache item "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" to be canonical for parent "//?/c:/Users/Araki/Desktop/temp/Backups/PCs/Laptop Lenovo C340-11"
2024/10/04 19:48:50 DEBUG : Creating backend with remote "s3_crypt:Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz"
2024/10/04 19:48:50 DEBUG : Creating backend with remote "s3:rclone-data/encrypted/痠毕獾潓驡胁貥篜俟/䝩辦ဌ稨㢫缽㜘唍駟/ဖꆎꑬ嫐ꙓތꏪ材纎㲁鮼為靾柑㮂铊砄ɟ/䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ"
2024/10/04 19:48:50 DEBUG : fs cache: renaming cache item "s3_crypt:Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" to be canonical "s3_crypt:Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz"
2024/10/04 19:48:50 DEBUG : bionic-20211121-2249.tar.gz: Need to transfer - File not found at Destination
2024/10/04 19:48:50 DEBUG : bionic-20211121-2249.tar.gz: Computing md5 hash of encrypted source
2024/10/04 19:48:57 DEBUG : 䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: open chunk writer: started multipart upload: NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjAzMGQwODFmLWVmZjgtNDU5ZC1iN2UzLThhNDA1ODNmOGE5M3gxNzI4MDYwNTM2NDM5OTI5Njc0
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 0 size 5Mi offset 0/2.875Gi
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 1 size 5Mi offset 5Mi/2.875Gi
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 2 size 5Mi offset 10Mi/2.875Gi
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 3 size 5Mi offset 15Mi/2.875Gi
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: Cancelling multipart upload
2024/10/04 19:48:57 DEBUG : 䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: multipart upload "NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjAzMGQwODFmLWVmZjgtNDU5ZC1iN2UzLThhNDA1ODNmOGE5M3gxNzI4MDYwNTM2NDM5OTI5Njc0" aborted
2024/10/04 19:48:57 ERROR : bionic-20211121-2249.tar.gz: Failed to copy: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB4DE6FF8D2EEC, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 19:48:57 ERROR : Attempt 1/3 failed with 1 errors and: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB4DE6FF8D2EEC, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: Need to transfer - File not found at Destination
2024/10/04 19:48:57 DEBUG : bionic-20211121-2249.tar.gz: Computing md5 hash of encrypted source
2024/10/04 19:48:58 INFO  : Signal received: interrupt
2024/10/04 19:48:58 INFO  : Exiting...

Info

The relevant part seems to be this:

2024/10/04 19:48:57 ERROR : bionic-20211121-2249.tar.gz: Failed to copy: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB4DE6FF8D2EEC, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 19:48:57 ERROR : Attempt 1/3 failed with 1 errors and: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB4DE6FF8D2EEC, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.

When running rclone move, about 50 files in total report the same issue, all relatively big (the smallest, I believe, is ~300MB). Initially I thought the files had somehow become corrupted when I transferred them using the sftp backend with an outdated version of rclone, but that turned out not to be the case once I narrowed it down (though it still might have broken something in the bucket?).

Running rclone backend --config rclone-hot-storage.conf list-multipart-uploads s3: returns this:

{
        "rclone-data": []
}

So does the aws cli.
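
(For reference, the aws cli check would be something like this, with the real endpoint substituted for XXX:)

aws s3api list-multipart-uploads --bucket rclone-data --endpoint-url XXX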

On the backend side, MinIO also can't find anything related to aborted/incomplete uploads with ./mc ls --recursive --incomplete /home/minio/minio/rclone-data/ (no output).
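
In principle, unfinished multipart uploads could also be purged with rclone's S3 backend cleanup command, though given the empty listings above there should be nothing left to purge:

rclone --config rclone-hot-storage.conf backend cleanup s3:rclone-data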

And tracing while uploading returns this: (trace output not preserved here)

Weirdly, just changing the filename fixes the issue (example below), but I'd like to fix it without renaming the files or creating a new bucket, if that's possible. Is there anything I can do about this?
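
For instance, copying the very same file to a slightly different name succeeds (the renamed target here is just an example):

rclone --config rclone-hot-storage.conf copyto "c:\Users\Araki\Desktop\temp\Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249_renamed.tar.gz" -P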

Thanks.

welcome to the forum,

is there a specific reason to test using a beta, versus stable?


fwiw, rclone copy expects the dest to be a folder, not a file;
to copy to an exact file path, use rclone copyto
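
for example, with the paths from your command, either of these should behave the same:

rclone --config rclone-hot-storage.conf copy "c:\Users\Araki\Desktop\temp\Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\PCs\Laptop Lenovo C340-11" -P
rclone --config rclone-hot-storage.conf copyto "c:\Users\Araki\Desktop\temp\Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" -P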

Hello, I switched to the beta version when I was trying to fix the issue on my own. The stable version has the same behavior. Here's a double-check:

rclone v1.68.1
- os/version: Microsoft Windows 11 Pro 23H2 (64 bit)
- os/kernel: 10.0.22631.4169 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.23.1
- go/linking: static
- go/tags: cmount

This time, with copyto:

rclone --config rclone-hot-storage.conf copyto "c:\Users\Araki\Desktop\temp\Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\PCs\Laptop Lenovo C340-11\bionic-20211121-2249.tar.gz" -P -vvv

2024/10/04 20:31:51 DEBUG : rclone: Version "v1.68.1" starting with parameters ["rclone" "--config" "rclone-hot-storage.conf" "copyto" "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" "s3_crypt:Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz" "-P" "-vvv"]
2024/10/04 20:31:51 DEBUG : Creating backend with remote "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz"
2024/10/04 20:31:51 DEBUG : Using config file from "V:\\rclone\\rclone-hot-storage.conf"
2024/10/04 20:31:51 DEBUG : fs cache: adding new entry for parent of "c:\\Users\\Araki\\Desktop\\temp\\Backups\\PCs\\Laptop Lenovo C340-11\\bionic-20211121-2249.tar.gz", "//?/c:/Users/Araki/Desktop/temp/Backups/PCs/Laptop Lenovo C340-11"
2024/10/04 20:31:51 DEBUG : Creating backend with remote "s3_crypt:Backups/PCs/Laptop Lenovo C340-11/"
2024/10/04 20:31:51 DEBUG : Creating backend with remote "s3:rclone-data/encrypted/痠毕獾潓驡胁貥篜俟/䝩辦ဌ稨㢫缽㜘唍駟/ဖꆎꑬ嫐ꙓތꏪ材纎㲁鮼為靾柑㮂铊砄ɟ"
2024/10/04 20:31:51 DEBUG : bionic-20211121-2249.tar.gz: Need to transfer - File not found at Destination
2024/10/04 20:31:52 DEBUG : bionic-20211121-2249.tar.gz: Computing md5 hash of encrypted source
2024/10/04 20:31:58 DEBUG : 䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: open chunk writer: started multipart upload: NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjA3ZDFmMDZiLTViZTctNDVkZi1iYjZmLTM3YTYyZTg1YzE4MngxNzI4MDYzMTE3NzMxMDgwNDcy
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 0 size 5Mi offset 0/2.875Gi
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 1 size 5Mi offset 5Mi/2.875Gi
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 2 size 5Mi offset 10Mi/2.875Gi
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: multipart upload: starting chunk 3 size 5Mi offset 15Mi/2.875Gi
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: Cancelling multipart upload
2024/10/04 20:31:58 DEBUG : 䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: multipart upload "NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjA3ZDFmMDZiLTViZTctNDVkZi1iYjZmLTM3YTYyZTg1YzE4MngxNzI4MDYzMTE3NzMxMDgwNDcy" aborted
2024/10/04 20:31:58 ERROR : bionic-20211121-2249.tar.gz: Failed to copy: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB5040033B04B7, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 20:31:58 ERROR : Attempt 1/3 failed with 1 errors and: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB5040033B04B7, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: Need to transfer - File not found at Destination
2024/10/04 20:31:58 DEBUG : bionic-20211121-2249.tar.gz: Computing md5 hash of encrypted source
2024/10/04 20:31:59 INFO  : Signal received: interrupt
2024/10/04 20:31:59 INFO  : Exiting...

And with mount also:

rclone --config rclone-hot-storage.conf mount s3_crypt: S: --vfs-cache-mode full -vvv

2024/10/04 20:34:51 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: vfs cache: starting upload
2024/10/04 20:34:51 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: Computing md5 hash of encrypted source
2024/10/04 20:34:57 DEBUG : 痠毕獾潓驡胁貥篜俟/䝩辦ဌ稨㢫缽㜘唍駟/ဖꆎꑬ嫐ꙓތꏪ材纎㲁鮼為靾柑㮂铊砄ɟ/䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: open chunk writer: started multipart upload: NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjJiZDgxYmE0LTdjN2YtNDFmMC04YWZjLTFkMTY2Y2Y5MGE2Y3gxNzI4MDYzMjk2NjU4NDQ1NDg4
2024/10/04 20:34:57 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: multipart upload: starting chunk 0 size 5Mi offset 0/2.875Gi
2024/10/04 20:34:57 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: multipart upload: starting chunk 1 size 5Mi offset 5Mi/2.875Gi
2024/10/04 20:34:57 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: multipart upload: starting chunk 2 size 5Mi offset 10Mi/2.875Gi
2024/10/04 20:34:57 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: multipart upload: starting chunk 3 size 5Mi offset 15Mi/2.875Gi
2024/10/04 20:34:57 DEBUG : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: Cancelling multipart upload
2024/10/04 20:34:57 DEBUG : 痠毕獾潓驡胁貥篜俟/䝩辦ဌ稨㢫缽㜘唍駟/ဖꆎꑬ嫐ꙓތꏪ材纎㲁鮼為靾柑㮂铊砄ɟ/䠼睇㭚嗜龛㣴ꍄ鉏␚桎䜙覸爯╜橩劓憻ɟ: multipart upload "NGI1YzhjNGUtZDdhYS00OTBiLTliNDEtZWNlYTYwYTg5NjA1LjJiZDgxYmE0LTdjN2YtNDFmMC04YWZjLTFkMTY2Y2Y5MGE2Y3gxNzI4MDYzMjk2NjU4NDQ1NDg4" aborted
2024/10/04 20:34:57 ERROR : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: Failed to copy: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB5069A9BB55FA, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.
2024/10/04 20:34:57 ERROR : Backups/PCs/Laptop Lenovo C340-11/bionic-20211121-2249.tar.gz: vfs cache: failed to upload try #1, will retry in 10s: vfs cache: failed to transfer file from cache to remote: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: 17FB5069A9BB55FA, HostID: e8a951b3eb570bf9fcd6fabcdc8fdf3f745c8f289f3c8a42e09005c3cfde6602, api error NoSuchUpload: The specified multipart upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.

fwiw, best not to test with mount, just adds a layer of complexity.


filename_encoding = base32768 is "experimental"

from rclone docs

  1. "For cloud storage systems using UTF-16 to store file names internally (e.g. OneDrive, Dropbox, Box), base32768 can be used to drastically reduce file name length."
    and
  2. S3 allows any valid UTF-8 string as a key.

so not sure base32768 is the best choice for s3

@kapitainsky, what do you think?

I have never used base32768 on S3 remotes; not sure why at this point, but I surely had some reasons :)

The best thing is always to check, and there is an rclone command or flag for everything:

rclone test info --check-length --check-base32768 remote:dir

replace remote:dir with real values.

It will test every single one of the (surprise :)) 32768 characters used by this encoding. On slow remotes this takes some time.

MinIO IMO often yields strange results, as its overall behaviour depends strongly on the underlying file system and the skills of the administrator.

I've been wondering whether I even need base32768, but a few days ago, even with base32768, some files on the rclone serve sftp backend refused to upload because of filename length until I shortened the names slightly; that made me stick with base32768. I suppose with the default encoding the number of filename-related issues would be even greater, right?

They weren't even that long, though; some had maybe about 120 Latin characters and ~1-2 special ones like ; and +. A very rough recollection.

Sorry, do I run the test command for the remote (s3) or for the crypt wrapper (s3_crypt)?

rclone --config rclone-hot-storage.conf test info --check-length --check-base32768 "s3_crypt:Backups/PCs/Laptop Lenovo C340-11"
2024/10/04 22:27:22 EME operates on 1 to 128 block-cipher blocks, you passed 513
2024/10/04 22:27:22 EME operates on 1 to 128 block-cipher blocks, you passed 257
2024/10/04 22:27:22 EME operates on 1 to 128 block-cipher blocks, you passed 129
2024/10/04 22:27:24 EME operates on 1 to 128 block-cipher blocks, you passed 1025
2024/10/04 22:27:24 EME operates on 1 to 128 block-cipher blocks, you passed 513
2024/10/04 22:27:24 EME operates on 1 to 128 block-cipher blocks, you passed 257
2024/10/04 22:27:24 EME operates on 1 to 128 block-cipher blocks, you passed 129
2024/10/04 22:27:26 EME operates on 1 to 128 block-cipher blocks, you passed 1537
2024/10/04 22:27:26 EME operates on 1 to 128 block-cipher blocks, you passed 769
2024/10/04 22:27:26 EME operates on 1 to 128 block-cipher blocks, you passed 385
2024/10/04 22:27:26 EME operates on 1 to 128 block-cipher blocks, you passed 193
2024/10/04 22:27:27 EME operates on 1 to 128 block-cipher blocks, you passed 2049
2024/10/04 22:27:27 EME operates on 1 to 128 block-cipher blocks, you passed 1025
2024/10/04 22:27:27 EME operates on 1 to 128 block-cipher blocks, you passed 513
2024/10/04 22:27:27 EME operates on 1 to 128 block-cipher blocks, you passed 257
2024/10/04 22:27:27 EME operates on 1 to 128 block-cipher blocks, you passed 129
2024/10/04 22:28:22 NOTICE: Encrypted drive 's3_crypt:Backups/PCs/Laptop Lenovo C340-11/rclone-test-info-benozid3/test-base32768': 0 differences found
2024/10/04 22:28:22 NOTICE: Encrypted drive 's3_crypt:Backups/PCs/Laptop Lenovo C340-11/rclone-test-info-benozid3/test-base32768': 1028 hashes could not be checked
2024/10/04 22:28:22 NOTICE: Encrypted drive 's3_crypt:Backups/PCs/Laptop Lenovo C340-11/rclone-test-info-benozid3/test-base32768': 1028 matching files
// s3_crypt
maxFileLength = 143 // for 1 byte unicode characters
maxFileLength = 78 // for 2 byte unicode characters
maxFileLength = 47 // for 3 byte unicode characters
maxFileLength = 35 // for 4 byte unicode characters
base32768isOK = true // make sure maxFileLength for 2 byte unicode chars is the same as for 1 byte characters

Maybe I really should create a new bucket with the default filename_encoding and test if it would give me any trouble.
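
Something like this, I suppose (remote and bucket names are illustrative):

[s3_crypt_new]
type = crypt
remote = s3:rclone-data-new/encrypted
password = XXX
password2 = XXX
# filename_encoding omitted, so it falls back to the default (base32)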

MinIO buckets are on the XFS filesystem btw.

It looks like base32768 is supported. Not perfect, as your system counts bytes rather than characters, but you should still see some benefit over base64 (minimal though). You can encode 15 bits in two bytes vs 6 bits per byte.
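
Roughly:

base64:    6 bits of filename data per byte stored
base32768: 15 bits per character, i.e. 7.5 bits per byte if the backend stores names as UTF-16 (2 bytes per character)

If the backend stores names as UTF-8 instead, where these characters take 3 bytes each, the advantage can disappear entirely, which would fit the short limits measured above.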

But the supported filename length is strangely short. I have no experience with XFS, but I can't believe it is an XFS limit. Maybe you should look into your MinIO configuration?

For comparison, this is another S3 provider (iDrive). I think they use MinIO too, as I saw some MinIO error messages one day:

// iDrive
maxFileLength = 998 // for 1 byte unicode characters
maxFileLength = 499 // for 2 byte unicode characters
maxFileLength = 332 // for 3 byte unicode characters
maxFileLength = 249 // for 4 byte unicode characters

Only noticed it now: you are running it on the wrong remote. It should be:

rclone --config rclone-hot-storage.conf test info --check-length --check-base32768 s3:test_bucket

crypt itself supports everything; you want to test the underlying remote you use to store the encrypted files.

Yeah, makes sense. Initially I didn't fully understand what the command does; it's all clear now. And the underlying remote does support up to 255 characters:

rclone --config rclone-hot-storage.conf test info --check-length --check-base32768 s3:rclone-data

2024/10/05 17:52:03 NOTICE: S3 bucket rclone-data path rclone-test-info-fayiduv4/test-base32768: 0 differences found
2024/10/05 17:52:03 NOTICE: S3 bucket rclone-data path rclone-test-info-fayiduv4/test-base32768: 1028 matching files
// s3
maxFileLength = 255 // for 1 byte unicode characters
maxFileLength = 127 // for 2 byte unicode characters
maxFileLength = 85 // for 3 byte unicode characters
maxFileLength = 63 // for 4 byte unicode characters
base32768isOK = true // make sure maxFileLength for 2 byte unicode chars is the same as for 1 byte characters

An update: I created a new bucket and transferred my data there. 9 files reported Object name contains unsupported characters for legitimate reasons (long names with non-Latin characters); I have since renamed them, and everything feels healthy now. As for the 255-character limit, I took a look at the MinIO docs and it seems to be a known limitation:

Maximum length for object names: 1024
Maximum length for '/'-separated object name segments: 255
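
For reference, the transfer itself was essentially a copy between the old and new crypt remotes, something like the sketch above, i.e.:

rclone --config rclone-hot-storage.conf copy s3_crypt: s3_crypt_new: -P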

So technically the problem is solved. The old bucket is still online, though, if there's a reason to keep digging into the original problem (rclone failing to upload some files whose earlier uploads were ungracefully aborted). Should we treat it as an issue on MinIO's side and call it a day, or is there anything we can do that might benefit rclone?
In any case, @kapitainsky and @asdffdsa, thank you for your time~

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.