Storj with S3 backend - StatusCode 500 - InternalError?

What is the problem you are having with rclone?

Uploading large backup files would sometimes fail with a low-level error after reaching 100%. Initial testing of various file sizes with just crypt and Storj succeeded, but the 1.5TB backup file then failed. Subsequent testing with chunker added to the setup failed on the very first file, which had previously succeeded. If this issue is due to transient network interruptions, is there any feature or parameter in rclone to improve resiliency instead of restarting an upload/chunk from the beginning?

Thanks in advance for any help!

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.0

  • os/version: Microsoft Windows Server 2019 Standard 1809 (64 bit)
  • os/kernel: 10.0.17763.3650 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.23.4
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Storj with S3 backend with crypt and chunker

The command you were trying to run (eg rclone copy /tmp remote:tmp)

C:\\rclone\\rclone.exe" "-vv" "--s3-disable-http2" "--bwlimit" "10M" "--config" "c:\\rclone\\rclone.conf" "--log-file=c:\\rclone\\logs\\server.txt" "copy" "D:\\Backups\\app\\server\\app - serverD2025-02-14T190042_7A79.vbk" "chunk:server/"

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[chunk]
type = chunker
remote = cstorj:
chunk_size = 300Gi

[cstorj]
type = crypt
remote = storj:
password = XXX
password2 = XXX

[storj]
type = s3
provider = Storj
access_key_id = XXX
secret_access_key = XXX
endpoint = gateway.storjshare.io

A log from the command that you were trying to run with the -vv flag

2025/02/15 16:34:57 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1490 with 67108864 bytes and etag "3b9fb3a9408326f38f66a31ed351731c"
2025/02/15 16:34:57 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1493 size 64Mi offset 93.312Gi/94.235Gi
2025/02/15 16:34:58 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1491 with 67108864 bytes and etag "a46969ce4e875ebf6e45993b4657d7e1"
2025/02/15 16:34:58 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1494 size 64Mi offset 93.375Gi/94.235Gi
2025/02/15 16:35:00 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1492 with 67108864 bytes and etag "b2506b4bbe8f274e5dc7b66203ef659a"
2025/02/15 16:35:00 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1495 size 64Mi offset 93.438Gi/94.235Gi
2025/02/15 16:35:14 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1493 with 67108864 bytes and etag "8c0f98177e91118e86b8c13dcf8a7098"
2025/02/15 16:35:14 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1496 size 64Mi offset 93.500Gi/94.235Gi
2025/02/15 16:35:22 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1494 with 67108864 bytes and etag "398145dd8ea95a567c7db10da5d5b931"
2025/02/15 16:35:23 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1497 size 64Mi offset 93.562Gi/94.235Gi
2025/02/15 16:35:23 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1495 with 67108864 bytes and etag "54ef0931b4650e5abcdd45b532bed3e6"
2025/02/15 16:35:24 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1498 size 64Mi offset 93.625Gi/94.235Gi
2025/02/15 16:35:26 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1496 with 67108864 bytes and etag "aca4d369eb266823ce4f4d4b8916e4b0"
2025/02/15 16:35:26 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1499 size 64Mi offset 93.688Gi/94.235Gi
2025/02/15 16:35:39 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1497 with 67108864 bytes and etag "4644c878b54992e02b0b0128aca6ae83"
2025/02/15 16:35:40 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1500 size 64Mi offset 93.750Gi/94.235Gi
2025/02/15 16:35:48 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1498 with 67108864 bytes and etag "d2f85065aefa8acaf8ad36d77044bc2b"
2025/02/15 16:35:49 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1501 size 64Mi offset 93.812Gi/94.235Gi
2025/02/15 16:35:49 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1499 with 67108864 bytes and etag "abba41bd42e8f719abbc499d4ab1291f"
2025/02/15 16:35:49 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1502 size 64Mi offset 93.875Gi/94.235Gi
2025/02/15 16:35:50 INFO  : 
Transferred:   	  693.922 GiB / 694.212 GiB, 100%, 9.986 MiB/s, ETA 29s
Transferred:            0 / 1, 0%
Elapsed time:  19h45m59.7s
Transferring:
 * app - server…-02-14T190042_7A79.vbk: 99% /694.212Gi, 9.984Mi/s, 29s

2025/02/15 16:35:51 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1500 with 67108864 bytes and etag "ebbbd54dff38cc60ef73d44f44d7a964"
2025/02/15 16:35:52 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1503 size 64Mi offset 93.938Gi/94.235Gi
2025/02/15 16:36:05 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1501 with 67108864 bytes and etag "f2e6a9c76d35f9ad42337f1c078f49d4"
2025/02/15 16:36:06 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1504 size 64Mi offset 94Gi/94.235Gi
2025/02/15 16:36:14 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1502 with 67108864 bytes and etag "153993db8213b2770f1e11067d2efbeb"
2025/02/15 16:36:14 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1505 size 64Mi offset 94.062Gi/94.235Gi
2025/02/15 16:36:14 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1503 with 67108864 bytes and etag "b0dec8c680213e006c876bdb5bc11616"
2025/02/15 16:36:15 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1506 size 64Mi offset 94.125Gi/94.235Gi
2025/02/15 16:36:17 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1504 with 67108864 bytes and etag "f95e36351104565b4d63037fc24bc8b9"
2025/02/15 16:36:17 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1507 size 48.307Mi offset 94.188Gi/94.235Gi
2025/02/15 16:36:31 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1505 with 67108864 bytes and etag "4edc69bf50b0109ef9608e791d366686"
2025/02/15 16:36:35 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1508 with 50653424 bytes and etag "0e6e0a4178d86c5d403e0e6490775fb4"
2025/02/15 16:36:37 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1506 with 67108864 bytes and etag "38b3368ae4514f65d40feb747b4dd6e3"
2025/02/15 16:36:37 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload wrote chunk 1507 with 67108864 bytes and etag "41eeb7274ec67c9fe6bd8aa4c0d0c0b5"
2025/02/15 16:36:38 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: multipart upload "7VMPziQRJCwMrdf6ZDRJBywzdqfo6q9zxwpMnk1DiREDhU1nJ46jHM1zKne4ZjVTZ9Uzw79sqsgsfMuY3yh3NS2ni9MgjsfYvKmh4QgbKVzTRe5zecdyhyYiAfqT1TN6VJuAzYtbKrsJU3xjE7sDZFUfWasTzi2rWypX1kyVb3ZjsjjiM1ejKeQ3qv7hkNzgBHuER1zc3RbGgP1qawwYGnUUSEvrnngxNXxCACyCr23aFtv7fkpGXm8dBZkanFs5ic8DbGcE2erf3MjJZkLEPyy6nD18QSwpsVoyMyTjdr7ztQz13WViKRshcJWtA3gAWHBG1NU7pBHe9KmKAto5e3ZPw63RLUEcfrYYQe9sF9CgVZUSVhCQCotsq1Ku8stKjX45G5ZFBHTMTzxgHjcoCbBFBdMFu8ZZQdmBNVmfFa4K" finished
2025/02/15 16:36:38 DEBUG : sjoko55ml2k7putc9t25q4i0uuucnd7hj3hbo5v7tbjf2mhj6mhtgc8svs3v4m75c8pcqnpu98mjpbamvjkgc62p4gn478qasbeqmvb29qt9vuk37fdkg4dl55mahp1ll9mh2007r8o8orlip591ak1e00: Multipart upload Etag: 2afeb5343c8c4f31a46d305c2bb592eb-1508 OK
2025/02/15 16:36:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 4.552 MiB/s, ETA 18h44m15s
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h46m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 4.552Mi/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:37:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 96.991 KiB/s, ETA 5w2d12h
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h47m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 96.991Ki/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

**2025/02/15 16:38:43 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182482C12249F5D7, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))**
2025/02/15 16:38:43 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2025/02/15 16:38:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 2.018 KiB/s, ETA 4y48w6d
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h48m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 2.018Ki/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:39:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 43 B/s, ETA 237y18w2h
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h49m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 43/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

**2025/02/15 16:40:49 DEBUG : pacer: low level retry 2/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182482DE63448A08, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))**
**2025/02/15 16:40:49 DEBUG : pacer: Rate limited, increasing sleep to 20ms**
**2025/02/15 16:40:49 DEBUG : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Received error: operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182482DE63448A08, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error) - low level retry 0/10**
2025/02/15 16:40:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h50m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:41:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h51m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:42:49 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182482FA515D732B, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:42:49 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2025/02/15 16:42:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h52m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:43:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h53m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:44:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h54m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:44:54 DEBUG : pacer: low level retry 2/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 18248317893ACB12, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:44:54 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2025/02/15 16:44:54 DEBUG : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Received error: operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 18248317893ACB12, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error) - low level retry 1/10
2025/02/15 16:45:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h55m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:46:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h56m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:47:01 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 1824833535E2931B, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:47:01 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2025/02/15 16:47:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h57m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:48:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h58m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:48:59 DEBUG : pacer: low level retry 2/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483508E37988C, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:48:59 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2025/02/15 16:48:59 DEBUG : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Received error: operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483508E37988C, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error) - low level retry 2/10
2025/02/15 16:49:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:  19h59m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:50:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:     20h59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:51:15 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 18248370553020BF, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:51:15 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2025/02/15 16:51:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h1m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:52:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h2m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:53:19 DEBUG : pacer: low level retry 2/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 1824838D2D078E19, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:53:19 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2025/02/15 16:53:19 DEBUG : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Received error: operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 1824838D2D078E19, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error) - low level retry 3/10
2025/02/15 16:53:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h3m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:54:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h4m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:55:19 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483A934AA93C7, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:55:19 DEBUG : pacer: Rate limited, increasing sleep to 2s
2025/02/15 16:55:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h5m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:56:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h6m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:57:24 DEBUG : pacer: low level retry 2/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483C6266F8EE5, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:57:24 DEBUG : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Received error: operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483C6266F8EE5, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error) - low level retry 4/10
2025/02/15 16:57:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h7m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:58:50 INFO  : 
Transferred:   	  694.381 GiB / 994.212 GiB, 70%, 0 B/s, ETA -
Checks:                 0 / 1, 0%
Transferred:            0 / 2, 0%
Elapsed time:   20h8m59.7s
Checking:

Transferring:
 * app - server…-02-14T190042_7A79.vbk:100% /694.212Gi, 0/s, -
 * app - server…clone_chunk.001_pk4ccf:  0% /300Gi, 0/s, -

2025/02/15 16:59:23 DEBUG : pacer: low level retry 1/2 (error operation error S3: CopyObject, exceeded maximum number of attempts, 10, https response error StatusCode: 500, RequestID: 182483E1B88C9FE2, HostID: , api error InternalError: We encountered an internal error, please try again.: cause(uplink: metaclient: internal error))
2025/02/15 16:59:26 DEBUG : pacer: Reducing sleep to 1.5s
2025/02/15 16:59:26 ERROR : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Failed to copy: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:26 ERROR : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Not deleting source as copy failed: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:26 DEBUG : pacer: Reducing sleep to 1.125s
2025/02/15 16:59:29 DEBUG : pacer: Reducing sleep to 843.75ms
2025/02/15 16:59:29 ERROR : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Failed to copy: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:29 ERROR : app - serverD2025-02-14T190042_7A79.vbk.rclone_chunk.001_pk4ccf: Not deleting source as copy failed: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:30 DEBUG : pacer: Reducing sleep to 632.8125ms
2025/02/15 16:59:30 DEBUG : pacer: Reducing sleep to 474.609375ms
2025/02/15 16:59:31 DEBUG : pacer: Reducing sleep to 355.957031ms
2025/02/15 16:59:31 ERROR : app - serverD2025-02-14T190042_7A79.vbk: Failed to copy: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:31 ERROR : Attempt 1/3 failed with 2 errors and: operation error S3: CopyObject, failed to get rate limit token, retry quota exceeded, 0 available, 5 requested
2025/02/15 16:59:31 DEBUG : pacer: Reducing sleep to 266.967773ms
2025/02/15 16:59:31 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: Need to transfer - File not found at Destination
2025/02/15 16:59:31 DEBUG : pacer: Reducing sleep to 200.225829ms
2025/02/15 16:59:31 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: skip slow MD5 on source file, hashing in-transit
2025/02/15 16:59:32 DEBUG : pacer: Reducing sleep to 150.169371ms
2025/02/15 16:59:32 DEBUG : pacer: Reducing sleep to 112.627028ms
2025/02/15 16:59:32 DEBUG : slfvsf0666h8srvo5flta3tjv7j3i989dh35173o525crlcrm3vo0mkskj5iaclb50ispf26cimb53ssqgb4ud32aqgaunovqdp7pjg9c4jjsu0gfc056da6f1nrpl9isgqt9b448pt6fapdeb44gdrn1g: open chunk writer: started multipart upload: 7VMPziQRJCwMrdf6ZDRJBywzdqfo6q9zxwpMnk1DiREDifR13Rjk5WuAJX3cJFHsDnyagcmdpjYpjFPXar5atLnxa9KWSmPhbDwoNyTBpF4Hk3GbiUS4jsKCGARvdUCDE9ZMBVf9BcqdjL3EVpRW5nRYB2wZf1zG7vo3k1RfgBUjbo2N7yAfdPdnk68sMpQixUGQN8mhHERmuQNGSsqTuKLUzTeKFcczdDYp81kMq7JfFXsNXmhSSbatmzNa45st8ays79HzfH262dNSf59oXgDEAs1ZNrnF9mRiLSEasZhPGeuYLYXfhff7h2PmXyrMzkysFbAg4TBFkUy5Z9wz6mKm6TG92s6cnFuVeKibQZoCJbhN4FMHJmJnX9ZQ7a33uwY9TBsrcGDbnFwDNRwVasDP7qfuXNkVHiwMunQyGE6K
2025/02/15 16:59:32 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 0 size 64Mi offset 0/300.073Gi
2025/02/15 16:59:33 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 1 size 64Mi offset 64Mi/300.073Gi
2025/02/15 16:59:33 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 2 size 64Mi offset 128Mi/300.073Gi
2025/02/15 16:59:34 DEBUG : app - serverD2025-02-14T190042_7A79.vbk: multipart upload: starting chunk 3 size 64Mi offset 192Mi/300.073Gi
2025/02/15 16:59:50 INFO  : 
Transferred:   	  694.552 GiB / 1.356 TiB, 50%, 6.619 MiB/s, ETA 1d5h49m
Checks:                 2 / 2, 100%
Transferred:            0 / 1, 0%
Elapsed time:   20h9m59.7s
Transferring:
 * app - server…-02-14T190042_7A79.vbk:  0% /694.212Gi, 9.480Mi/s, 20h49m30s

The full log (~20MB txt file) is at file.io/limewire.

welcome to the forum,

that is an error from the server, rclone is simply logging that event.
for a deeper look at the api calls, add --dump=headers --retries=1
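
for example, a one-off debug run could look something like this (your same copy command with the extra flags; the separate log file name is only an example):

"C:\rclone\rclone.exe" copy "D:\Backups\app\server\app - serverD2025-02-14T190042_7A79.vbk" chunk:server/ -vv --dump=headers --retries=1 --config c:\rclone\rclone.conf --log-file=c:\rclone\logs\server-debug.txt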

i have been uploading large .vib|.vbk files to s3 for 5+ years, never really had issues.


imho, for critical backup files, like veeam, the following cannot be trusted

  • chunker is alpha/beta.
  • crypt cannot verify file transfers using checksums.
    given that veeam backups can be encrypted themselves, not sure of the need for rclone crypt.

if the upload of a file fails, then the entire file has to be uploaded again.

cause(uplink: metaclient: internal error))
given the error appears to be server-side, not much rclone can do about it.
contact the service provider.

Thanks for the response and suggestions.

After a few more test runs, it seems like it's only failing to upload when using crypt with the ~1.5TB file. The ~700GB and smaller files don't have any issues.

The ~1.5TB file finally succeeded by not triggering multipart uploads: I set chunker's chunk_size = 4.75Gi and storj's S3 upload_cutoff = 5Gi. Rclone still did an md5 on each chunk, so everything seems okay for now. Thanks again!
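
For reference, the relevant config changes ended up roughly like this (same remotes as above; upload_cutoff could also be passed on the command line as --s3-upload-cutoff):

[chunk]
type = chunker
remote = cstorj:
chunk_size = 4.75Gi

[storj]
type = s3
provider = Storj
access_key_id = XXX
secret_access_key = XXX
endpoint = gateway.storjshare.io
upload_cutoff = 5Gi

With each 4.75Gi chunk below the 5Gi cutoff, every chunk goes up as a single PUT instead of a multipart upload.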