Rclone - Backblaze B2 Cloud Storage issues

While uploading to B2 Cloud Storage, I am getting a lot of errors, and as a result the upload time increases.

2019/10/02 13:11:11 DEBUG : pacer: low level retry 1/10 (error incident id 62096cb3345345-6287c55e260a (500 internal_error))

2019/10/02 13:11:16 DEBUG : 2019-10-02/sample.com: Error sending chunk 256 (retry=true): incident id 62096cb3345345-50235b5e7601 (500 internal_error): &api.Error{Status:500, Code:"internal_error", Message:"incident id 62096cb3345345-50235b5e7601"}
2019/10/02 13:11:16 DEBUG : 2019-10-02/sample.com: Clearing part upload URL because of error: incident id 62096cb3345345-50235b5e7601 (500 internal_error)
2019/10/02 13:11:16 DEBUG : pacer: low level retry 1/10 (error incident id 62096cb3345345-50235b5e7601 (500 internal_error))
2019/10/02 13:11:16 DEBUG : 2019-10-02/sample.com: Error sending chunk 277 (retry=true): incident id 62096cb3345345-8eb36d282213 (500 internal_error): &api.Error{Status:500, Code:"internal_error", Message:"incident id 62096cb3345345-8eb36d282213"}
2019/10/02 13:11:16 DEBUG : 2019-10-02/sample.com: Clearing part upload URL because of error: incident id 62096cb3345345-8eb36d282213 (500 internal_error)
2019/10/02 13:11:16 DEBUG : pacer: low level retry 1/10 (error incident id 62096cb3345345-8eb36d282213 (500 internal_error))
2019/10/02 13:15:13 DEBUG : pacer: Reducing sleep to 2m30s
2019/10/02 13:20:12 DEBUG : 2019-10-02/sample.com: Sending chunk 270 length 100663296
2019/10/02 13:22:42 DEBUG : 2019-10-02/sample.com: Sending chunk 70 length 100663296
2019/10/02 13:25:12 DEBUG : 2019-10-02/sample.com: Sending chunk 221 length 100663296
2019/10/02 13:27:42 DEBUG : 2019-10-02/sample.com: Sending chunk 239 length 100663296
2019/10/02 13:30:12 DEBUG : 2019-10-02/sample.com: Sending chunk 242 length 100663296
2019/10/02 13:32:42 DEBUG : 2019-10-02/sample.com: Sending chunk 266 length 100663296
2019/10/02 13:35:12 DEBUG : 2019-10-02/sample.com: Sending chunk 284 length 100663296

Which rclone version are you using? Have you tried the latest release?

rclone v1.47.0

  • os/arch: linux/amd64
  • go version: go1.11.5

It might be worth giving the latest release a try, though I think these sorts of errors are caused by Backblaze. The B2 protocol is designed so that servers can fail and rclone will pick up the upload with a different server, which is what happened here - that is what the "Clearing part upload URL" lines in your log show: rclone drops the failed upload URL and fetches a fresh one before retrying the chunk. So in some ways I think it is normal operation for B2.

Thank you very much for your efforts and such a nice tool. I updated the version and performance seems better.


Also, if you could assist me: I handle daily backups with files of more than 400GB to upload. I am using your tool with the options below:
--b2-upload-cutoff=400M
--transfers 100
--b2-disable-checksum
--fast-list
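
Put together, the full command I run is along these lines (source path and bucket name are anonymised placeholders, not the real ones):

rclone copy /backups/2019-10-02 b2:mybucket/2019-10-02 --b2-upload-cutoff=400M --transfers 100 --b2-disable-checksum --fast-list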

The server uploading the files is high-performance, reaching speeds of 950Mbps, and we can handle the CPU and memory load. What could I do to reduce the total upload time? Thank you once again.

What sort of speeds are you getting at the moment? If you run rclone with -v you'll get stats printed every minute and at the end of the transfer.
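
For example, something like this (source and bucket are placeholders):

rclone copy /path/to/backups b2:bucket -v

The stats interval can be changed with --stats, e.g. --stats 30s.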

What sort of files are you uploading - a small number of big files or a big number of small files?

Can you try: https://www.backblaze.com/speedtest/ from the server?

We are uploading a small number of big files and we are getting the top of our network speed, but I would like to know whether I am misusing any of your tool's options.

I think these options look fine. You can probably achieve max speed with fewer transfers, say 32 or 16.

Personally I wouldn't use --b2-disable-checksum as I like having checksums for my data, but it does speed things up, since otherwise the checksum needs to be calculated in advance of the upload.
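
Putting those two suggestions together, your command might become something like this (source and bucket are placeholders):

rclone copy /backups b2:bucket --b2-upload-cutoff=400M --transfers 16 --fast-list

i.e. --transfers reduced and --b2-disable-checksum dropped.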


Hi, a new issue came up after months of usage.
Rclone log:
ERROR : Attempt 3/3 failed with 3 errors and: "2019-11-24/web.tar" too big (1059532666880 bytes) makes too many parts 10526 > 10000 - increase --b2-chunk-size

Rclone cmd: --b2-upload-cutoff=400M --transfers 16 --b2-disable-checksum -vv --progress --fast-list

Can you please assist? Thank you!

Large files are uploaded in chunks with B2.

There can be at most 10,000 chunks, and the default chunk size is 96M (100,663,296 bytes - the chunk length you can see on the "Sending chunk" lines in your earlier log).

To upload a file that big (about 1 TB) you will need to increase --b2-chunk-size, say to --b2-chunk-size 200M.
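
You can see where the numbers in the error come from:

1,059,532,666,880 bytes / 100,663,296 bytes per chunk ≈ 10,525.5, which rounds up to the 10,526 parts in the message - over the 10,000 limit.

The smallest chunk size that fits is 1,059,532,666,880 / 10,000 ≈ 105,953,267 bytes (about 101 MiB), so anything from 102M upwards would do; 200M gives about 5,053 parts and leaves headroom for even bigger files.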

Note that chunks are buffered in memory, so you don't want to make the chunk size too big.
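
So your command would become something like:

--b2-upload-cutoff=400M --b2-chunk-size 200M --transfers 16 --b2-disable-checksum -vv --progress --fast-list

Bear in mind that with 200M chunks and 16 transfers the buffer memory adds up, so keep an eye on memory use.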
