B2 backend dynamically increasing chunk size?

I've been running into a scenario where I'm hitting B2's 10,000-chunks-per-file limit, and while researching it and manually tweaking my chunk size, I noticed that the s3 backend documentation says "Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit."

Does the b2 backend have that feature? If not, does anyone think it could be useful? I have scenarios with tens of thousands of small files but also a few 1 TB+ files, and dynamically adjusting the chunk size to stay within the 10,000-chunk limit would eliminate some of the manual tweaking I do.
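For context, the behaviour the s3 docs describe boils down to a simple calculation: keep doubling the chunk size until the file fits in at most 10,000 parts. Here's a minimal Go sketch of that idea (the 96 MiB default and the helper name are my assumptions, not rclone's actual code):

```go
package main

import "fmt"

const (
	maxChunks        = 10000    // B2's limit on parts per large file
	defaultChunkSize = 96 << 20 // assumed default --b2-chunk-size (96 MiB)
)

// chunkSizeFor doubles the chunk size until a file of the given size
// fits within maxChunks parts. Hypothetical sketch, not rclone code.
func chunkSizeFor(fileSize int64) int64 {
	chunk := int64(defaultChunkSize)
	for fileSize/chunk >= maxChunks {
		chunk *= 2
	}
	return chunk
}

func main() {
	oneTiB := int64(1) << 40
	fmt.Printf("1 TiB file -> %d MiB chunks\n", chunkSizeFor(oneTiB)>>20)
	// 1 TiB / 96 MiB ≈ 10923 parts, which is over the limit, so one
	// doubling gives 192 MiB chunks (≈ 5461 parts).
}
```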

Thanks!

I don't see equivalent code in the b2.go file on GitHub. You should submit this as a feature request here: https://github.com/rclone/rclone/issues

No, it doesn't.

The main disadvantage would be rclone using more memory, since it buffers the chunks in RAM.
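To put a rough number on that, here's my back-of-the-envelope estimate, under the worst-case assumption that every in-flight chunk is fully buffered (the flag defaults shown are assumptions):

```go
package main

import "fmt"

func main() {
	// Worst-case buffer estimate, assuming every in-flight chunk
	// sits fully in RAM (hypothetical model, not measured).
	transfers := int64(4)  // --transfers default
	concurrent := int64(4) // --b2-upload-concurrency (assumed)
	chunkMiB := int64(192) // auto-scaled chunk size for a 1 TiB file
	fmt.Printf("~%d MiB of chunk buffers\n", transfers*concurrent*chunkMiB)
	// 4 * 4 * 192 = 3072 MiB, i.e. about 3 GiB, versus roughly
	// 1.5 GiB at the 96 MiB default.
}
```

So auto-scaling would trade extra RAM for staying under the part limit.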

It is probably a good idea though.


I posted it as a feature request on GitHub.

