I've been running into a scenario where I'm hitting B2's 10,000-chunk-per-file limit, and while researching it and manually tweaking my chunk size, I noticed that the S3 backend documentation says "Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit."
Does the b2 backend have that feature? If not, does anyone else think it could be useful? I have scenarios with tens of thousands of small files alongside a few 1 TB+ files, and having the chunk size adjusted dynamically to stay under the 10,000-chunk limit would eliminate some of the manual tweaking I do now.
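For context, here's the quick calculation I use when tweaking the chunk size by hand. This is just my own sketch (the `MAX_CHUNKS` constant and the round-up-to-whole-MiB step are my assumptions about what to pass to `--b2-chunk-size`), not anything rclone does internally:

```python
import math

MAX_CHUNKS = 10_000  # B2's per-file chunk limit

def min_chunk_size_mib(file_size_bytes: int) -> int:
    """Smallest whole-MiB chunk size keeping the upload at or under MAX_CHUNKS chunks."""
    per_chunk = math.ceil(file_size_bytes / MAX_CHUNKS)
    return math.ceil(per_chunk / (1024 * 1024))

# A 1 TiB file needs chunks of at least 105 MiB
print(min_chunk_size_mib(1 * 1024**4))  # → 105
```

So for my 1 TB+ files I end up setting something like `--b2-chunk-size 105M`, which is well above the default.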