How big is too big with chunker and hashing?

So I have an RPi4 that shares my rclone mount, and overall it works great using a mergerfs setup to push files up to Google Drive. Most of the files I would expect "real time" access to are under 2GB, and for those it works great as is. However, I just went to back up a dd image of a new laptop, and the cloud provider times out before my connection can finish the upload (it takes over two weeks and then fails with an uninformative message).

Adding chunker looks like it might be a solution, and I like the idea of getting hashing along with it, but I'm curious what chunk size would be reasonable. I see the default is 2GB; I was thinking more along the lines of 60G or 150G, but I'm afraid hashing chunks that large may create timing issues. Stuff over 60G is mostly long-term image archiving and would just be downloaded locally over a week or so if it ever needed to be restored.

I guess my specific question is: how large a chunk size, with hashing enabled, can I reasonably set before it becomes unstable/unreliable?

It probably depends on the provider, but I'd tend to go with smaller chunks rather than larger.
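For reference, a chunker remote layered on top of an existing drive remote looks roughly like this in rclone.conf. This is just a minimal sketch: the remote name `gdrive`, the `backups` path, and the specific `chunk_size`/`hash_type` values are placeholders for your own setup, not Google Drive-specific recommendations.

```ini
# Hypothetical sketch - "gdrive" stands in for your existing Google Drive
# remote and "backups" for whatever path you upload to.
[gdrive-chunked]
type = chunker
remote = gdrive:backups
# Files are split into pieces no larger than this before upload;
# the default is 2Gi. Smaller values mean more, but shorter, uploads per file.
chunk_size = 10G
# Store a hash for the whole composite file; md5 and sha1 are supported,
# plus the *all/*quick variants described in the chunker docs.
hash_type = sha1
```

You'd then point your mount or copy commands at `gdrive-chunked:` instead of `gdrive:` so the splitting and hashing happen transparently.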
