This option was introduced roughly 3 years ago, so I'm curious to know how well it's been tested at this point. I see that there are still open action items left regarding copying/moving, but since the maintainer is no longer active, I don't really expect them to be implemented anytime soon.
I'm mainly looking to decrease the time it takes to upload new chunked files to object storage, so the current limitations to copy/move aren't a huge issue for me. My only concern is data integrity as I would be planning to use this for long-term backups.
I use chunker extensively and all works well so far. IMO the "experimental" label should be removed.
But always exercise limited trust and validate. I always run:
rclone check --download
for any data I care about. Also, as chunker does not support resume operations (actually it is broader than that: rclone itself does not support resume), you might sometimes end up with orphaned chunks. Have a look at my post here on how to manage it:
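For reference, a chunker remote wrapping an S3 remote looks roughly like this in rclone.conf. This is a sketch: the remote names, bucket, chunk size, and hash choice below are illustrative assumptions, not something taken from this thread.

```
# hypothetical rclone.conf fragment: chunker layered on top of an S3 remote
[s3base]
type = s3
provider = AWS
env_auth = true

[chunked]
type = chunker
# point at a path inside the wrapped S3 remote (bucket name is an example)
remote = s3base:my-backup-bucket
# files larger than this get split into chunks of this size
chunk_size = 100M
# store a hash so "rclone check" can verify composite files
hash_type = sha1
```

With something like that in place, `rclone check --download chunked:path /local/path` re-downloads and compares the data end to end, which is the validation step mentioned above.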
And why use chunker with S3? Don't you think you are over-engineering something simple?
The only reason to use chunker today is to bypass some cloud storage providers' limits on how big a stored file can be. Otherwise you introduce an additional point of failure without any benefit.
It is your data, so you can do what you want. But since you said you care about your data long term, I can only say: keep it as simple as possible. Otherwise you are only asking for problems.