Are chunker norename transactions still considered experimental?

This option was introduced roughly 3 years ago, so I'm curious how well it's been tested at this point. I see that there are still open action items regarding copying/moving, but since the maintainer is no longer active, I don't expect them to be implemented anytime soon.

I'm mainly looking to decrease the time it takes to upload new chunked files to object storage, so the current limitations on copy/move aren't a huge issue for me. My only concern is data integrity, as I plan to use this for long-term backups.
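For reference, this is roughly the setup I mean: a chunker remote layered over S3 with the norename transaction mode enabled (remote names and sizes here are just placeholders):

```ini
[s3]
type = s3

[chunker]
type = chunker
remote = s3:
chunk_size = 8G
# upload chunks under their final names instead of
# renaming them server-side after the upload completes
transactions = norename
```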

I use chunker extensively and everything works well so far. IMO the "experimental" label should be removed.

But always exercise limited trust and validate. I always run:

rclone check --download

for any data I care about. Also, as chunker does not support resuming interrupted transfers (actually it is more than that: rclone itself does not support resume), you might sometimes end up with orphaned chunks. Have a look at my post here on how to manage them:
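To make that concrete (paths and remote name are made up), a full verification run looks something like:

```shell
# Re-download every object and compare it byte-for-byte with the
# local source. Slower than a size/hash check, but it actually
# verifies that the reassembled chunked files are intact.
rclone check /data/backups chunker:backups --download
```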

BTW - why do you want to use norename transactions? What is your remote?

S3. The rename basically takes as long as the actual upload.
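For context, S3 has no rename primitive at all: a rename is implemented as a server-side copy to the new key followed by a delete of the old one, which for multi-gigabyte chunks is far from free. Conceptually (AWS CLI shown purely as illustration, bucket and keys made up):

```shell
# what a "rename" amounts to on S3: copy, then delete
aws s3 cp s3://my-bucket/chunk.tmp s3://my-bucket/chunk.final
aws s3 rm s3://my-bucket/chunk.tmp
```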

And why use chunker with S3? Don't you think you are over-engineering something simple?

The only reason to use chunker today is to work around cloud storage limits on maximum file size. Otherwise you introduce an additional point of failure without any benefit.

It is your data, so you can do what you want. But as you said you care about your data long term, I can only say: keep it as simple as possible. Otherwise you are only asking for problems.

I am dealing with files larger than 5TiB.

For anyone reading this, the following (unreleased) change helps alleviate some of the problem without having to use norename transactions:

Before:

[s3]
chunk_size = 64M
upload_concurrency = 8

[chunker]
remote = s3:
chunk_size = 8G

---------------------------

$ dd if=/dev/urandom iflag=fullblock of=24GB.bin bs=64M count=384 status=progress
25769803776 bytes (26 GB, 24 GiB) copied, 284 s, 90.8 MB/s
384+0 records in
384+0 records out
25769803776 bytes (26 GB, 24 GiB) copied, 580.81 s, 44.4 MB/s

After:

[s3]
chunk_size = 64M
upload_concurrency = 8
copy_cutoff = 1G

[chunker]
remote = s3:
chunk_size = 8G

---------------------------

$ dd if=/dev/urandom iflag=fullblock of=24GB.bin bs=64M count=384 status=progress
25702694912 bytes (26 GB, 24 GiB) copied, 285 s, 90.1 MB/s
384+0 records in
384+0 records out
25769803776 bytes (26 GB, 24 GiB) copied, 329.35 s, 78.2 MB/s
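My reading of why this helps (hedged, since I have not traced the code): the unreleased change lets the server-side multipart copy behind the rename step use upload_concurrency, and copy_cutoff sets the part size of that copy. With copy_cutoff = 1G, each 8G chunk is copied as 8 parts, which lines up exactly with upload_concurrency = 8, so all the copy parts can run in parallel. Back-of-the-envelope:

```shell
# sizes in MiB (assumed from the config above, not measured)
chunk_size=$((8 * 1024))    # chunker chunk_size = 8G
copy_cutoff=$((1 * 1024))   # s3 copy_cutoff = 1G
echo "copy parts per chunk: $((chunk_size / copy_cutoff))"
# prints "copy parts per chunk: 8"
```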