Unfortunately, Rclone can only work at the file level.
That's not really rclone's fault, but rather a limitation of the cloud storage systems, which themselves operate at the file level.
If you wanted something akin to block-level incremental backup, you would need software that writes the block-level changes out as files, which you could then upload normally. Several of these delta files could then be applied in reverse to "go back in time" in a backup without having to re-upload changed files in full each time. You'd trade the extra CPU time needed to reconstruct the backup for bandwidth savings.
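The core idea can be sketched with plain shell tools: split each generation of a file into fixed-size blocks, hash the blocks, and compare the hash lists; only blocks whose hashes differ would need to be uploaded. The file names and block size below are purely illustrative:

```shell
BLOCK=4096  # 4 KiB blocks, chosen only for illustration

# Fake two generations of a file that differ in a single block.
dd if=/dev/zero of=v1.img bs=$BLOCK count=8 2>/dev/null
cp v1.img v2.img
printf 'changed' | dd of=v2.img bs=$BLOCK seek=3 conv=notrunc 2>/dev/null

# Split a file into fixed-size blocks and print one hash per block.
hash_blocks() {
  split -b $BLOCK "$1" "$1.chunk."
  for c in "$1".chunk.*; do sha256sum "$c"; done | awk '{print $1}'
}

hash_blocks v1.img > v1.hashes
hash_blocks v2.img > v2.hashes

# Blocks whose hashes differ are the only ones that would need uploading.
paste v1.hashes v2.hashes | awk '$1 != $2 {n++} END {print n " changed block(s)"}'
# → prints: 1 changed block(s)
```

A real tool would of course use rolling/content-defined chunking and store the deltas properly, but this is the detection step in miniature.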
I would not be surprised if such functionality exists somewhere, but I also wouldn't expect it to be standard. You'd have to do a little research, I imagine.
And if you do find something like this - by all means do share it with the community as it would be useful to know. I would personally be interested in that too.
I've been using Duplicacy (CLI version) since last year and I'm very pleased with the performance, consistency and reliability.
It splits files into chunks and uploads only the modified pieces. Chunks can be configured to have fixed or variable sizes, and each type gives better results depending on the file type and use case. This setting is per storage.
It creates backups in such a way that each incremental backup is also a full snapshot (a "revision" in its nomenclature) that can be restored independently of the others.
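A rough sketch of that workflow (flags are from my reading of the Duplicacy CLI guide, so double-check them; the repository id, storage URL, and paths are made up):

```shell
# Initialize the repository against a storage backend (illustrative
# names). -c sets the average chunk size; per the Duplicacy docs,
# setting min/max equal to it gives fixed-size chunking instead of
# variable-size chunking.
cd /path/to/repository
duplicacy init -c 4M mybackup b2://my-bucket

# Each run creates a new revision, which is a full, independently
# restorable snapshot even though only changed chunks are uploaded.
duplicacy backup

# List revisions, then restore a specific one (e.g. revision 3)
# without needing any other revision.
duplicacy list
duplicacy restore -r 3
```

The point is that "incremental" in Duplicacy refers only to what gets uploaded; every revision behaves like a full backup at restore time.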
The setup is a little hard and the nomenclature is sometimes confusing, but it is excellent software; I don't know another with the same performance (and I tested several: Arq, Duplicati, restic, borg, and others).
I currently use Rclone to move/copy files and Duplicacy for backups.