Does rclone support incremental backup based on chunks?

rclone -V
rclone v1.49.1

  • os/arch: windows/amd64
  • go version: go1.12.3

Windows 2016, 64 bit

Google Drive

I have a file blank.txt of size 15 GB.
When I first run rclone, it syncs the whole file:

rclone --verbose --progress sync "e:\backups\" gdrive:backup
2019-10-09 13:55:09 INFO  : Google drive root 'backup': Waiting for checks to finish
2019-10-09 13:55:09 INFO  : Google drive root 'backup': Waiting for transfers to finish
2019-10-09 14:48:17 INFO  : blank.txt: Copied (new)
2019-10-09 14:48:17 INFO  : Waiting for deletions to finish
Transferred:       13.970G / 13.970 GBytes, 100%, 4.486 MBytes/s, ETA 0s
Errors:                 0
Checks:                32 / 32, 100%
Transferred:            1 / 1, 100%
Elapsed time:     53m8.7s
2019/10/09 14:48:17 INFO  : Transferred:       13.970G / 13.970 GBytes, 100%, 4.486 MBytes/s, ETA 0s
Errors:                 0
Checks:                32 / 32, 100%
Transferred:            1 / 1, 100%
Elapsed time:     53m8.7s
ls
09.10.2019     13:54    15000000000 blank.txt

Then I append some bytes to the end of the file:

echo 1 >> .\blank.txt
ls
09.10.2019     16:35    15000000006 blank.txt

If I run rclone a second time, it syncs the whole file again:

rclone --verbose --progress sync "e:\backups\" gdrive:backup
2019-10-09 16:39:09 INFO  : Google drive root 'backup': Waiting for checks to finish
2019-10-09 16:39:09 INFO  : Google drive root 'backup': Waiting for transfers to finish
Transferred:        6.086G / 13.970 GBytes, 44%, 4.493 MBytes/s, ETA 29m56s
Errors:                 0
Checks:                33 / 33, 100%
Transferred:            0 / 1, 0%
Elapsed time:       23m7s
Transferring:
 *                                     blank.txt: 43% /13.970G, 4.671M/s, 28m48s

Can I send only the changed chunks?

Nope, you cannot.

Rclone can only work at the file level, unfortunately.
That's not really rclone's fault, but more of a limitation of the cloud systems, as they themselves work at the file level.

If you wanted something akin to block-level incremental backup, you would need some kind of software that can render the block-level changes into a file - which you could then upload normally. Several of these delta files could then be applied in reverse to "go back in time" in a backup without needing to fully re-upload changed files each time. You'd trade the extra CPU time needed to recreate the backup for bandwidth savings.
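Just to sketch the idea (this is a toy illustration, not any particular tool - the block size and function names are made up): split each version of the file into fixed-size blocks, hash them, and compare the hashes to find which blocks would need uploading.

```python
import hashlib

def block_hashes(data: bytes, block_size: int = 4 * 1024 * 1024) -> list:
    """Split data into fixed-size blocks and hash each one.
    4 MiB is an arbitrary example block size."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

def changed_blocks(old: bytes, new: bytes,
                   block_size: int = 4 * 1024 * 1024) -> list:
    """Return indices of blocks that differ (or are new) between versions."""
    old_h = block_hashes(old, block_size)
    new_h = block_hashes(new, block_size)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]
```

With a scheme like this, appending 6 bytes to a 15 GB file changes only the final block, so only that block would need uploading. An insertion near the start, on the other hand, shifts every block boundary, which is one reason variable-size chunking exists.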

I would not be surprised if such functionality exists somewhere, but I also wouldn't expect it to be standard. You would have to do a little research I imagine.

And if you do find something like this - by all means do share it with the community as it would be useful to know. I would personally be interested in that too.

I've been using Duplicacy (CLI version) since last year and I'm very pleased with the performance, consistency and reliability.

It splits files into chunks and sends only the modified "pieces". Chunks can be configured to have fixed or variable sizes, with each type giving better results depending on the file type and use case. This setting is per storage.
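To illustrate the fixed vs. variable distinction (a toy sketch only - Duplicacy uses its own rolling hash and chunk-size bounds, and the window and mask values here are arbitrary): content-defined chunking places a boundary wherever the content of a small sliding window meets a condition, so boundaries re-align after an insertion instead of all shifting like fixed-size blocks do.

```python
import hashlib

def cdc_chunks(data: bytes, window: int = 8, mask: int = 0x1F) -> list:
    """Toy content-defined chunking: cut a chunk wherever the hash
    of the last `window` bytes has its low bits all set. The decision
    depends only on local content, not on absolute position."""
    chunks, start = [], 0
    for i in range(window, len(data) + 1):
        w = data[i - window:i]
        h = int.from_bytes(hashlib.sha256(w).digest()[:4], "big")
        if (h & mask) == mask and i > start:
            chunks.append(data[start:i])
            start = i
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks
```

Because each boundary depends only on the bytes inside the window, appending data leaves all earlier chunk boundaries (and hence their hashes) untouched, and an insertion disturbs only the chunks around the edit.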

It creates backups in such a way that each incremental backup is also a full snapshot (a "revision" in its nomenclature) that can be restored independently of the others.
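Conceptually it works something like this (a minimal sketch in the same spirit, not Duplicacy's actual format): if every revision is just a list of chunk hashes and the chunks themselves are stored deduplicated by hash, then each revision restores on its own, while unchanged chunks are stored and uploaded only once.

```python
import hashlib

class ChunkStore:
    """Toy content-addressed store: chunks are keyed by hash,
    and a revision is just an ordered list of chunk hashes."""

    def __init__(self):
        self.chunks = {}     # hash -> chunk bytes
        self.revisions = []  # each revision: list of hashes

    def backup(self, chunks) -> int:
        rev = []
        for c in chunks:
            h = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(h, c)  # "upload" only if unseen
            rev.append(h)
        self.revisions.append(rev)
        return len(self.revisions) - 1

    def restore(self, rev_id: int) -> bytes:
        return b"".join(self.chunks[h] for h in self.revisions[rev_id])
```

Deleting an old revision just drops its hash list; any chunk no longer referenced by some revision can then be garbage-collected.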


The setup is a little hard and the nomenclature is sometimes confusing, but it is excellent software; I don't know of another with the same performance (and I have tested several: Arq, Duplicati, restic, borg, and others).

I currently use rclone to move/copy files and Duplicacy for backups.

www.duplicacy.com

Yes, this method is not quite what I was describing, but chunking would certainly be an alternative way to do it.