Questions to save my bandwidth

1st question:
I have a question about file transfers.
For now I use the option `--retries 1`, but I hope there is some room to optimize this ^^
I have a folder containing files that are being written to; one of them weighs 200 GB!
This means the file gets sent, but the transfer is never validated in the end, so the bandwidth is wasted.
Unfortunately its name changes, and it's not the only file of this size.
In my case I cannot filter on size or on name, only on the fact that the file is being modified. A regular check on the size, or a simple checksum, would allow the modification to be detected during the send and the transfer to be stopped. Is that possible?

2nd question:
I need to merge 2 folders (local Linux => Google Drive), knowing that some files are identical (same hash) but their names may differ. Is it possible to avoid a transfer when the file is already present (by hash)?

I use

```
rclone v1.50.2
os/arch: linux/amd64
go version: go1.13.4
```

If a file is being modified, it should not be transferred, as rclone would detect that it's being written to and error out on that file.

You can check by running the command with `-vv` and looking at the output.

rclone uses a few methods. Google Drive supports a checksum (on a non-encrypted remote), so rclone would use that to compare, along with file size and modification time. You have many options for comparing files: you can use just the size, just the time, or just the checksum.
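For illustration, here is how those comparison modes look on the command line (the paths and remote name are placeholders, not from this thread):

```shell
# Default: compare size + modification time (plus checksum where the remote supports it)
rclone sync /local/folder remote:folder

# Force checksum comparison (needs a hash type common to both sides,
# so it will not work against a crypt remote)
rclone sync --checksum /local/folder remote:folder

# Compare on size only (fastest, least strict)
rclone sync --size-only /local/folder remote:folder
```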


Thank you.
Actually, my destination on Google Drive is encrypted.
Here is an example of my output; rclone realizes at the end of the transfer that the size differs:

```
2019/12/12 03:51:05 INFO:
Transferred:   437.378G / 438.653 GBytes, 100%, 43.655 MBytes/s, ETA 29s
Errors:        4 (retrying may help)
Checks:        318 / 318, 100%
Transferred:   31 / 32, 97%
Elapsed time:  2h50m59.4s
 * Cloud Nas/Snapshots/{8…a5e6-e50ecdc63f0b}.vdi: 99% /195.377G, 23.920M/s, 54s

2019/12/12 03:52:03 INFO: Cloud Nas/Snapshots/{8d276f4e-f3dc-4bbc-a5e6-e50ecdc63f0b}.vdi: Copied (replaced existing)
2019/12/12 03:52:03 ERROR: Encrypted drive 'GGD1_Crypt:/xxx/VMs': not deleting files as there were IO errors
2019/12/12 03:52:03 ERROR: Encrypted drive 'GGD1_Crypt:/xxx/VMs': not deleting directories as there were IO errors
2019/12/12 03:52:03 ERROR: Attempt 1/1 failed with 5 errors and: corrupted on transfer: sizes differ 2166446 vs 2189216
2019/12/12 03:52:03 Failed to sync with 5 errors: corrupted on transfer: sizes differ 2166446 vs 2189216
```

Regarding the merge, the destination is also encrypted on Google Drive...

So there is no solution?

Can you include a debug with "-vv" on it?

I'll run the job tonight; I'll give you the debug output tomorrow.

So merging the 2 folders by hash into an encrypted folder is not possible?

I sent you my log by PM

Why do you have `--retries=1` set? Something seems to be writing to the file, but since rclone only tries once, it fails and moves on. You should just leave that flag out and use the default.
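As a minimal sketch of that suggestion (placeholder paths), dropping the flag restores the default of 3 retries:

```shell
# Forcing a single attempt: any transient failure is final
rclone sync /local/folder remote:folder --retries 1 -vv

# Default behaviour (equivalent to --retries 3): the sync is retried on failure
rclone sync /local/folder remote:folder -vv
```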

It looks like something is writing to it and it's changing.

Yes indeed, the `--retries 1` is a mistake.
Is it possible to detect the modification during the transfer, in order to cancel it and avoid sending the full 200 GB?

It can also be something just getting corrupted in the transfer. A retry would correct the upload.

What I am trying to do is a verification of size / modification date / checksum during the transfer, to avoid continuing it when the result will necessarily differ from the source...
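For the second question in this thread, the closest built-in mechanism is `--track-renames`, which matches files by size and hash so a renamed-but-identical file is moved server-side instead of re-uploaded. It requires a hash type common to source and destination, which a crypt remote does not provide, so this is only a sketch for the unencrypted case (placeholder paths):

```shell
# Detect renamed-but-identical files by size + hash and move them server-side
# instead of re-transferring (will not work against a crypt remote: no common hash)
rclone sync /local/folder remote:folder --track-renames -vv
```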