Well, checks will only happen if there are already files by the same name as those you are moving there.
This is what happens under the hood:
(1) rclone makes a list of all the local files
(2) rclone asks the remote for a list of all the files in the destination folder you designated
(3) rclone compares the two lists. Are there any overlapping names in the same places? If so, each of those files has to be checked to decide whether to skip it, only update its attributes, or upload it again. This could apply to 0 files or to all of them, depending on what was already in the destination folder.
(4) rclone now knows what to do for each file - and it executes the plan in the most efficient way (for example, not overwriting a file with an identical copy, as that would be a waste of time and resources).
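The decision logic in steps (3) and (4) can be sketched in a few lines. This is only an illustration of the idea, not rclone's actual code, and the file names and `plan_sync` function are made up for the example; the comparison here uses size+modtime, as described below.

```python
def plan_sync(source, dest):
    """Sketch of the compare step: source/dest map file name -> (size, modtime).

    Returns a per-file plan: transfer, skip, or update.
    """
    plan = {}
    for name, attrs in source.items():
        if name not in dest:
            plan[name] = "transfer"   # no name collision: upload, no check needed
        elif dest[name] == attrs:
            plan[name] = "skip"       # identical size+modtime: nothing to do
        else:
            plan[name] = "update"     # same name, different attributes: re-upload
    return plan

local = {"a.txt": (100, 1), "b.txt": (200, 2)}
remote = {"a.txt": (100, 1)}
print(plan_sync(local, remote))  # -> {'a.txt': 'skip', 'b.txt': 'transfer'}
```

Only the overlapping names ("checks") cost anything; everything else is either a plain transfer or nothing at all.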
Try this experiment:
- First upload some files to a new, empty folder. There should be 0 checks but several transfers
- Then repeat the exact same transfer. This time there will be several checks but no transfers (all the files are already there, so rclone skips them all after comparing). You will notice this second run finishes very fast...
As you might imagine, this means you can cancel a transfer at any time and just run the same command again: rclone will figure it out and resume where it left off. You never need to wait for a whole operation to finish if you don't want to. Nothing will break, and very little progress will be lost.
rclone checks stuff like this (size and modtime) automatically. It won't count a file as transferred until it is sure it has arrived healthy and whole. It will also usually check the checksum (which is the most accurate method) if that is a "free" operation, i.e. if both the source and destination have precalculated hashes (your local filesystem almost certainly does not).
You can force a checksum check by adding the flag --checksum. Your local system can then use the CPU to calculate hashes on the fly, even though its filesystem does not store checksum data. Doing this always requires every file to be read in full, so it may make the hard drive work a bit harder.
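Calculating a hash "on the fly" just means reading the file through a hash function, which is why the whole file has to come off the disk. A minimal sketch of that, using Python's standard hashlib (MD5 here as an example hash; which hash rclone actually compares depends on the remote):

```python
import hashlib

def file_md5(path, chunk_size=1 << 20):
    """Hash a file incrementally, 1 MB at a time.

    The whole file must be read from disk - that is the extra work
    --checksum adds on a filesystem with no stored hashes.
    """
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting digest can then be compared against the hash the remote already has stored for the file.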
Feel free to use --checksum if you are a little paranoid about it, but it is not really necessary. First of all, size+modtime is already pretty accurate. Secondly, there are several layers of error protection at work at the transport layer (TCP) and the protocol layer (HTTP). It will be very rare for all of these to miss an error.
rclone does this by default. If errors happen, there are multiple layers of safety that can detect them, and when that happens rclone will retransmit the data. How much progress you lose and have to redo depends on the upload chunk size. Unless you are on a very unstable connection, this is not something you have to worry about or adjust, because transmission errors of one sort or another happen very infrequently.
Somewhat unrelated sidenote:
If you want speed, however, I certainly recommend upping the upload chunk size from 8M to 64M, as this can give you a pretty massive boost in bandwidth utilization (for files larger than 8MB):
--drive-chunk-size 64M (can alternatively be set in the config using a slightly different format if you prefer)
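For the config-file route: backend flags like --drive-chunk-size generally correspond to a key in the remote's section of rclone.conf, with the backend prefix dropped. Assuming a Google Drive remote named `gdrive` (the name is just an example), that would look roughly like:

```ini
[gdrive]
type = drive
chunk_size = 64M
```

Either way has the same effect; the config file just makes it permanent.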
Do be aware that 64M chunks mean rclone can potentially use (64MB x numberOfTransfers) of RAM, though. For example, 256MB with 4 transfers. Just make sure you don't run out of RAM, or rclone will crash.
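The worst-case memory math above is simply chunk size times parallel transfers, since each in-flight transfer can buffer one full chunk. A quick sanity-check helper (the function name is made up for the example):

```python
def peak_upload_ram_mb(chunk_mb, transfers):
    """Rough worst case: every in-flight transfer buffers one full upload chunk."""
    return chunk_mb * transfers

# 64M chunks with rclone's default of 4 parallel transfers:
print(peak_upload_ram_mb(64, 4))  # -> 256 (MB)
```

Run the same math against your own --transfers setting before raising the chunk size on a low-RAM machine.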