Copy local file to google drive then delete

So reading the docs, am I correct in assuming that rclone move won't work to move a file from a local computer to a remote destination like Google Drive?

Is there a delete-source-after-copy flag that can be used with rclone copy?

sure, you can move a file from local to remote.

if you are new to rclone, be careful.
do testing with flag --dry-run and read the logs.

and read this

      --delete-after                         When synchronizing, delete files on destination after transferring (default)
      --delete-before                        When synchronizing, delete files on destination before transferring
      --delete-during                        When synchronizing, delete files during transfer

It sounds like you just want to use rclone move

rclone move will..

  • copy files from X to Y
  • delete files from X after it has verified they have arrived safely on Y
    (using default settings that is... when the files are deleted can be changed but by default it is "delete after").

If for whatever reason something fails in the copy and the transfer cannot be verified, then the files on X will not be deleted (the log or output will indicate this).
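
To make that verify-then-delete behavior concrete, here is a minimal local sketch of the same logic in plain shell. This only emulates what rclone move does per file; it is not rclone itself, and the directory and file names are made up:

```shell
#!/bin/sh
# Sketch of rclone move's per-file logic, emulated locally:
# copy, compare checksums, and delete the source only on a match.
mkdir -p srcdir dstdir
printf 'hello\n' > srcdir/file.txt

cp srcdir/file.txt dstdir/file.txt                  # 1. copy X -> Y

srcsum=$(md5sum srcdir/file.txt | cut -d' ' -f1)    # 2. checksum both sides
dstsum=$(md5sum dstdir/file.txt | cut -d' ' -f1)

if [ "$srcsum" = "$dstsum" ]; then
    rm srcdir/file.txt                              # 3. delete source only if verified
else
    echo "checksum mismatch - keeping source" >&2
fi
```

The key point is the ordering: the source file is only ever removed after the destination copy has been verified.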

rclone copy is exactly the same, minus that second delete step.

TLDR: rclone move is very data-safe. If you are not doing an unencrypted-to-crypted transfer you can even enable --checksum to force a checksum comparison of the data for full paranoia-mode. This isn't really required, as there are several layers of error-correction involved in the transfer (network layer + data layer), but feel free to be extra paranoid if you want :slight_smile:

(An unencrypted-to-crypted transfer cannot be hash-checked simply because identical files have different hashes in their unencrypted and encrypted forms.) I would not worry overly much about this because, as I said, there are several other error-correction mechanisms in play that will detect problems and ask for a re-transfer if necessary.
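
A tiny demonstration of why the hashes can't be compared directly. Here a trivial byte transform (ROT13 via tr) stands in for real encryption, but the point is the same: the remote stores different bytes, so its checksum differs from the plaintext's:

```shell
#!/bin/sh
# The "encrypted" copy has different bytes, hence a different MD5,
# so a plain->crypt transfer has nothing to compare hash-to-hash.
printf 'hello world\n' > plain.txt
tr 'A-Za-z' 'N-ZA-Mn-za-m' < plain.txt > cipher.bin   # ROT13 as a stand-in cipher

md5sum plain.txt cipher.bin   # two different hashes for the "same" file
```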

@thestigma, my friend, perhaps i misunderstood you

as the resident paranoid here, i wanted to make clear that each file that is moved is checksummed, without the need for the --checksum flag.
i ran a test, the same rclone move with and without --checksum, and there was no difference in the log files.

each file is:

  1. copied
  2. checksummed
  3. if checksums match, delete source file

the logs make this clear.

2020/02/06 18:30:05 DEBUG : rclone: Version "v1.50.2" starting with parameters ["rclone.exe" "move" "C:\data\\" "wasabieast2:\\" "--log-level=DEBUG" "--log-file=C:\data\rclone\scripts\rr\other\test.txt"]
2020/02/06 18:30:05 INFO : S3 bucket Waiting for checks to finish
2020/02/06 18:30:05 INFO : S3 bucket Waiting for transfers to finish
2020/02/06 18:30:06 DEBUG : MD5 = 628631f07321b22d8c176c200c855e1b OK
2020/02/06 18:30:06 INFO : Copied (new)
2020/02/06 18:30:06 INFO : Deleted
2020/02/06 18:30:06 INFO :
Transferred: 3 / 3 Bytes, 100%, 3 Bytes/s, ETA 0s
Errors: 0
Checks: 2 / 2, 100%
Transferred: 1 / 1, 100%

2020/02/06 18:32:01 DEBUG : rclone: Version "v1.50.2" starting with parameters ["rclone.exe" "move" "C:\data\\" "wasabieast2:\\" "--log-level=DEBUG" "--checksum" "--log-file=C:\data\rclone\scripts\rr\other\test.txt"]
2020/02/06 18:32:02 INFO : S3 bucket Waiting for checks to finish
2020/02/06 18:32:02 INFO : S3 bucket Waiting for transfers to finish
2020/02/06 18:32:02 DEBUG : MD5 = 628631f07321b22d8c176c200c855e1b OK
2020/02/06 18:32:02 INFO : Copied (new)
2020/02/06 18:32:02 INFO : Deleted
2020/02/06 18:32:02 INFO :
Transferred: 3 / 3 Bytes, 100%, 10 Bytes/s, ETA 0s
Errors: 0
Checks: 2 / 2, 100%
Transferred: 1 / 1, 100%

I believe checksums are automatically used whenever they are available (unencrypted->unencrypted, or crypted->crypted with the same key/salt).

So yes, I think you are correct (the fact that MD5 is checked in the debug log indicates this).
But if you want to be 110% sure, I think you need to ask @ncw :slight_smile:

If this (hash-check) is not possible, however, there are still several layers of data-security on the network layer and data layer that should in 99.99% of cases ensure an error-free transfer regardless.

well, in the end, i have never used rclone move and have no plans to use it.
the word move scares me

if i am forced to move files, i do the following.

  1. make sure remote provider has file versioning enabled.
  2. rclone sync with --dry-run and check logs
  3. rclone sync with --backup-dir and check logs
  4. rclone check and check logs
  5. manually delete source files.
  6. live long and prosper
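
here is a local sketch of the heart of that workflow, the --backup-dir safety net: instead of overwriting (or deleting) files on the destination, the old copies get moved into a dated backup folder first. this is an emulation with plain shell, not rclone; the remote name "gdrive:" in the comments is hypothetical:

```shell
#!/bin/sh
# Local emulation of the --backup-dir idea.
# Rough real-world equivalents of steps 2-4 above:
#   rclone sync /data gdrive:data --dry-run -v
#   rclone sync /data gdrive:data --backup-dir gdrive:archive/2020-02-06 -v
#   rclone check /data gdrive:data -v
mkdir -p syncsrc syncdst backup/2020-02-06
printf 'new version\n' > syncsrc/report.txt
printf 'old version\n' > syncdst/report.txt

# sync step: preserve the destination copy before replacing it
if [ -f syncdst/report.txt ]; then
    mv syncdst/report.txt backup/2020-02-06/report.txt
fi
cp syncsrc/report.txt syncdst/report.txt
```

so even if the sync does something you did not intend, the old destination files are still sitting in the backup folder.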

but you asked a good question about check-summing in this situation,
about crypted->cryptedwithsamekey/salt

But you are a little bit cray-cray, so not everyone needs that kind of 5x verification :smiley:
Still I do consider your paranoia the gold standard - ie. mathematically verified :wink:

rclone move is 99.99% safe though. Rclone move with checksums (implicit by default, or explicit via --checksum) should be 100% safe. At that point, double or triple verification should not make any difference. There should be no conceivable way in this reality of a single file not hash-checking the same as the original.

You can probably calculate the odds of that happening despite everything, but I think it would be somewhere in the order of the likelihood of you getting hit by an asteroid every minute for a week straight :stuck_out_tongue:

back in 1978, i suffered a terrible experience i never recovered from.
i was playing the best game of all time, zork. after many dozens of attempts and many days of getting killed in the troll room, i finally killed the troll!
i was so happy then,
so i went to save my game status, onto an audio cassette tape.
that audio cassette, fragile as it was, got damaged.
i swore then and there, i would never lose data again.
so you can understand

I understand the paranoia, and appreciate it to some extent.
That said - we have come a long way since tape-storage :slight_smile:
These days it's more about the mathematical odds of a correct transfer than the physical failure of storage media (as Google and others cover that on their end via several layers of redundancy). Still, it never hurts to have your data duplicated in 2 locations, as is good practice for all backup strategies. That's how I go about it anyway. Any data that is truly irreplaceable needs 2 separate backup locations. Usually the truly critical data tends to not be all that much...

(you really love Zork don't you? I played that way back in the day but I can't say I recall much of it these days. I am more in the Zelda1/Baldur's Gate generation I guess :slight_smile: )

Dude, you're not resident, you're the President. :stuck_out_tongue:


Just an FYI to clarify what I was trying to accomplish, and why I am using move ...

(1) video.mp4 is created and placed in local folder /my-backups

(2) video.mp4 is copied into local folder /my-uploads

(3) rclone move takes local file /my-uploads/video.mp4 and moves it to [MySFTP]:"waiting-for-uploads" on a remote server.

(4) Remote server then takes /waiting-for-uploads/video.mp4 and moves it somewhere else on the server and does a bunch of other stuff with it.

So the reason I need to use MOVE is that when the file is placed on the remote server, it is then moved somewhere else. If I left a copy in local /my-uploads, then every time rclone copy ran it would want to re-upload it to the remote server.

(1) local file stays in backup folder
(2) COPY local file to another local folder
(3) use RCLONE MOVE to move it to a remote server
(4) move file on the remote server to another folder on the remote server
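
A local sketch of that staging pattern (plain shell standing in for the real thing; the folder "remote-inbox" is a made-up stand-in for the SFTP remote):

```shell
#!/bin/sh
# The permanent copy stays in my-backups, a second copy is staged in
# my-uploads, and the staged copy is moved away. rclone move would
# upload it and delete the local staged file, so nothing is left
# behind to be re-uploaded on the next run.
mkdir -p my-backups my-uploads remote-inbox
printf 'fake video data\n' > my-backups/video.mp4

cp my-backups/video.mp4 my-uploads/video.mp4        # stage for upload
# stand-in for: rclone move ./my-uploads MySFTP:waiting-for-uploads
mv my-uploads/video.mp4 remote-inbox/video.mp4
```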

I have it all working great now... thanks

Rclone move works like this....

  • check srcfile and dstfile are different
  • if they are different, or dstfile does not exist:
    • transfer srcfile to destination
    • after the transfer is complete, check size/checksums
    • if that is all OK, then delete the srcfile

So checksums are checked at all times.
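
That decision flow, including the skip-if-identical branch, sketched as a local shell emulation (not rclone itself; file and directory names are invented):

```shell
#!/bin/sh
# Skip the transfer when source and destination already match
# (checksum comparison); either way, only delete the source once the
# destination copy is verified.
mkdir -p mvsrc mvdst
printf 'same bytes\n' > mvsrc/a.txt
printf 'same bytes\n' > mvdst/a.txt    # dst already has an identical copy

filesum() { md5sum "$1" | cut -d' ' -f1; }

if [ -f mvdst/a.txt ] && [ "$(filesum mvsrc/a.txt)" = "$(filesum mvdst/a.txt)" ]; then
    echo "identical - skipping transfer"
else
    cp mvsrc/a.txt mvdst/a.txt
fi

# the verification gates the source delete
[ "$(filesum mvsrc/a.txt)" = "$(filesum mvdst/a.txt)" ] && rm mvsrc/a.txt
```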

PS I'm paranoid about data loss too! Too many war stories to go into here, but just know that just because you are paranoid about your data, it doesn't mean that the universe isn't trying to corrupt it!


Hi all. Just a very minor heads-up here.

I copied 11T to GDrive (weeks) with rclone v1.50.2 encrypted. Did a "check --download" on 3T to make sure that things had worked, and ... SURE ENOUGH, they hadn't. (whAAAT?!?!)

A few files (6?) didn't match, and had odd errors -- a truncated file and missing blocks; one was a 0-byte file. Unhappy, but I recopied those files up again. Did another check and: about 20 OTHER files now didn't match up. Doing a random manual check of files shows absolutely no differences that I can find; it fails ONLY using check. So it's not that the files aren't actually there, or that it's a bug -- it seems like multithreaded file reading (maybe?) has problems with an old kernel. Maybe.

I'm chalking this up to using a current rclone under an old version of Debian (Jessie, 8.11) that I can't upgrade. Compared a few others manually, and yes, they all match as well. Doesn't seem to be an actual problem, but it sure WAS a surprise.

@asdffdsa - Zorkers do it under the rug. Did you ever meet Floyd the Robot in a different Infocom game? Ever try the Original Adventure? (Drop Bird. Blast!)


@C_B - Would suggest you start a new post if you have an issue, use the question template and we can help out.

The versions in the repo are years and years old usually so I would not use them.

finally, in this forum, i am not alone, a fellow zorker!

sure, floyd is a long-time friend and i have played colossal cave.

I don't think rclone check --download should be using the multithread download code.

Do the checks fail consistently with rclone check --download? It looks from what you said like they don't, which makes me suspect your machine - maybe you have some bad RAM or something similar.
