Rclone copy from cloud to cloud

Hello wonderful people.

I'm having a small problem with rclone. Last time I wanted to back up some folders from my Google Drive account to my second one, rclone deleted my files from the first folder. I was using a command like:

rclone sync gdrive1:movies gdrive2:movies

rclone then sometimes deleted my files from gdrive1:movies, but not every time! I have to say that I moved a lot of files in and out of the "movies" folder once the transfer was completed! Could that be the cause?

I now want to back up my whole gdrive1 to gdrive2 with the command:

rclone copy gdrive1: gdrive2:

Is that the right command? I'm afraid of losing all my files in gdrive1 :blush:

I hope this makes sense and someone can help me out.

Best regards

It is; copy will never delete any of the files.
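Roughly, with your remotes, this is the difference (--dry-run just previews what would happen):

# copy only adds or updates files on the destination; it never deletes
rclone copy gdrive1:movies gdrive2:movies

# sync makes the destination identical to the source, deleting files that
# exist only on the destination; --dry-run shows this without acting
rclone sync --dry-run gdrive1:movies gdrive2:movies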


In theory, sync should never touch anything in the source, but it will delete from the destination.

If it is deleting from the first account, you might want to file a bug report.

That is 100% correct - if you see otherwise, then please file a bug report!

I'm doing it at the moment (re-encrypting encfs to crypt), but I started with rclone copy, and once the whole library is done I will do the rclone sync and check.

The reason I started with copy first was that I got a bit scared: what if my encfs ACD drive gets disconnected and rclone sync just deletes everything in the destination because the source looks empty? (Is there a way to make rclone ask for confirmation?)
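For now I'm approximating a confirmation step by hand; a sketch of the workflow, with placeholder remote names for my encfs/crypt setup:

# copy first: non-destructive, only adds/updates files on the destination
rclone copy acd:encfs gdrive2:crypt
# compare source and destination once the copy finishes
rclone check acd:encfs gdrive2:crypt
# preview what sync would delete before running it for real
rclone sync --dry-run acd:encfs gdrive2:crypt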

Current progress

Transferred: 20673.635 GBytes (61.846 MBytes/s)
Errors: 307
Checks: 2443
Transferred: 12753
Elapsed time: 95h5m1s

I have had some weird internet drops since I started doing it, e.g. the server loses connection for 15 to 30 seconds (hence quite a few errors due to timeouts). I have started measuring it.
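If it helps anyone, rclone has retry flags that can paper over short drops like these; the values below are only examples, and the remote names are placeholders for my setup:

# retry the whole run up to 5 times, individual operations up to 20 times,
# and allow a longer idle timeout for slow spells (example values only)
rclone copy acd:encfs gdrive2:crypt --retries 5 --low-level-retries 20 --timeout 5m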

Also, in 95 hours I got this error twice:

2017/01/08 02:09:40 amazon drive root 'crypt': 401 error received - invalidating token

Thanks for the input, guys - I took the chance and did it with the copy command, and it works like a charm! Last time I must have used sync, and that deleted my files sometimes! Maybe the reason was that the server lost connection from time to time!

The copy command works like a charm :slight_smile:

This isn't supposed to happen! If there is an error while listing, rclone should stop syncing. I'd be interested to try to reproduce this.

That's the thing: if the encfs mount is disconnected, it would be listed as empty, not as an error. To rclone it would look like all the source files were deleted. (E.g. here it would be nice to have a switch that prevents deleting from the destination if the source is empty.)
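Until such a switch exists, a shell guard can approximate it; a minimal sketch, assuming placeholder remote names and that the source normally contains top-level directories:

#!/bin/sh
# Abort before syncing when the source lists as empty (e.g. the encfs
# mount dropped), instead of mirroring the emptiness to the destination.
if [ -z "$(rclone lsd acd:encfs)" ]; then
    echo "source looks empty - refusing to sync" >&2
    exit 1
fi
rclone sync acd:encfs gdrive2:crypt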


Ah ha! I see.

So an option like --abort-if-empty or something like that?

I could also implement this option from rsync, which would probably be useful:

       --max-delete=NUM
              This tells rsync not to delete more than NUM files or
              directories. If that limit is exceeded, all further deletions
              are skipped through the end of the transfer. At the end, rsync
              outputs a warning (including a count of the skipped deletions)
              and exits with an error code of 25 (unless some more important
              error condition also occurred).

              Beginning with version 3.0.0, you may specify --max-delete=0 to
              be warned about any extraneous files in the destination without
              removing any of them. Older clients interpreted this as
              "unlimited", so if you don't know what version the client is,
              you can use the less obvious --max-delete=-1 as a
              backward-compatible way to specify that no deletions be allowed
              (though really old versions didn't warn when the limit was
              exceeded).
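For reference, a typical rsync invocation using it (paths are placeholders):

# refuse to remove more than 10 extraneous files from the destination;
# with rsync >= 3.0.0, --max-delete=0 warns without deleting anything
rsync -av --delete --max-delete=10 /src/ user@host:/dest/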

Yeah, abort-if-empty would be great; I'm not so sure about max-delete in this case.
Maybe --abort-if NUM, e.g. 0 to abort if the source is empty, 100 to abort if the source does not contain at least 100 files, etc.

For such a potentially destructive action, maybe it’d be better to make it opt-in rather than opt-out (e.g. --continue-if-empty) and abort by default in those situations. Plex does something similar when it comes to library scans and emptying the trash - if the drive isn’t mounted then it ends the task early (otherwise people would lose all their media).

Amazon Drive is performing quite well today:

2017/01/11 17:50:03
Transferred: 350.882 GBytes (75.783 MBytes/s)
Errors: 0
Checks: 2805
Transferred: 313
Elapsed time: 1h19m1.2s

Update: and climbing

Transferred: 640.638 GBytes (80.977 MBytes/s)

Can you make an issue about that, please - the --abort-if-empty flag and --max-delete - then I'll get round to them eventually!

I have some sympathy for that, but it would diverge from rsync usage, which I'm trying not to do.

It's done.

Ahh gotcha! Thanks for the explanation.