[Solved] Server Side Sync (GDrive) suggestions

#1

Hey there,

It’s been a while since I last posted :slight_smile: I’m in the process of refreshing the local infrastructure, and one of the things I did was revisit the scripts I run for offsite backups.

Right to the question:
One of the scripts performs a sync from Google Drive to (the same) Google Drive, using two different remotes with different APIs.

Would it make sense to add an extra step to speed things up a little? I was thinking:

  1. Run rclone copy myremote:/folder1 myremote:/folder2
  2. Run rclone sync myremote:/folder1 myremote:/folder2 --backup-dir myremote:/folder3

This would normally run daily and upload roughly 10 to 50 GB. I don’t care about saving bandwidth; I care about reducing the operations/uploads/downloads Google Drive sees, to avoid getting banned for the day.

As of right now, with just sync, I’m downloading these files from folder1 and uploading them back to folder2. My guess is that if I add a copy first, that copy will be performed server side instead, resulting in a quicker job. Or am I wrong?

Thanks

#2

I don’t think running a copy then a sync will buy you anything. The sync will do everything the copy does, so I think you just want step 2.

Using the same remote as source and destination will mean rclone does server side copies which should be quicker, but they have their own API limits (GB transferred per day).
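One caveat, since the original post mentions two different remotes pointing at the same Drive: by default rclone only attempts server-side copies when source and destination use the same remote config, and there’s a flag to opt in across configs. A sketch, with placeholder remote names (`remoteA`/`remoteB` are assumptions, and older rclone versions used the backend-specific `--drive-server-side-across-configs` instead, so check your version):

```shell
#!/bin/sh
# "remoteA" and "remoteB" are hypothetical configs pointing at the
# same Google Drive. --server-side-across-configs tells rclone it may
# attempt server-side copies between them anyway.
# Prints the command by default; set RCLONE=rclone to actually run it.
RCLONE="${RCLONE:-echo rclone}"

$RCLONE sync remoteA:folder1 remoteB:folder2 \
  --backup-dir remoteB:folder3 \
  --server-side-across-configs
```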

#3

I don’t think running a copy then a sync will buy you anything. The sync will do everything the copy does so I think you just want step 2

I was thinking more that the copy would copy everything from folder1 to folder2 on the server side, and then the sync would just delete whatever is no longer in folder1. Shouldn’t that save me some GB transferred per day?

So instead of:

  1. Copy from local to myremote:/folder1 [this wasn’t mentioned in my original post as it happens with a different script]
  2. Sync myremote:/folder1 with myremote:/folder2 [i.e., download from folder1 and upload again to folder2]

I would do:

  1. Copy from local to myremote:/folder1 [this wasn’t mentioned in my original post as it happens with a different script]
  2. Copy from myremote:/folder1 to myremote:/folder2 [server side copy, no extra downloads/uploads]
  3. Sync myremote:/folder1 with myremote:/folder2 [in theory this would only delete data from folder2 that is no longer in folder1?]

Am I wrong?

#4

Server side copies have their own quotas and you are creating an extra step that really doesn’t add value.

If the goal is to sync, just sync.

#5

The sync will do server side copies too, so it will cost you an extra directory traversal rather than saving you anything I think.

#6

The sync will do server side copies too

Woooo… :slight_smile: This is what I didn’t know. I was reading the documentation and thought sync wasn’t going to do a server-side copy, so I stuck with copy instead.
I just tested this with 20GB worth of data and it worked as you said.
Thanks man!
I’m going to rewrite my script a bit to use a single remote and a single sync command.
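For reference, the rewritten job could collapse to something like this (remote and folder names are the placeholders from the thread, not the poster’s actual script):

```shell
#!/bin/sh
# Sketch of the simplified daily job; "myremote" and folder1/2/3 are
# placeholder names from the thread.
# Prints the command by default; set RCLONE=rclone to actually run it.
RCLONE="${RCLONE:-echo rclone}"

SRC="myremote:folder1"
DST="myremote:folder2"
BAK="myremote:folder3"

# A single sync: same remote on both sides, so new/changed files are
# copied server side, and files deleted from folder1 are moved into
# the backup dir instead of being deleted from folder2.
$RCLONE sync "$SRC" "$DST" --backup-dir "$BAK"
```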

#7

I wonder if that could be clearer in the docs. As the developer, I know that sync, copy and move are basically the same command with a few different flags, but I think that isn’t obvious from the docs…

#8

Would it make sense to actually mention in the synopsis of sync/copy/move that the operation will happen server side if the same remote is used (and the remote supports it)?
When I was searching (can’t find it now), I saw your reply to an old thread on the forum where you mentioned copy/move will happen server side.
Either way, glad to see it does :smiley: