Server Side Transfers Not Working

What is the problem you are having with rclone?
Server-side transfers are not working. All transfers are being downloaded and then re-uploaded.

What is your rclone version (output from rclone version)
v1.50.2

Which OS you are using and how many bits (eg Windows 7, 64 bit)
Ubuntu 18.04.3 LTS / 64 Bit

Which cloud storage system are you using? (eg Google Drive)
Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync -v --transfers 16 gcrypt: gcrypt2: --drive-server-side-across-configs -P

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
https://pastebin.com/urE9DjH5

I appreciate any help as I'm out of ideas.

hello and welcome to the forum,

you cannot have server-side transfers when using encrypted remotes.
rclone has to download the file from gcrypt:, decrypt it, re-encrypt it and upload it to gcrypt2:

however, if both encrypted remotes use the same passwords, you can copy the encrypted files using server-side transfers between the underlying remotes that the crypted remotes use.

@thestigma can explain it all.


Hi, thanks for the response.

They are using the same password and salt. What would the proper command be if I am copying from gcrypt:/gdrive/encrypt to gcrypt2:/backup?

wait a minute, here comes @thestigma

Yes, basically what he says...

Let's assume we have these remotes to work with:

  • Gdrive1
  • Gcrypt1 (Gdrive1:/Gcrypt)
  • Gdrive2
  • Gcrypt2 (Gdrive2:/Gcrypt)

If the two Gcrypts use different keys there is no way of server-side transferring it, because the server cannot change the data for you (decrypting and then re-encrypting it with the second key). If this is the case you MUST pipe it via your local PC - or you could use a GCP virtual machine in the cloud to do it for you (and that can be set up to be free).

But... there is a smarter way to do it also.
Let's assume that you decide it is OK to use the same crypt key on both crypt remotes.
Then you can do it like this:
rclone sync Gdrive1:/Gcrypt Gdrive2:/Gcrypt --drive-server-side-across-configs

This just asks the server to copy the (encrypted) files from one place to another - and since they use the same key, they are readable at both ends. This can be server-side transferred no problem. As soon as you reference your Gcrypt remotes, however, you are asking for a decode to happen, and that will happen locally - so don't do that. Just reference the underlying Gdrive remotes instead and it becomes a non-issue :slight_smile:

The latter method is what I use myself when I want to move or back up something to a different Gdrive, because it obviously makes the process vastly faster and simpler than piping it via local.


No, see the example above. Then ask if anything is unclear.

Protip1:
You can alternatively set this in your Gdrive1 and Gdrive2 remote configurations (in rclone.conf):
server_side_across_configs = true
instead of using the flag. Then you don't have to include it in your commands - rclone will just use server-side transfers whenever it can.

Protip2:
The second thing people tend to have issues with is permissions.
If you have these accounts configured under 2 different users that do not have cross-access to each other, then you will run into permission errors.
It is important that at least the user used for the destination Gdrive also has access to read the source Gdrive. On teamdrives this is very simply solved by adding the user(s) to both accounts. On non-teamdrives you have to use the sharing function, which is a little more cumbersome - but works fine once set up.

Note that while "content manager" access should be enough to freely read/write files, during a server-side move I have observed that the destination's user may need full "manager" access. Since a move implies deletion, there seems to be an issue when trying to move (or just delete) server-side on the other account - especially if that data was uploaded from another account. Just be aware of this in case you see errors.

And as always, feel free to ask for clarifications :slight_smile:


Looks like that fixed the server-to-server transfer. I can see the encrypted files when I mount gdrive2:, but I'm unable to see the files at all when mounting gcrypt2:. How would I make sure I am using the crypt wrapper when mounting?

The command I used was:

rclone sync -v --transfers 16 gdrive1:/gdrive/encrypt gdrive2:/backup --drive-server-side-across-configs -P

And thank you for the protips. I might be using Protip1 incorrectly - I placed the line under "root_folder_id". Transfers went back to local when I removed the flag from the command.

I believe I have Protip2 properly configured as it isn't giving me a 404 error.

Thanks for the help!

There are a couple of things that might be the problem - all of them should be fairly trivial to fix.

  • Make absolutely sure both crypts are set up to use the same crypt key and salt. If they do not, and you effectively try to decrypt using the wrong keys or settings, the result will be garbage that won't even display as files (the rclone log will be full of errors indicating this if so). The same applies to the crypt options (whether or not to encrypt file and folder names) - these settings must be identical as well. TLDR: you can really just copy-paste the crypt configuration, give it a new name and alter the "remote = ..." line - then you are definitely sure it's an identical config (see the sketch after this list).

  • Make sure your Gcrypt2: is actually configured to point to Gdrive2:/backup because well - that's where the files are. Obvious I know, but it can be easy to make such simple mistakes.
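
To illustrate the copy-paste approach, here is a minimal sketch of what a matching pair could look like in rclone.conf (the names, paths and obscured passwords are placeholders, not your actual values):

[Gcrypt1]
type = crypt
remote = Gdrive1:/gdrive/encrypt
filename_encryption = standard
directory_name_encryption = true
password = XXXXXXXXXXX
password2 = XXXXXXXXXXX

[Gcrypt2]
type = crypt
remote = Gdrive2:/backup
filename_encryption = standard
directory_name_encryption = true
password = XXXXXXXXXXX
password2 = XXXXXXXXXXX

Everything except the remote's name and its "remote =" line is identical between the two blocks.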

To use these files you should mount Gcrypt2:
This will use the encrypt/decrypt layer.
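For example, a minimal sketch (the mount point path is just an illustration - create the directory first):

rclone mount Gcrypt2: /mnt/gcrypt2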

That pretty much covers everything I think could be the problem. If you see the files (using Gdrive2:) then the transfer worked, and the rest is just a matter of double-checking the Gcrypt2 config.

You can place the line anywhere in the Gdrive1 and Gdrive2 blocks. See this example:

[TD1]
type = drive
scope = drive
service_account_file = XXXXXXXXXXX
team_drive = XXXXXXXXXXXXXXXXX
upload_cutoff = 8M
chunk_size = 128M
server_side_across_configs = true
disable_http2 = true

(don't worry about any extra settings here that you don't use - they are not relevant to this issue, I just copied something I had in my config).


I guess my password and salt weren't the same after all. I set them up during the initial addition to the config, but I guess that didn't work as planned. Thank you for the help!

Not sure why adding the flag to the config didn't work the first time but I will give it a try again.

Thank you for your help!

Perhaps you only added it to the source remote?
In a server-to-server transfer the destination remote is basically the "active" one doing the work, so it will need that line too. I would recommend just putting it into both, and then you should have no issues.
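
In other words, a sketch like this (other settings omitted for brevity; the remote names are placeholders):

[Gdrive1]
type = drive
server_side_across_configs = true

[Gdrive2]
type = drive
server_side_across_configs = true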

Happy to help :slight_smile:


Hey Guys,
do you have any suggestions for an rclone sync command using server-side copy between 2 GDrive accounts on Google Cloud Compute, so as not to get banned because of the Google quota? I need to back up 40+ TB of data to another GDrive account.
The following occurred after 12 hours:
"Failed to copy: failed to open source object: open file failed: googleapi: Error 403: The download quota for this file has been exceeded."
Transferred: 747.144G / 747.144 GBytes, 100%, 15.293 MBytes/s, ETA 0s
Errors: 42440 (retrying may help)
Checks: 3197 / 3197, 100%
Transferred: 1558 / 1558, 100%
Elapsed time: 13h53m48.6s
I think many of the errors came after the 24h ban...

I used the following command:
rclone sync series_gsuite1:/ series_gsuite2:/
Both series_gsuite1 & series_gsuite2 are crypt remotes with the same password.

Did I miss something?

Could the following extra parameters help?
rclone sync series_gsuite1:/ series_gsuite2:/ --drive-stop-on-upload-limit=true --bwlimit 8.5M

Thanks in advance.

Firstly, you can't get "banned" for using up the upload quota - that is an incorrect term.
If this happens you are simply stopped from uploading more until the timer resets (usually sometime near midnight, but that seems to vary a bit depending on where you are and which server you are talking to).
All other functions remain operable, and it's not like you get a black mark on your account from Google when this happens. It is simply the limit of the terms of service, and there is an automatic limiter that makes sure it is not exceeded.

You may find these flags useful:
--bwlimit 8.5M
This will choke the upload to a speed that can run 24/7 and (barely) never hit 750GB, assuming you don't also upload additional data to the same account from elsewhere in the same period.
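(The arithmetic: 8.5 MB/s × 86,400 seconds/day ≈ 734 GB/day, safely under the 750 GB/day upload quota.)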
--drive-stop-on-upload-limit
This will force an rclone transfer to stop as soon as it sees the transfer-quota error.
This can be very useful if you prefer to schedule a script to run daily for a shorter time and get the quota transferred fast - for example if you don't have a machine that normally runs 24/7.
--max-transfer 740G
This will stop the transfer after 740G (counted from when you started the transfer). It does not track what you transferred earlier, so I find it less useful - if you ever have to stop and restart to change a setting, you probably didn't keep track of the total yourself. I find it is largely superseded (on Gdrive at least) by the previous flag, which is a newer addition.

As I explained above (post #5), you cannot transfer from/to a crypted drive if you want to server-side transfer. As soon as you access a crypt remote you will need to decode - and decoding cannot happen on Google's servers (they don't even have your key, after all) - so the traffic then needs to go via local.
If you do this you end up with:
Google -> local decrypt -> local re-encrypt -> Google

But it is still possible to transfer the crypted files server-side, as I explained above.
All you need to do (assuming all crypt settings are the same on both source and destination) is transfer the raw files - no decrypting or re-encrypting. They will remain as they were (encrypted), which is fine. This also keeps the hashes identical, which is a great benefit for syncing (decrypting and re-encrypting the same file actually results in a different hash, because the nonce is unique each time due to the security requirements of a good crypt).
To do this you simply need to bypass the crypt layer on both sides and transfer the relevant folders directly between the Gdrive remotes. Because this results in a straight bit-for-bit copy, it can be done server-side without problem (see the example in post #5).
Then you effectively get:
Google -> Google
All rclone is doing then is providing the instructions for what files to transfer where - and the authorization to access the files. The server can handle the rest within Google's network.
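
Applied to your setup, that might look something like the sketch below - here gdrive_gsuite1: and gdrive_gsuite2: are hypothetical names for the underlying (non-crypt) drive remotes your crypts point at, and the folder path is a placeholder:

rclone sync gdrive_gsuite1:/encrypted-folder gdrive_gsuite2:/encrypted-folder --drive-server-side-across-configs --drive-stop-on-upload-limit -v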

If you are in doubt about whether you are server-side transferring or not, check:

  • The bandwidth graph in task manager (or equivalent). It should be fairly obvious from the bandwidth usage if you are downloading/re-uploading a lot of data (you should not be if it's server-side).
  • Enable verbose output with --verbose (or simply -v for short) and look at the end of each transfer line. It should say something about "server-side" for each and every action if that is being used (see the sample line below).
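
For reference, a server-side copy typically shows up in the verbose log as something like this (exact wording may vary between versions; the filename is just an example):

INFO  : somefile.mkv: Copied (server side copy)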

Currently, as far as I am aware, --max-transfer does not work with server-side copying, as the copies are done server-side (the data never passes through rclone to be counted).
