Rclone copy two gdrives

hi,
I am using rclone to do a daily backup from one gdrive to another, and I wonder: does each file rclone copies count as one API call? With lots of files, could I reach the API limit?

Well, it's at least one API call if not more (the exact details can be very specific to exactly what you are doing). That said, it's incredibly unlikely you will run out of your daily API quota. You have so many total calls available that it's not even remotely possible to hit the max with a single user.

A single user can do something like 864,000 API calls in 24hrs, and that's due to the 1,000 calls per 100 seconds per-user limit. Even this isn't going to be your real limitation.
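
To put rough numbers on that (assuming the 1,000 calls per 100 seconds figure):

1,000 calls / 100 seconds = 10 calls/sec
10 calls/sec * 86,400 seconds/day = 864,000 calls/day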

The real limit (aside from your bandwidth) will probably be the roughly 2/sec file-operation limit; in other words, new files can't start to transfer more often than this (but they can continue transferring if already started). This means transferring 10,000 almost-empty text files takes some time even if the total size is trivial - so this is normal. You may want to consider archiving folders like this to reduce the total number of files you have to work through. It will drastically increase the effective speed of upload and download.
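
As a worked example (assuming the ~2 files/sec figure holds):

10,000 files / 2 files per second = 5,000 seconds, i.e. roughly 83 minutes of pure per-file overhead, regardless of total size.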

And finally of course, in terms of quota - you do have to be aware of the 750GB/day upload limit.

Do you already use server-side copying for your drive-to-drive needs? If not, you should look into it - especially if your personal bandwidth is limited.

I have a VPS to do the copying.
I'm syncing two different gdrives on it, but the speed seems too slow to be server-side copying. I get about 80MB/s, but those drives are on different accounts and I wonder if server-side copying is even possible.

Set this in your Gdrive remote config:
server_side_across_configs = true

or set this as a flag in your commandline:
--drive-server-side-across-configs=true

to enable server-side copying. It is not enabled by default because it is not guaranteed to work in all edge cases, but I've never had a problem with it.
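
For example, a minimal drive-to-drive copy with the flag on the command line (gdrive1 and gdrive2 are placeholders for your own remote names):

rclone copy gdrive1: gdrive2: --drive-server-side-across-configs -P -v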

Be aware that server-side copying seems to have a stricter quota than the regular 750GB/day. Some say it's 100GB/day, but my experience suggests it's more like 200-300GB. It's not documented by Google and more research is needed for a better estimate - but just be aware that this is a thing, so if your transfer suddenly seems to stall out, you have probably hit that limit.
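
If you would rather have rclone stop cleanly at a self-imposed daily cap than run into that undocumented quota, the --max-transfer flag should do it (the 300G value here is just an example):

rclone copy gdrive1: gdrive2: --drive-server-side-across-configs --max-transfer 300G -P -v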

Server-side transfer between Gdrives can go anywhere from several hundred MB/sec to a few GB/sec (mostly determined by how large the files are; a few large files go faster than many small ones).

Do I need to include this in both drives?
OK, I will set a flag with a daily limit of 300GB; that should be enough.

I'm not sure if you need to set it in both. I think only the source drive needs it.
When enabled, you will get a message at the end of each file indicating it was copied server-side - assuming you use -v (verbose) output. It will also be pretty obvious from the speed (if you use --progress to monitor).

There's no special server-side command; it will just happen (when possible) when using regular copy, move, sync etc.
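
For a daily backup like yours, a minimal sketch could be a regular sync in cron (the schedule, remote names, cap and log path are all just assumptions):

0 3 * * * rclone sync gdrive1: gdrive2: --drive-server-side-across-configs --max-transfer 300G -v --log-file /home/user/rclone.log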

OK, I got an error after adding server_side_across_configs = true to my rclone config:
a 404 error code

What does 'rclone version' show?

rclone v1.48.0
- os/arch: linux/amd64
- go version: go1.12.6

Can you grab the latest and retry? If it's still not quite right, can you share the full log with -vv?
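
If it helps, the official install script is one easy way to get the latest version onto a linux VPS, and --log-file will capture the -vv output to a file (the log path is just an example):

curl https://rclone.org/install.sh | sudo bash
rclone copy gdrive1: gdrive2: -vv --log-file rclone.log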

[ID1]
type = drive
client_id = xxx
client_secret = xxx
scope = drive
token = xxx
server_side_across_configs = true

[ID2]
type = drive
client_id = xxx
client_secret = xxx
scope = drive
token = xxx
server_side_across_configs = true

Should the config look like that?

That means that the destination user doesn't have enough permissions to read files in the source.

So is it possible to change that, as those are my own gdrives?
But it is odd, as I can do the copy with the command

rclone copy gdrive1: gdrive2: -P -v

The documentation states server-side is disabled by default because it's not guaranteed to work between all sorts of different setups. I've never had a problem, but I've only used it between teamdrives, so my experience with this feature is somewhat limited.

NCW is probably one of the few who know the details about how and why certain combinations have problems. I don't know if your 404 indicates your two setups don't work for server-side copy. Hopefully someone else can chime in with more info about that.

The only permission settings I can think of are the general user-level permissions you have on your Gdrives (admin, editor, read-only etc.) and what sort of scope you used when you set up your OAuth or service account - but I don't know if these permissions are really relevant to this specific problem, as long as you generally have the normal read and write permissions to save new files (which I assume you do already).

That is weird, as now I can't even copy using the normal method

rclone copy gdrive1: gdrive2: -P -v

but while using

rclone copy gdrive2: gdrive1: -P -v

where gdrive2 has no client ID of its own, I can do a normal copy

That probably means you should take a look at your OAuth setup. Check that the scopes are right, try to have both on the same account, and if all else fails, perhaps try to use the same OAuth credentials on both remotes. See if that makes a difference.

I am basically making educated guesses here, but I'd fiddle around with that a bit and see what you can get working, because it definitely seems like a permission issue.
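
If you want to poke at the OAuth side without redoing the whole remote, these may help (gdrive1 is a placeholder for your remote name, and config reconnect needs a reasonably recent rclone): config show prints your decrypted config so you can compare the two remotes, and config reconnect re-runs the OAuth token flow for a remote.

rclone config show
rclone config reconnect gdrive1: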

Will do; if not, I will just run a normal rclone copy on some cheap VPS.

Google Compute Engine tends to be a great option, especially for google-based drives. The cost of basic hardware is pretty trivial, and network traffic inside google's network is free. You even get a bunch of free credits to start with, which effectively means free operation of a simple linux box for quite a while before you pay anything.

The 404 means the user logged into the destination drive doesn't have permission to read the file from the source drive. I think it gives a 404 error for security reasons, so it doesn't give anything away about which files exist or not.

The 404 error means that it isn't going to work.

If I could figure out a way of reliably determining that it does work, then I would enable the flag automatically.

