Google Client ID and Secret - Reuse

Hi,

I'm trying to move a very large amount of data -- ~12TB -- in a relatively short amount of time. Because Google imposes a 750GB-per-day upload limit per user, I have enlisted multiple friends and multiple machines to parallelize the data transfers.

I have 9 users, 3 source machines, and 7 destinations (Team Drives). Is it OK to reuse the same OAuth client ID and secret pair across all of the remotes I'll have to configure on each machine? Or do I need to generate a new pair for each user? Or is there some other approach?

I have two Red Hat Linux 6 machines and one Mac running High Sierra. Rclone version is v1.48.0.

Thanks!

To the best of my knowledge, the limits work like this:

The upload limit is tied to the account, so if you have 7 drives on different accounts you should have 750GB/day x 7 to work with if you parallelize.

The API limit is mostly tied to the specific OAuth client (there is a project-wide total limit too, but it is much higher and rarely the limiting factor). If you use one OAuth client across many different upload points it will work fine, but all of them will have to share the 1,000 API calls per 100 seconds quota. If each had its own OAuth client, each would get that same quota to itself.
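If you do end up sharing a single client across machines, one way to stay under that shared quota (a rough sketch -- the remote name, paths, and flag values below are just illustrative) is to cap how many API transactions each rclone instance makes per second:

```
# Copy one NAS share to a Team Drive remote while throttling API usage.
# --tpslimit caps API transactions per second for this rclone process;
# with e.g. 3 machines sharing one client, ~3 tps each keeps the combined
# rate under the 1,000-calls-per-100-seconds (10/s) quota.
rclone copy /mnt/nas/share1 teamdrive1:backup \
  --tpslimit 3 \
  --transfers 4 \
  --drive-chunk-size 64M \
  --progress
```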

I think you can create around 10 OAuth clients to start without needing to request anything extra from Google, so you have plenty to work with if you want to go that route. Whether you actually need to really depends on how hard you push this. The standard API quota is usually hard to max out for a single client, but shared across 7 destinations you will probably run into problems that slow the process down.

TL;DR: I would recommend generating more OAuth clients, and then distributing the load as evenly across the drives as you can, because the 750GB per day upload limit per drive will ultimately be the hard limiter on how fast you can get that data across. If my math is right, 12TB against 7 x 750GB/day (12,000GB / 5,250GB per day) is theoretically doable in about 2.3 days, assuming you have the bandwidth for it.
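To make that concrete, here is a rough sketch of what the rclone.conf entries could look like with a separate client ID/secret per remote (all names, IDs, secrets, and Team Drive IDs below are placeholders, and the tokens would come from running rclone config as usual):

```
# One remote per user/Team Drive, each with its own OAuth client,
# so each gets its own 1,000-calls-per-100-seconds API quota.
[tdrive-user1]
type = drive
scope = drive
client_id = 111111111111-aaaaaaaa.apps.googleusercontent.com
client_secret = placeholder-secret-1
team_drive = 0AAaaAAaaAAaaAAaaa
token = {"access_token":"...filled in by rclone config..."}

[tdrive-user2]
type = drive
scope = drive
client_id = 222222222222-bbbbbbbb.apps.googleusercontent.com
client_secret = placeholder-secret-2
team_drive = 0BBbbBBbbBBbbBBbbb
token = {"access_token":"...filled in by rclone config..."}
```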

EDIT: Are you moving this data from another cloud service? If so, a Google Compute Engine micro instance might also be an alternative, letting you offload the entire process to a virtual server with "unlimited" bandwidth.

Thank you, this is great info.

No, the source is several CIFS shares coming from a local NAS device. Slow, but serviceable.
