Syncing multiple users to respective counterparts on different domains/accounts

This is your issue. A Super Admin cannot directly access users' files. You can list and audit, but you won't be able to pull the actual file. That is why you're getting unauthorized_client.

Check out this page for a starting point/summary of service accounts and what you need to delegate--you don't need the part about making API calls obviously:
https://developers.google.com/identity/protocols/OAuth2ServiceAccount
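To make the service-account route concrete, here's a minimal sketch of what the rclone side looks like once delegation is in place. The remote name, key path, and admin address are all placeholders, not your actual values:

```ini
# rclone.conf sketch -- every name and path here is an example
[gdrive-sa]
type = drive
scope = drive
service_account_file = /path/to/sa-key.json
```

In the Admin console you'd add the service account's numeric client ID under domain-wide delegation with the matching scope URL (https://www.googleapis.com/auth/drive), and then something like `rclone lsf gdrive-sa: --drive-impersonate admin@domaina.com` should work.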

There is another alternative that I think you should consider: G Suite has a migration tool of its own. I strongly suggest you look at it as a backup to the rclone solution you're building, if this one doesn't work the way you need it to, as it will a) not use your bandwidth when copying and b) not be subject to the transfer caps. It involves a similar setup to what you're trying to do with rclone, but it runs completely server-side (for all I know they're even using some rclone code on their backend, I haven't read the license for G Suite lol). It's in your dashboard.

A lot of people use it for migrating non-G Suite to G Suite, but you can use it for G Suite to G Suite.

That does solve your "must be a new account" issue and, the main reason I think it's worth mentioning, it meets your initial requirement of using a CSV to specify source and destination, which I don't think anyone has offered a solution for yet. You can then use rclone check to make sure you got the results you wanted.
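If you stick with the rclone route instead, the CSV requirement can be sketched as a small loop. The file format, the remote names (srca:, dstb:), and the RCLONE_CONFIG_..._IMPERSONATE variable names are all assumptions about your setup:

```shell
# Example mapping file: one "source_user,dest_user" pair per line
cat > users.csv <<'EOF'
alice@domaina.com,alice@domainb.com
bob@domaina.com,bob@domainb.com
EOF

# Dry-run sketch: prints each command; remove 'echo' to actually run it
while IFS=, read -r src dst; do
  RCLONE_CONFIG_SRCA_IMPERSONATE="$src" \
  RCLONE_CONFIG_DSTB_IMPERSONATE="$dst" \
  echo rclone copy srca: dstb: -v
done < users.csv
```

After each pair copies, you could run `rclone check srca: dstb:` with the same two variables set to verify that pair's result.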

I'll look into that thank you.

That option doesn't work for my needs

I did some additional testing and checked the service account; here are my findings.

I created a new service account to make sure it was done properly.

So I did create the service account.
Domain-wide delegation is needed.

Doing it on Linux this time (using my home machine).
So when I try to impersonate, I get this error:

rclone -v --drive-impersonate admin@domaina.com lsf remote:
2020/01/26 13:05:54 Failed to create file system for "remote:": couldn't find root directory ID: Get https://www.googleapis.com/drive/v3/files/root?alt=json&fields=id&prettyPrint=false&supportsAllDrives=true: oauth2: cannot fetch token: 401 Unauthorized

When I paste that link in a browser, here's what I get:

{"error":{"errors":[{"domain":"usageLimits","reason":"dailyLimitExceededUnreg","message":"Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup.","extendedHelp":"https://code.google.com/apis/console"}],"code":403,"message":"Daily Limit for Unauthenticated Use Exceeded. Continued use requires signup."}}

I think it's pretty clear my issue is with authenticating, but I can't find anything anywhere about authenticating.

Other than providing the JSON file, I don't know what else to do.

I don't know what I'm missing at this point

Here's my Rclone Conf along with pics of my configs.

Now I'm just spitballing, but make sure the Google Drive API is actually enabled. That's the one step I don't see in your screenshots.

And also, you blanked out the name of the remote, but you are using whatever text is between the brackets in your config file as the remote name, aren't you? "Failed to create file system" errors can occur if you specify a remote that doesn't really exist; that's why I ask. Although I've never seen one in connection with an authentication error. I'll see if I can replicate this and get back to you.
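To illustrate the bracket point with a made-up example, the remote name is exactly the text between the brackets in rclone.conf, and it's what goes before the colon on the command line:

```ini
# rclone.conf -- "mygdrive" here is just an example name
[mygdrive]
type = drive
scope = drive
```

so the matching command would be `rclone lsf mygdrive:`; any other name before the colon gives a "Failed to create file system" error.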

Yes, absolutely. In regards to the remote, I'm using the proper name.

The only reason I blacked it out is that it's an acronym, and I realized after I wrote it that it could be perceived negatively, so to avoid any drama or misunderstanding I figured I'd black it out.

For the Drive API, it definitely is enabled.

Is this the only API scope I need to enable?
https://www.googleapis.com/auth/drive ? Because it's always only been this one that I add.

EDIT:

OK, GOOD NEWS

After fiddling and creating multiple new remotes and testing, I figured out the issue, but I don't understand WHY it's happening.

The fix:

I've isolated the issue to selecting the type of scope in rclone.

When I use drive.readonly, it gives me the 401 token error.

When I use drive (full access), then it works.

If you're able to tell me why it only works as drive full access, that would be great, but at least I figured out what the issue is.

Thank you very much for your assistance.

I think it’s because you have to match the scope to the access. See here:
https://developers.google.com/identity/protocols/googlescopes#drivev3

If you want to fiddle with it, try changing the scope to https://www.googleapis.com/auth/drive.readonly and see what happens. You should be able to use read only access then.
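In other words, the scope has to line up in both places. Here's a sketch of the read-only variant, assuming the delegation list in the Admin console also contains the read-only scope URL (remote name and key path are examples):

```ini
[remote-ro]
type = drive
scope = drive.readonly
service_account_file = /path/to/sa-key.json
```

If https://www.googleapis.com/auth/drive.readonly isn't among the scopes delegated to the service account's client ID, requesting drive.readonly in rclone should fail with exactly the kind of 401 you saw.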

Sorry, I didn’t realize you were requesting read-only...

Also, I notice you switched from using the environment variable to the --drive-impersonate argument. Does your command still work if you go back to the environment variable, or does that still throw errors? (Curious for my own use.)

I tried using both to see the results. What I realized, though, is that I always have to go back and delete the root file tree from the config every time I use the environment variable. Executing a new "set RCLONE_CONFIG_DOMAINA_IMPERSONATE..." wouldn't work unless the file tree in the config is blank.
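Deleting the cached entry by hand gets old fast. Here's a sketch of automating it, assuming a Linux box with GNU sed and a drive remote named domaina; the file name is a stand-in for your real rclone.conf, which you should back up before editing:

```shell
# Stand-in for the real config, with a cached root_folder_id entry
cat > rclone.conf.example <<'EOF'
[domaina]
type = drive
scope = drive
root_folder_id = 0AAbCdEfGhIjKlMnOp
EOF

# Strip the cached ID before switching to a different impersonated user
sed -i '/^root_folder_id/d' rclone.conf.example
```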

You can use unset instead of set, and then run set again.
Or chain them into one line.
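For example, in a POSIX shell (the Windows cmd equivalent is noted in the comments, since set/unset behave differently there):

```shell
# Clear the old value, then set the new one
unset RCLONE_CONFIG_DOMAINA_IMPERSONATE
export RCLONE_CONFIG_DOMAINA_IMPERSONATE="user2@domaina.com"

# Or as a one-shot prefix that applies to a single command only
# (dry-run sketch: remove 'echo' to really invoke rclone)
RCLONE_CONFIG_DOMAINA_IMPERSONATE="user3@domaina.com" echo rclone lsf domaina:

# Windows cmd has no 'unset'; clearing is done by assigning nothing:
#   set RCLONE_CONFIG_DOMAINA_IMPERSONATE=
```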

What I noticed, though, is that when I use the set command, the config doesn't get updated with the root folder ID.

It only updates the root folder ID when I run a command against the remote afterwards.

When I tried the unset command, it said that it's not recognized as an internal command.

I'm also having an issue with the move command, which is basically the second step of this thread:

Since there was a similar thread, I simply added to it.

If you think you can help, let me know.

Is using rclone mount an acceptable option?

You could:

mkdir -p ~/rclone
rclone mount remote: ~/rclone -v --daemon
cd ~/rclone
mkdir testmove
mv * testmove

It will throw an error, but it will move the files anyway.

NB:
Try this on a non-production something-or-other before diving in face first. And don't unmount until you see that rclone mount is done doing its thing; it is not instantaneous.
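If you only need to move a specific folder (rather than everything at the root, which rclone refuses as overlapping remotes, and is why the mount trick exists), a plain server-side move may work without mounting at all. Folder names below are hypothetical, and the sketch only builds and prints the command rather than running it:

```shell
# A move within one drive remote is server-side, so nothing is
# re-downloaded. Dry-run: only prints the command for inspection.
cmd="rclone move remote:docs remote:testmove/docs -v"
echo "$cmd"
```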

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.