Google Drive woes

What is the problem you are having with rclone?

I'm having an issue getting rclone to work unattended with Google Drive. Yeah, I've searched
and read all of the messages, and if I hadn't already dropped the annual cost into Google Drive,
I would be checking other providers.

So, the issue. I need to be able to do unattended weekly syncs. Unattended meaning I am
unable to copy/paste with my Windows box to get the OAuth info. I need a solution that can
run on its own for weeks and months at a time.
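
(Concretely, what I'm aiming for is a cron entry along these lines - a sketch; the paths and remote name are placeholders:)

# weekly sync, Sundays at 3am, with no human in the loop
0 3 * * 0 /usr/bin/rclone sync /path/to/data GDCONFIG:backup --log-file /var/log/rclone.log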

Work done:

I first set it up, did the initial config, and did a sync. Since the smaller of the folders I need to
back up is only about a gig, it worked perfectly. I paid Google their money and got me some
disk space. At this point it was like 2am, so I went to sleep figuring I'd do the rest in the morning.

Next morning, I try to do a sync so I can see the incremental run, and boom, I need to reauthorize.
What? The docs said only the initial setup would need manual auth. Well, crud. So I looked for ways
around this, googled a bunch of stuff, and decided to try a service account. I set up a service
account, downloaded the JSON key, and configured it.
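
(By "configured it" I mean a remote roughly like this - a sketch; the remote name is a placeholder:)

[GDSVC]
type = drive
scope = drive
service_account_file = ~/svcaccount.json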

Yeah, you guys know the situation here; I've seen it in a bunch of posts here. Service accounts
have their own disk locations, separate from all of the disk I just added to my Google Drive. I
tried this, I tried that, I danced around a bonfire under the full moon, and nothing worked. I DID
see some posts about sharing a folder, but it had its own caveats and sadly won't work for my
situation. While the uploads worked, they went into a drive unique to the service account that
I cannot view or even upgrade with larger disk. This config could work for me if I could actually
upgrade that service account's disk. But I digress.

OK, so the question is: has anyone found a way around this? Someone mentioned a program
called Duplicati, which apparently gets around this using a delegated OAuth service (their
service does the OAuth, probably with Selenium or something, and sends it back to you, all
from the command line).

So, bottom line: am I boned here with Google Drive? If so, that's fine; I can switch to another
provider. BUT the communities I am backing up have their own budget, and it's already pretty
maxed. $20 once a year shared by the few communities is doable, while a monthly fee is not
so much. Sad thing is I only NEED like 150 GB.

Thoughts?  Ideas?  Flames, even?   I'm open to any input.

TL;DR:
Need unattended backups from a headless server, with no manual OAuth on a headed box. Google
Drive SEEMS to be a dead end. Need 150 GB for a similar price to Drive, but from a provider that
will work with these requirements. Any input appreciated.

Run the command 'rclone version' and share the full output of the command.

# rclone version
rclone v1.61.1
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0-1160.el7.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I've tried a bunch, but this is what works and what I've been using:

rclone sync -P  ~/t GDCONFIG:/test/

The rclone config contents with secrets removed.

[GDCONFIG]
type = drive
client_id = *redact*
client_secret = *redact*
scope = drive
token = *redact*
team_drive = 
service_account_file = ~/svcaccount.json

A log from the command with the -vv flag

# rclone sync -P  ~/t GDCONFIG:/test/ -vv
2023/02/05 07:36:06 DEBUG : rclone: Version "v1.61.1" starting with parameters ["rclone" "sync" "-P" "/root/t" "GDCONFIG:/test/" "-vv"]
2023/02/05 07:36:06 DEBUG : Creating backend with remote "/root/t"
2023/02/05 07:36:06 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/02/05 07:36:06 DEBUG : Creating backend with remote "GDCONFIG:/test/"
2023/02/05 07:36:07 DEBUG : Google drive root 'test': 'root_folder_id = 0AOEZS-4rBBCwUk9PVA' - save this in the config to speed up startup
2023/02/05 07:36:07 DEBUG : fs cache: renaming cache item "GDCONFIG:/test/" to be canonical "GDCONFIG:test"
2023-02-05 07:36:07 DEBUG : Google drive root 'test': Waiting for checks to finish
2023-02-05 07:36:07 DEBUG : servers/my.cnf: Size and modification time the same (differ by -470.852µs, within tolerance 1ms)
2023-02-05 07:36:07 DEBUG : servers/my.cnf: Unchanged skipping
2023-02-05 07:36:07 DEBUG : Google drive root 'test': Waiting for transfers to finish
2023-02-05 07:36:07 DEBUG : Waiting for deletions to finish
2023-02-05 07:36:07 INFO  : There was nothing to transfer
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         0.7s
2023/02/05 07:36:07 INFO  : 
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         0.7s

2023/02/05 07:36:07 DEBUG : 6 go routines active

Sorry, I don't know why that all didn't word-wrap :frowning:

hello and welcome to the forum,

in the debug log, there were no issues or errors?
can you post the full debug log from when the problem happens?

GDCONFIG looks a bit strange, as it has both a
service account file and a client_id in the same remote.

if you look inside the service account file, you will see it has its own client_id.
i would remake the remote using rclone config and choose one but not both:
service account file or client_id.

a gdrive remote using a client_id will work fine. if the token expires, rclone knows that and will ask gdrive for a new token. nothing for you to do.
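
for example, a client_id-only remote would keep just these keys and nothing from the service account (a sketch, values redacted):

[GDCONFIG]
type = drive
client_id = *redact*
client_secret = *redact*
scope = drive
token = *redact*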

Hi asdffdsa, nice palindrome :slight_smile:

And thanks for the welcome.

So, first, there are no issues in this run. I have it transferring. And I had it transferring back when I was only using client_id and secret.

Issue is, after doing a full sync and going away for 8 hours, the token was expired and it required me to do the full "use my Windows PC" authentication dance again. Reading other posts, I've read there is no way around this. It will always need some manual intervention on headless hosts. I would LOVE to be wrong here.

And the config in the file, yeah, that's because I did the OAuth and then added the service file without getting rid of the other credentials. Didn't stop it from working, and I flipped back and forth several times during my testing. I just left it like this because it didn't seem to matter. If I had a service account in there, it just worked, with or without the client_id/secret.

On your last comment:

a gdrive remote using a client_id will work fine. if the token expires, rclone knows that and will ask gdrive for a new token. nothing for you to do.

I've had to do the "reauthorize" dance 3 times in 3 days when only having client_id/secret in the config. That's when I started looking into service accounts. Is there some setting I missed to stop it from doing that?

well, headless host is not the issue.

i would start over, follow the rclone docs, create a new client id in gdrive, create a new remote.
post the redacted config and post a debug log when the issue happens.

the fact that you have the issue, perhaps every 24 hours, is an important hint.
when you created the project at gdrive, did you choose internal or external?
did you publish the app, put it into production, or what?

24 hours is not the limit. The first time it was from about 2am until about 9am, so 7-ish hours, and it was different every day.

External.

Yes, I put it in production.

I did this once already, and yes, I followed the docs both times. But I'll do it again if that's what it takes (ignoring the definition of insanity).

One more thing. The rclone docs get you close, but I'm guessing the on-screen options have changed since they were written, and the docs are more of a "loose guide" than an accurate walkthrough, at least on the Google developer console end of things.

yeah, the docs can be confusing; gdrive does change things without notice.
what is not clear?
what do you think might have changed at the google developer console?

not sure what your backup and recovery plan is?

for backups, i use aws deep glacier, approx. $1.00 per 1000 GiB per month.
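
a sketch of what such a remote might look like, using rclone's s3 backend with the deep archive storage class (the key values are placeholders):

[glacier]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
storage_class = DEEP_ARCHIVE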

Ideally, your actual problem gets resolved, but worst case you can always just do something like rclone about GDCONFIG: every hour or so via a cron job to keep the token fresh.
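
A sketch of that cron entry, assuming rclone lives at /usr/bin/rclone and the remote is still named GDCONFIG:

# hourly, a cheap API call that forces a token refresh if one is needed
0 * * * * /usr/bin/rclone about GDCONFIG: >/dev/null 2>&1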

I'd need to go back through. Things that seemed, from the instructions, to all have been one flow are now in a few different places instead of on the same page. If I go through the flow again, I'll take better note of what's different.

I've been asked to provide one, but not given funds to do so. And that price is really good for Glacier. This is disaster recovery we're talking about here.

Not a bad idea, I'll keep that in mind!

correct, though glacier has a number of caveats; it is not a solution for all types of backup policies.

Duplicati has problems of its own. It uses a local database to represent the state of a remote filesystem. I have found that it is extremely easy to get the two out of sync and costly to fix. Just my two cents. Go and try it out. I think you'll gain an appreciation for rclone's design, like I did.

Have you considered Backblaze B2 buckets, which support an rclone interface?
I am not a Linux person - mostly Windows - but I managed to get it all set up with the help of Google and this forum.
I remember I had to do some once-off authentication stuff with app keys etc. - https://help.backblaze.com/hc/en-us/articles/1260804565710-Quickstart-Guide-for-Rclone-and-B2-Cloud-Storage

I have around 200 GB backed up with them, using rclone scripts which I just run once a week or so.
The scripts are non-interactive so they could easily be automated - I just have not got around to it.
As my files don't change that much, I typically pay $1-$2 a month.
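
(In case it helps, the kind of script I mean is just a plain non-interactive rclone call - a sketch; the remote name, bucket, and path here are made up, and the B2 app key lives in rclone.conf so nothing interactive is needed:)

rclone sync D:\Data b2remote:my-bucket/backup --log-file rclone-b2.log -v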

Are you impersonating the user whose My Drive you want to connect to?

[Source]
type = drive
scope = drive.readonly
service_account_file = C:\Temp\key.json
impersonate = user@example.org
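
(Note that impersonation requires the service account to be granted domain-wide delegation in the Google Workspace admin console. You can also pass it per run for testing; a sketch using the remote above:)

rclone lsd Source: --drive-impersonate user@example.org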
