Edit service account while mounted

No, that won't work. Changing the auth config requires rclone to re-authenticate with Drive, which only happens at startup.

It might be possible to make a backend tool to do that, i.e. some Google Drive backend-specific code which triggers a re-authentication. You'd probably have to pass the new config in that call, so something like rclone backend reauth drive: -o service_account_file=/path/to/json

I'm pretty sure changing it while mounted is working, because otherwise I'd be running into issues every day.

Even using the union remote, they all use the same service account, so even if the union were doing some kind of load balancing by default, it wouldn't work.

It is? If you just changed the JSON file and not its path I could see that working, but I don't see how the config could get from the config file into the drive backend without restarting the drive backend.

I just run this command:

rclone config update $REMOTE service_account_file $JSONDIR/$COUNT.json

As of right now, that is correct: it will not update the mount when the remote is updated that way. We have tested this many times, and that is why the script you pulled that from includes a line to restart the mount. At the moment that is the only way I can figure out to update the mount with a new SA, unless ncw comes up with something like the backend tool he mentioned; until then, you have to restart the mount to see any effect from this.
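
For reference, the whole rotation boils down to something like this (a minimal sketch; the key directory, the count logic and the systemd unit name are placeholders for whatever your setup uses):

```bash
#!/usr/bin/env bash
# Sketch of the update-then-restart rotation described above.
# $REMOTE, $JSONDIR and $COUNT mirror the rclone config update command
# earlier in the thread; the unit name is just an example.

REMOTE="gdrive"
JSONDIR="/opt/sa-keys"
COUNT=$(( ($(date +%H) % 5) + 1 ))   # e.g. cycle through 1.json .. 5.json

# Point the remote at the next service account file...
rclone config update "$REMOTE" service_account_file "$JSONDIR/$COUNT.json"

# ...but the running mount won't pick it up until it is restarted.
systemctl restart rclone-mount.service
```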

If restarting the mount causes issues for applications using it, that's not an option for me :confused:

Yeah it sucks but at the moment there is no way to do this sadly.

What do you mean @sunnywilson09?

It might be possible to do a failover-type setup with mergerfs, since it allows changing options on a live mount. I saw this and am also working on adapting it to trade off between multiple gdrive rclone mounts: https://github.com/Cloudbox/Community/wiki/Configure-Rclone-VFS-automatic-failover-To-Plexdrive
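
For context, that wiki setup boils down to a periodic control-file read, roughly along these lines (paths are placeholders and the real script differs in the details):

```bash
# Sketch of the control-file idea: every minute, try to read a small canary
# file through the rclone mount and fail over when the read stops working.
CONTROL=/mnt/rclone-gdrive/.control-file

if ! timeout 30 head -c 1 "$CONTROL" > /dev/null 2>&1; then
    echo "control file unreadable, switching to the fallback mount" >&2
    # ...whatever failover action your setup uses goes here (remount, repoint, etc.)
fi
```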

Won't that use a lot of api hits checking the file every minute?

I don't think that's a clean solution to this issue.

API hits are really never an issue as the project gets 1 billion per day.

If you did 10 per second for every second of the day, you'd only get about 860,000, which would be a hefty feat in itself as that's the most a single user can do with the default user quota.

Most of those are dir-cached hits anyway so no API hit even happens.

MergerFS allows you to add and remove which srcmounts it uses while the mount remains online. I'm trying a situation where I have two rclone mounts for the same remote and a mergerfs combining the two. This way, I can easily remove one of the remotes from the mergerfs mount, change out the service account credential, restart that rclone mount, and then re-add it to the mergerfs mount once it's back up. Since there are two rclone mounts for that mergerfs mount, if I remove one, it still behaves as normal.
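
For anyone wanting to copy this, here's a rough sketch of the swap, assuming mergerfs's runtime config interface (the .mergerfs pseudo-file; user.mergerfs.srcmounts on older releases, user.mergerfs.branches on newer ones) and made-up mount paths and unit names:

```bash
# /mnt/media is the mergerfs mount the applications see;
# /mnt/rclone1 and /mnt/rclone2 are the two rclone mounts behind it.
CTRL=/mnt/media/.mergerfs

# Drop the branch being rotated out of the pool...
setfattr -n user.mergerfs.srcmounts -v '-/mnt/rclone2' "$CTRL"

# ...swap its service account and restart only that rclone mount...
rclone config update gdrive2 service_account_file /opt/sa-keys/next.json
systemctl restart rclone-mount2.service

# ...then add it back once it is up. Applications keep reading through
# /mnt/media the whole time because the other branch stays online.
setfattr -n user.mergerfs.srcmounts -v '+/mnt/rclone2' "$CTRL"
```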

And I'm monitoring the rclone logs to see when I get 403 errors and swapping them off based on that rather than checking if I can read a file.
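
A crude version of that log watch might look like this (the log path and the exact error wording are assumptions; check what your own rclone log prints):

```bash
# Watch one mount's log and flag the remote as bad when Drive starts
# returning download-quota 403s; another script can act on the marker file.
tail -Fn0 /var/log/rclone-mount1.log | \
while read -r line; do
  if echo "$line" | grep -qiE '403.*(downloadQuotaExceeded|download quota)'; then
    touch /run/rclone-mount1.quota-hit
  fi
done
```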

None of the solutions here actually work if the issue you are having is hitting the download quota for the individual files...

So it could be that every file in my remote is unavailable because I hit the download quota, but the control file didn't hit it, so the script would think everything is OK.

Also, it's possible that reading the control file every minute would hit the download quota for that file itself, causing the script to act when there's no need...

We need a cleaner way to do this.

We need to read the logs and detect 403 download quota errors, then switch to another mount.

Or even better, load balance between the mounts so that the 403 error never happens...

This is what I was trying to explain with my solution. It's a passive monitoring system with two identical remotes (with different service accounts). I monitor both logs individually, and when one starts 403ing, I update that remote's config to a new service account and re-add it to the mergerfs. It all happens with the mount staying online because everything is accessed through a mergerfs frontend mount.

But have you automated the log monitoring? Otherwise it's pointless.

Also I still think the best solution is to load balance between the mounts so that issue never happens in the first place.

Even if we don't restart the mergerfs mount, I highly doubt you can just do that and not impact applications already reading data...

You probably don't want to combine the two, but instead point mergerfs at one of the remotes at a time, monitor whether that remote hits a limit, then switch to the other and monitor that one; once that one hits a limit, switch back, and so on. I already tried merging a local path and two remotes and ran into this issue: https://github.com/trapexit/mergerfs/issues/742

The suggestions from the dev appeared to work at first glance, but I got Plex playback issues and trashcans regardless.

The problem is that rclone presents files even when they really aren't accessible. mergerfs policies work by using 'stat' to find the file and then the filesystem function in question is called on that. rclone still presents the file and the file is stat-able but then the open fails. mergerfs doesn't retry a different branch on primary function failure.

Of course mergerfs could be rewritten to always retry everything on all possible branches but I don't think that's a good policy. It'd possibly hide failing drives. It'd also require a significant rewrite. The problem isn't just open. In this case the quota AFAIU can kick in during reads so you'd have to retry reads too and that's a different can of worms.

There are a couple of things that could be done on the rclone side to address this as well. It could fail stats. If you still want files to show up, then it could allow readdirs but just fail stats. Or fail both. Once the quota is hit, just error out all readdirs and stats.

@ncw How difficult would it be to have that as an option? Have it so that on a quota error it'd just return errors for some timeout period?

Regarding the issue you found. It'd be helpful if you reported problems back to the ticket. What playback issues? What trashcan issues? Are you talking about playback issues because the quota got filled and rclone returns an error to mergerfs?

The problem is a bit unique with Google Drive and their API to my understanding.

The 403 errors only come up when you try to download a file; stat just lists the file, so that works. I believe the first does a drive.files.get (download) while the second just does a drive.files.list against the API.
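
To illustrate the difference against the raw Drive v3 API (ACCESS_TOKEN, FILE_ID and the file name are placeholders):

```bash
# Metadata listing -- roughly what directory listings/stat translate to;
# this keeps working even when a file's download quota is exhausted:
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/files?q=name+%3D+%27movie.mkv%27&fields=files(id,name,size)"

# Content download (files.get with alt=media) -- this is where the
# 403 downloadQuotaExceeded error appears once the per-file quota is hit:
curl -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/files/$FILE_ID?alt=media" -o /dev/null
```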

Ah. In that case I guess it'd be more work on the rclone side to manage it.

Perhaps I could simply create a custom policy that used open instead of stat to find the file. @ncw ... when a file is opened what does rclone do? What I'm getting at is... how expensive would it be if mergerfs was crawling across drives using an open and close pair to find a viable file to use? That wouldn't help with reads cut off by quota (does that happen?) but it'd address this immediate issue.