I’m trying to set up a Google Drive remote with a service account as described here. The only difference is that I want to use the https://www.googleapis.com/auth/drive.appfolder scope instead of the https://www.googleapis.com/auth/drive one, and it doesn’t work.
If I put https://www.googleapis.com/auth/drive.appfolder in the “One or More API Scopes” field (last point under “2.” in the description) and use this config:
[drive-appfolder]
type = drive
scope = drive.appfolder
service_account_file = <path to json-file>
root_folder_id = appDataFolder
I get this error:
$ rclone -v --drive-impersonate <email> ls drive-appfolder:
2019/01/05 12:59:12 Failed to ls: couldn't list directory: Get https://www.googleapis.com/drive/v3/files?alt=json&fields=files%28id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%29%2CnextPageToken&pageSize=1000&prettyPrint=false&q=trashed%3Dfalse+and+%28%27appDataFolder%27+in+parents%29&spaces=appDataFolder: oauth2: cannot fetch token: 401 Unauthorized
Response: {
"error": "unauthorized_client",
"error_description": "Client is unauthorized to retrieve access tokens using this method."
}
If, on the other hand, I put https://www.googleapis.com/auth/drive in the “One or More API Scopes” field and use this config:
[drive]
type = drive
scope = drive
service_account_file = <path to same json-file>
then everything works as expected.
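That is, the equivalent of the failing command above (same flags, just pointed at this remote) succeeds:
$ rclone -v --drive-impersonate <email> ls drive: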
Hmm, I suspect --drive-impersonate isn’t compatible with the appfolder scope. I tried searching for the combination of the two, but I couldn’t find anything.
At the moment, I need to choose between a) using a service account with the drive scope and having rclone data exposed via the web interface and b) using the OAuth 2 token flow with the drive.appfolder scope and not having rclone data exposed via the web interface.
Neither option is ideal, but (unless you have other ideas) I’ll probably stick with a) since all of my rclone data is encrypted so having it exposed via the web interface is more of a nuisance than a potential security threat…
Wow! I’m impressed. I just tried the linux/amd64 version and both of the above configs now work as expected with their respective scopes. Excellent. Thank you very much.
When do you expect an official release with this fix to be out?
Anyway, with this config (and https://www.googleapis.com/auth/drive.appfolder,https://www.googleapis.com/auth/drive as API scopes):
[drive]
type = drive
scope = drive,drive.appfolder
service_account_file = <path to json-file>
root_folder_id = appDataFolder
impersonate = <email>
cleanup now works – which is to say, it works like the Empty trash button on the web interface in that it removes both app-specific and non-app-specific trash. I actually think this behaviour is a bit dangerous, but that is, of course, out of your hands. (The Empty trash button on the web interface may remove things that are not actually shown in the Trash folder on the web interface! At least with rclone you can fiddle with the root_folder_id and trashed_only options and SEE both types of trash, as shown below.)
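For reference, the kind of fiddling I mean looks roughly like this (remote name as above; which trash you see depends on whether root_folder_id = appDataFolder is set in the config):
$ rclone cleanup drive:                    # the "cleanup" referred to above; empties BOTH kinds of trash
$ rclone lsl drive: --drive-trashed-only   # with the config above, lists the app-specific trash
Running the same listing with the root_folder_id line removed shows the ordinary (non-app) trash instead.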
One last thing…
If I create a folder, say test7, via the web interface, and use this config:
[drive]
type = drive
scope = drive,drive.appfolder
service_account_file = <path to json-file>
impersonate = <email>
rclone shows the folder as both trashed and non-trashed! Surely this must be a bug in rclone?
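To be concrete, I mean that test7 shows up both in a plain listing and in a trashed-only listing, e.g.:
$ rclone lsd drive:
$ rclone lsd drive: --drive-trashed-only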
Rclone shows all folders in trashed-only mode; otherwise you wouldn’t be able to see the files underneath them. So you can have a non-trashed folder with trashed files in it.
That is the reason why…
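For instance (names are illustrative), deleting the files in a folder sends them to the trash by default but leaves the folder itself un-trashed:
$ rclone mkdir drive:keepme
$ rclone copy /tmp/file.txt drive:keepme
$ rclone delete drive:keepme                      # the file goes to the trash; keepme itself does not
$ rclone lsl drive:keepme --drive-trashed-only    # shows the trashed file under the non-trashed folder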
I’m not particularly happy with the --drive-trashed-only interface but I couldn’t think of anything better!
Well, it confused me, so I’m not particularly happy with it either.
Maybe --drive-trashed-only=true should only include a non-trashed folder if it has one or more trashed descendants (files or folders). This would probably make operations more expensive (I guess you’d need an initial depth-first traversal of the filesystem tree to mark the relevant nodes), but at least --drive-trashed-only=true would work as advertised: each (leaf) file or folder processed would actually be “trashed-only”.
A file system traversal would make it super expensive to run, as directory traversals are really slow in Drive.
If you could build a query using the v3 API query language it could be made fast, but I don’t think it is possible to express relationships of more than one level…
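For example (FOLDER_ID is just a placeholder), a single level is easy to express:
trashed = true and 'FOLDER_ID' in parents
but as far as I know there is no way to ask in one query for folders which merely contain trashed items somewhere further down.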
$ rclone version
rclone v1.49.0
- os/arch: linux/amd64
- go version: go1.12.9
Any way to have multiple, independent appDataFolders? I'd like to dedicate one appDataFolder to a restic backend and use another appDataFolder for everything else. Using different client IDs/secrets doesn't help, e.g., with this config:
Hmm, my understanding is that you are doing the right thing... How about removing the impersonate line - if you are just wanting app folder access you shouldn't need that.
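i.e. something like the appfolder config from earlier, just without any impersonation:
[drive0]
type = drive
scope = drive.appfolder
service_account_file = <path to json-file>
root_folder_id = appDataFolder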
i.e., two UUIDs, so as not to give up any metadata…
Anyway, I mount crypt0 with "rclone mount" and access drive0:84260d93-85b9-4b15-9ae8-49537b99130b exclusively from restic ("restic --repo rclone:drive0:84260d93-85b9-4b15-9ae8-49537b99130b"). Does this seem like a reasonable approach? Any potential issues with caching of crypt0 and concurrent restic access to drive0:84260d93-85b9-4b15-9ae8-49537b99130b?
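To spell the setup out (the mount point and backup path are just examples):
$ rclone mount crypt0: /mnt/crypt0 &
$ restic --repo rclone:drive0:84260d93-85b9-4b15-9ae8-49537b99130b init
$ restic --repo rclone:drive0:84260d93-85b9-4b15-9ae8-49537b99130b backup /home/me/docs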
Also, one other thing. I would like to run "rclone mount crypt0:" with the "--poll-interval=0" option, but if I do that and then run something like "rclone rc sync/move /tmp/dir crypt0:" against the rclone instance doing the mount, then the contents of /tmp/dir disappear from the mount's point of view until I explicitly run "rclone rc vfs/refresh".
This behaviour is rather annoying if you're using an @Animosity022-like setup with mergerfs and moving a dir from "local" to "remote". It seems to me that by running the "move" command through "rc", the rclone instance doing the mount has all the information it needs to update the directory cache correctly. Am I wrong?
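(For reference, spelled out with key=value parameters, the calls I'm making look roughly like this, where "dir" stands for whatever directory is being moved:)
$ rclone rc sync/move srcFs=/tmp/dir dstFs=crypt0:dir
$ rclone rc vfs/refresh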
That sounds fine. The directories don't overlap so there won't be any trouble with concurrent access.
You can give a path to vfs/refresh to make that nice and efficient.
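e.g. something along these lines, with the path being whatever directory was just moved:
$ rclone rc vfs/refresh dir=path/to/dir recursive=true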
You would have thought so, but unfortunately rclone isn't that clever yet! The move is effectively independent of the mount: it doesn't go through the VFS layer, so it doesn't update what the mount is seeing.