So I started a sync with Google Drive using the drive.file scope. Since client_id/client_secret weren’t required, I left them blank. After some research, and log files filled with rate-limit errors from the shared global client_id, I created my own. The sync continued happily and recognized the existing files.
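For reference, the remote I ended up with looks something like this in rclone.conf (a sketch: the remote name gdrive and the credential values are placeholders, not real ones):

```
[gdrive]
type = drive
scope = drive.file
client_id = 1234567890-abc.apps.googleusercontent.com
client_secret = your-client-secret
```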
Later on I decided to delete some files and move things around with rclone move, and got these messages on some files:
2018/09/04 22:26:23 ERROR : Attempt 3/3 failed with 1 errors and: googleapi: Error 403: The user may not have granted the app #### write access to all of the children of file xxxx., appNotAuthorizedToChild
Eventually I found that if I go and remove my client_id/secret, I could move those files just fine.
In other cases I found that if I copied a file without a client_id, I couldn’t see it with my client_id set. So I’m not quite sure how this all works: after I changed my client_id, my rclone sync appeared to see files that were unchanged (I run with verbosity set to DEBUG), but I’m not positive anymore. One of the folders I did a
Nevertheless, is there any way to transfer access on these older files so that my client_id has full permissions on them?
Or is everything completely invisible from one client_id to the other with the drive.file scope set? If so, can I do a full rclone purge with the blank/global client_id without affecting my personal client_id (which is now fully synced), given that the folder names are the same? And if that’s the case, how does the write-access error occur?
Also, if that were the case, wouldn’t my personal client_id see its own folder under that name and have full write access to everything underneath it? Why does that error even come up? It shouldn’t be able to see the global client_id’s folder under that namespace at all.
I’d recommend adding some notes about this to the client_id/scopes section of the docs.
The drive.file and drive.appfolder scopes only grant permissions to the app (client) that created the files. Apps are identified by their client_id and client_secret, so by changing these values you effectively turned rclone into your own app, which does not have the permissions required to access the files created before.
If I read the Google documentation correctly, the drive.file scope uses the same directory root as the drive scope, so both apps might see the same folders but different files, depending on the permissions.
Have you checked in the Drive web interface where your files are placed? The web interface has access to all files. You might also be able to change the permissions there to allow the new client_id to access the files.
If the drive.file scope is based not on permissions but on the creating app (as shown in the web interface), there is probably no way to transfer access to the new app, besides uploading the files again with the new client_id.
I will experiment with the drive.file scope later today, since I never tried it before.
Yeah, it seems obvious to me now, but I think the client_id section should include a sentence or two about those scopes being tied to it, with no way to change that afterwards.
Yeah, I chose drive.file specifically so I could see everything in the web interface, and I do see two folders with different files in each one. It’s still not quite clear why I got errors when one client_id tried to delete the other’s files; it shouldn’t even be able to see them, should it?
Ah, so all folders have read access? Perhaps that explains the permissions error: it was able to delete all the files and folders belonging to one client_id, but through the same namespace it could see the other client_id’s folder and couldn’t remove it.
It would be nice if there were a way to do this through a global permissions web interface, though I guess the reason I chose this scope is that, if all else fails, I can create a remote with global permissions that can write to everything.
Out of curiosity, is there a way for rclone to access a Drive folder by folder_id instead of by namespace?
I ran some tests with the rclone client_id and my own. rclone can definitely only see the files and folders that were initially created by the matching client_id, so it isn’t based on the share permissions and probably cannot be changed afterwards.
A little documentation is a good way to keep other people from getting bitten by this. I think the explanation fits better in the scopes section, since it only affects drive.file and drive.appfolder and not the other scopes.
I can’t explain this either. After changing the client_id, rclone should not have seen those files.
No, my guess there was wrong. I got multiple folders with the same name when trying to copy files into an existing folder. Folders created by the wrong client_id are hidden.
You can limit rclone by setting the root_folder_id parameter for your drive remote, but there is no way of enforcing this using scopes or permissions. The access token is still valid for the full scope.
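For example, pinning the remote to a single folder can be done in the config (a sketch; the remote name and the folder id are placeholders):

```
[gdrive]
type = drive
scope = drive.file
root_folder_id = 0ABCdefGHIjklMNOPqrstuv
```

The same thing works per-invocation with the backend flag, e.g. `rclone lsd --drive-root-folder-id 0ABCdefGHIjklMNOPqrstuv gdrive:`.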
So I’m still dealing with lingering effects of this. There are some folders/files within my crypt folder that my personal client_id doesn’t have write access to, as before. Since then I’ve removed the entire root folder of the generic client_id (side note: I emptied the trash, but when refreshing it still shows some files; I presume this will go away at some point). Before that, I was able to switch to the generic rclone client_id and run the purge command on that same namespace, and then both the generic client_id and my personal client_id were happy with the purging.
Before I purge these folders with a full-write-access scope, is there any debug information I can gather to try to track down how this occurs and how many files are affected?
The errors are the same as above and reference children of an encrypted file. But when I search for that file in the Google Drive web interface, I can’t find it. The error again, for reference:
2018/09/06 17:30:18 ERROR : Attempt 3/3 failed with 1 errors and: googleapi: Error 403: The user may not have granted the app 2870356##### write access to all of the children of file 186l5vzHWicvseGqiXrF3MPwORMVXXXX., appNotAuthorizedToChild
2018/09/06 17:30:18 Failed to purge: googleapi: Error 403: The user may not have granted the app 287035#### write access to all of the children of file 186l5vzHWicvseGqiXrF3MPwORMXXXX., appNotAuthorizedToChild
I was hoping to be able to search for 186l5vzHWicvseGqiXrF3MPwORMXXXX, find that it is owned by rclone (instead of my personal client_id, which I’ve named rclone-personal), remove it myself, and iterate through.
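For iterating through, one way to collect the ids is to pull them out of the 403 messages and look each one up in the web interface or API Explorer (a sketch, assuming the errors were saved to a file named rclone.log):

```shell
# Extract the unique parent-file ids from appNotAuthorizedToChild errors:
grep -oE 'children of file [0-9A-Za-z_-]+' rclone.log | awk '{print $4}' | sort -u
```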
I found some posts like this when searching for the error message, but it’s not clear whether this error is intended or not.
It sounds like the drive.file scope cannot delete all files, but I could not replicate this.
I tried uploading files using the drive scope and updating or deleting them with the drive.file scope, and vice versa, but this all worked fine with no errors for me. This was all done using the original rclone client_id. When using the drive.file scope, I still couldn’t see any files or folders created by my own client_id.
To debug problems like this you could use the --dump bodies flag to get the raw API responses, and check the ids in the web interface or API Explorer to get specific file attributes. The “created by app” information seems to be visible only in the web interface.
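As a concrete sketch (the remote and path are placeholders), rclone writes its log output to stderr, so the dump can be captured like this:

```shell
# List the problem path while dumping the raw HTTP request/response bodies:
rclone lsf -vv --dump bodies gdrive:problem-folder 2> dump.log
```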
Can you replicate this error with a set of rclone commands so I can test this myself?
So while I couldn’t get purge to succeed with my personal client_id, just running delete with that same client_id appears to work (though it took a long time). It left empty directories behind, and running rmdirs a few times eventually cleaned them up (the early runs didn’t report errors, they just didn’t finish the job). It reported lots of “contains trashed file” messages, as emptying the trash after a delete takes a while.
In the end, it appears everything in that folder could be deleted; it’s just that purge didn’t work, while delete + empty trash + (multiple) rmdirs eventually did.
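For anyone hitting the same thing, the sequence that eventually worked for me, spelled out (the remote and folder names are placeholders):

```shell
rclone delete gdrive:folder   # removes the files but leaves the directory tree
rclone cleanup gdrive:        # empties the trash (may take a while to show in the web UI)
rclone rmdirs gdrive:folder   # removes the now-empty directories; may need several runs
```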
So perhaps it has nothing to do with client_ids, and more to do with the issues you found when searching (but haven’t reproduced yet). This folder had lots of nested folders, but I’ll try to distill it down to a simple reproducible scenario when I have time.