Google Drive: Delete Shared Folders

What is the problem you are having with rclone?

Hello, I'm new to rclone and I'm struggling to figure out if there's a way to accomplish a task that I initially thought would be easy, but it seems it may not be as straightforward as I'd hoped.

The high level TLDR:
Use shared folders in Google Drive and "rclone sync" them to a Docker container. Then process the files and remove them from Google Drive using "rclone delete" when the container is given the command.

The details:
I'll start with the folder structure of my Google Drive (which I'm the account owner of).
I have a folder that I created called "users" which exclusively contains different shared folders (each named with a uuid) that I've created for authorized users to sync content into. The content of any given user's folder will be more folders, each containing one or more files to process.

Example:

users/              (I own this)
├─ 123456789/       (I created this and shared it)
│  ├─ Boss1/        (The user will have their Google Drive desktop client create these and their respective files)
│  │  ├─ log1.evtc  (These files and folders will trickle in over 3hrs and then be processed and deleted on command)
│  ├─ Boss2/
│  │  ├─ log1.evtc
├─ 987654321/       (I created this and shared it)
│  ├─ Boss3/
│  │  ├─ log1.evtc
│  │  ├─ log2.evtc

This users folder is being sync'd to my Docker container using "rclone sync", driven by this container-specific config file:

{
	"defaultSource": "/",
	"defaultSyncOptions": "",
	"syncInterval": 10000,
	"remotes": [
		{
			"name": "google-drive",
			"source": "/DiscordBots/Blaze/arcdps",
			"destination": "/arcdps",
			"syncOptions": "--create-empty-src-dirs"
		}
	]
}
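
For reference, a plain rclone command roughly equivalent to what that config presumably drives would look like this (the remote name, paths and flag come from the config above; the exact command the container wrapper builds is an assumption):

# Hypothetical sketch of the sync the container runs every syncInterval
rclone sync google-drive:/DiscordBots/Blaze/arcdps /arcdps --create-empty-src-dirs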

So far, I have all this working.

Where I get stuck is when I want to "rclone delete" those "Boss" folders and their contained files so I'm down to just the shared folders again.

Desired final state after deletion:

users/
├─ 123456789/
├─ 987654321/

When I attempt the command in the section below I get insufficientFilePermissions.

I've considered the idea of something like "rclone move" or "rclone bisync", but I don't know if there is a workflow that would yield quite the same desired behavior.

The theorized workflow looks like this:
A player has a third-party logging tool on their PC that automatically creates these Boss folders and their respective logs after any given attempt at the fight (accumulating all successful or failed attempts) at a location which they will configure to be their assigned shared folder. This will be sync'd to my Google Drive by their Google Drive desktop client. My Docker container will poll my Google Drive for changes every few seconds/minutes and sync the content of all the users folders into the container. The container will then be given a processing command and upload those logs to a different third-party service (again, to this point everything is working). Once uploaded, the contents of the user's shared folder should be deleted (this is where I get the permission error).

Run the command 'rclone version' and share the full output of the command.

root@d591ecb8ed6c:/# rclone version
rclone v1.61.1
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 5.10.16.3-microsoft-standard-WSL2 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone delete google-drive:/DiscordBots/Blaze/arcdps/users/123159732146929666 --rmdirs

The rclone config contents with secrets removed.

[google-drive]
type = drive
client_id = ...
client_secret = ...
scope = drive
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"2022-12-19T05:27:55.9934681-06:00"}
team_drive = 

A log from the command with the -vv flag

root@d591ecb8ed6c:/# rclone delete google-drive:/DiscordBots/Blaze/arcdps/users/123159732146929666 --rmdirs -vv
2023/01/12 04:00:00 DEBUG : rclone: Version "v1.61.1" starting with parameters ["rclone" "delete" "google-drive:/DiscordBots/Blaze/arcdps/users/123159732146929666" "--rmdirs" "-vv"]
2023/01/12 04:00:00 DEBUG : Creating backend with remote "google-drive:/DiscordBots/Blaze/arcdps/users/123159732146929666"
2023/01/12 04:00:00 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/01/12 04:00:00 DEBUG : Google drive root 'DiscordBots/Blaze/arcdps/users/123159732146929666': 'root_folder_id = 0AP-aMbxMsFVIUk9PVA' - save this in the config to speed up startup
2023/01/12 04:00:01 DEBUG : fs cache: renaming cache item "google-drive:/DiscordBots/Blaze/arcdps/users/123159732146929666" to be canonical "google-drive:DiscordBots/Blaze/arcdps/users/123159732146929666"
2023/01/12 04:00:01 DEBUG : Waiting for deletions to finish
2023/01/12 04:00:02 ERROR : Slothasor/20230109-230034.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:02 ERROR : Slothasor/20230109-225533.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:02 ERROR : Slothasor/20230109-225201.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:02 ERROR : Attempt 1/3 failed with 4 errors and: failed to delete 3 files
2023/01/12 04:00:02 DEBUG : Waiting for deletions to finish
2023/01/12 04:00:03 ERROR : Slothasor/20230109-225533.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:03 ERROR : Slothasor/20230109-230034.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:03 ERROR : Slothasor/20230109-225201.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:03 ERROR : Attempt 2/3 failed with 4 errors and: failed to delete 3 files
2023/01/12 04:00:03 DEBUG : Waiting for deletions to finish
2023/01/12 04:00:04 ERROR : Slothasor/20230109-225201.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:04 ERROR : Slothasor/20230109-225533.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:04 ERROR : Slothasor/20230109-230034.zevtc: Couldn't delete: googleapi: Error 403: The user does not have sufficient permissions for this file., insufficientFilePermissions
2023/01/12 04:00:04 ERROR : Attempt 3/3 failed with 4 errors and: failed to delete 3 files
2023/01/12 04:00:04 DEBUG : 8 go routines active
2023/01/12 04:00:04 Failed to delete with 4 errors: last error was: failed to delete 3 files

Hi DragonRulerX,

Very good and detailed explanation!

I understand that the main issue is being unable to delete files in the folders you have shared with others.

I am not an expert in Google Drive, so these are mostly a couple of curious questions to get the ball rolling:

Are you able to delete the files from the Google Drive web site?

Did you consider the possibility that new files may have arrived after your sync?
(this can perhaps be solved with a --min-age filter on your deletion, if duplicate processing is OK)
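
For illustration, that could look something like this (the 15 minute threshold is just an example; --dry-run previews what would be deleted):

# Skip anything newer than 15 minutes so files that are still syncing are left alone
rclone delete google-drive:DiscordBots/Blaze/arcdps/users/123159732146929666 --rmdirs --min-age 15m --dry-run -vv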

Hello Ole,

I'm sorry, I should have addressed your first question in my original post, since I did test that myself before coming here.

Are you able to delete the files from the Google Drive web site?

Yes. The folder created by the shared user (users/<uuid>/Boss) within the shared folder I created for that user (users/<uuid>) can be deleted both ways:

  1. Through the website.
  2. By removing it from my PC's Google Drive folder.

Did you consider the possibility that new files may have arrived after your sync?

If I'm understanding the concern, I believe that is currently expected. Let's say, hypothetically, my crew and I game on Mondays from 8pm-11pm. Prior to that 8pm session I'd expect their users/<uuid> folder to be completely void of content and directories. During the session that folder will accumulate new folders (users/<uuid>/Boss1, users/<uuid>/Boss2, etc.), each containing a log file for every attempt at the boss. The acquisition of that content is driven by their Google Drive desktop client syncing content to their designated shared folder (users/<uuid>) in my Google Drive. When the session concludes I'll command the Docker container to upload their logs. Upon completion of the upload, the container will attempt to delete all of the content of their users/<uuid> folder (leaving just the users/<uuid> folder when done) so it's prepped for the next gaming session.

No problem!

I guessed you already tried, just wanted to be sure.

I have reproduced and can confirm your observations using a plain Google Drive (not a team drive); I also tried with the default rclone client ID. Same result. I think the issue is that the user folders and files created within each user folder are owned by the creator and therefore protected from your (accidental/malicious) deletion.

This is confirmed by this extra observation: when you delete the folders/files on the website they are not really deleted, they are just moved out of the folder owned by you and into the respective owner's "My Drive".

I don't think you can (easily) get around this, and I propose you instead let the files stay in the folder and then only sync the latest files using something like --max-age=3d or --include="**/20230109*"
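
For illustration, the filtered sync could look roughly like this (paths taken from the config earlier in the thread; the 3 day threshold is just an example):

# Only transfer files modified within the last 3 days
rclone sync google-drive:/DiscordBots/Blaze/arcdps /arcdps --create-empty-src-dirs --max-age=3d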

Perfect, just wanted you to be aware - concurrent modification of shared files is difficult and can cause some nasty issues.

I think the issue is that the user folders and files created within each user folder are owned by the creator and therefore protected from your (accidental/malicious) deletion.

I noticed this as well, and the first thing that popped into my head was that I was going to have to get authentication tokens for every user, which I really didn't like (mostly because I figured my users would get skittish about that process, not really knowing what type of access to their Google Drive they'd be giving me).

I ... propose you instead let the files stay in the folder and then only sync the latest files using something like --max-age=3d or --include="**/20230109*"

This is an interesting proposition and it might work for a while, but I'm a bit worried about the accumulation of data over extended time.

A part of the process I didn't mention yet is a man-in-the-middle (MitM) script I'll have to share with my users. It will effectively watch their PC's local log folder (not the one they'll have in Google Drive, but the one for their third-party logging tool) for changes and copy those changes over to the Google Drive folder on their PC. This script is required because some users run other third-party tools that track all of their logs for historic comparisons and, sadly, that tracking references the files on the local PC rather than any cloud-based storage location. Knowing this, I inquired about the size of their log folders and we're talking several GB for some. Multiply that by X users and the default 15 GB of storage provided by Google Drive quickly becomes insufficient.

Therefore, I figured that with this MitM approach I could relay copies of only the pertinent logs needed for the upload command to my Docker container via a shared location (Google Drive) and then promptly delete them from that shared location. This lets users continue to accumulate logs on their local PCs as they please, while still letting me control the accumulation of data in this system.
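
A minimal sketch of that relay step, assuming rclone (or something equivalent) is also available on the user's PC; both paths are made up for illustration:

# Copy only recent logs from the logging tool's folder into the local
# Google Drive folder; the desktop client then syncs them up to the shared folder
rclone copy "/path/to/arcdps/logs" "/path/to/GoogleDrive/123456789" --max-age 3h -v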

Honestly, Google Drive was selected as this staging ground because I figured it'd be easier than standing up my own FTP server. For the most part this works, but this little gotcha, where folders created by others inside folders you own don't respect the top-level ownership when handled through rclone, was not something I expected.

I don't think you can (easily) get around this

I'm open to a more complicated solution if it works, as long as we can avoid standing up my own local FTP server and/or putting any extra burden on my users beyond just giving me their email so I can set access permissions on their shared folder.

What I'd really like is to figure out how to set up the real Google Drive service in a Linux shell, since that would completely bypass the need to use rclone or similar APIs to control the Google Drive folders, but my Linux knowledge isn't quite at that level just yet and (surprisingly) I haven't seen any tutorials online for that. However, I recognize this forum isn't purposed for that kind of help, so I don't want to begin exploring that if we can solve the problem with rclone.

How about making this (MitM) script delete older files from the Google Drive staging folder?
(I guess it executes with the respective user's owner access to the files)

How about making this (MitM) script delete older files from the Google Drive staging folder?

That's not a bad idea, but I'm not sure how it would know whether the file had been transferred to the Docker container before deleting it. There probably is some way, given that Google Drive shows a synchronization status symbol on the icons of files/folders, but I'm not sure how to read that status. If I could read it, then once a file has been marked as synced I could delete it after some interval greater than the interval I have rclone sync running at. I'm assuming that approach may be non-trivial and might need third-party tools installed on the user's PC to work, though?

Perhaps you know that your docker sync (using --max-age or similar) runs at least once per week, and then you can safely delete everything older than a month from the staging folder.
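
Something like this owner-side cleanup, run by the MitM script under the user's own account (paths are illustrative, the 30 day threshold is just an example, and it assumes rclone or an equivalent age-based cleanup is available on the user's PC):

# Delete staged logs older than 30 days, then prune any now-empty folders
rclone delete "/path/to/GoogleDrive/123456789" --min-age 30d
rclone rmdirs "/path/to/GoogleDrive/123456789" --leave-root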

Perhaps it is possible to move the processed files into a to-be-deleted subfolder after they have been processed. That may not require file ownership the way deletion does. I haven't tried it; I deleted my test setup again.

Perhaps you know that your docker sync (using --max-age or similar) runs at least once per week, and then you can safely delete everything older than a month from the staging folder.

This feels a bit error-prone if the user loses their internet connection prior to their files being sent to Google Drive. It would delete them locally on their PC before they ever sync'd to the staging area.

Perhaps it is possible to move the processed files into a to-be-deleted subfolder after they have been processed.

This was an interesting suggestion, so I gave it a shot and got some interesting, but sadly unusable, results. The results are probably best explained visually (assume 123456789 is my uuid):

           Before rclone move           |            After rclone move
----------------------------------------+----------------------------------------
 users/              (owner: 123456789) | users/              (owner: 123456789)
 ├─ 123456789/       (owner: 123456789) | ├─ 123456789/       (owner: 123456789)
 ├─ 987654321/       (owner: 123456789) | │  ├─ Boss1/        (owner: 123456789)
 │  ├─ Boss1/        (owner: 987654321) | │  │  ├─ log1.evtc  (owner: 987654321)
 │  │  ├─ log1.evtc  (owner: 987654321) | │  ├─ Boss2/        (owner: 123456789)
 │  ├─ Boss2/        (owner: 987654321) | │  │  ├─ log1.evtc  (owner: 987654321)
 │  │  ├─ log1.evtc  (owner: 987654321) | ├─ 987654321/       (owner: 123456789)
                                        | │  ├─ Boss1/        (owner: 987654321)
                                        | │  ├─ Boss2/        (owner: 987654321)

Interestingly, if a folder did not already exist at the destination then the new folder shows me as the owner, so rclone must create it on my behalf; but the ownership of the files does not change, which means I still cannot run rclone delete afterwards.

I guess what's bothering me is: why can I use the Google Drive application on my PC and delete the files myself, yet not do the same through rclone? Originally, I agreed with your theory that the files are protected because they are owned by their creator.

However, thinking about it from the desktop client's perspective, I feel like this may be incorrect. The owner of the shared folder, and any person added to the shared folder as an "Editor", has permission to do any Create/Read/Update/Delete (CRUD) operation through their PC's Google Drive desktop client. So that begs the question of why rclone does not follow this behavior too. To me, this feels like an accidental oversight, and one that may need a bug ticket to address, but I don't truly know the implementation of rclone or any of the limitations it may have interacting with Google Drive.

I cannot follow your thoughts here.

You are processing results from Guild Wars 2, which is an MMORPG, and you are afraid that one of the players loses their internet connection after a boss fight and doesn't reconnect within the next week or so?

Not quite, my idea was to do something like this:

users
    123456789
    987654321
        to-be-processed
            Boss1
                log2.evtc
            ...
        to-be-deleted
            Boss1
                log1.evtc
            ...

Then it will be quite simple to sync the files in to-be-processed to docker, move them to to-be-deleted upon completion, and then some time later delete the content of to-be-deleted from your MitM script.
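
A rough sketch of the two container-side steps under that layout (remote paths assumed from the structure above):

# 1) Pull down only the logs that still need processing
rclone sync google-drive:DiscordBots/Blaze/arcdps/users/987654321/to-be-processed /arcdps/987654321
# 2) After a successful upload, park them for the owner's MitM script to delete
rclone move google-drive:DiscordBots/Blaze/arcdps/users/987654321/to-be-processed google-drive:DiscordBots/Blaze/arcdps/users/987654321/to-be-deleted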

I don't know, but I guess the Google API has both a delete command and a "detach" command, and that rclone only uses the delete command in order to do a real deletion, where space is freed and trash is handled according to expectations and the supplied parameters (e.g. --drive-use-trash=false).

What happens if you try deleting with other similar tools, e.g. CyberDuck?

So, what can happen is a person joins for a Raid session and accumulates logs. Mid-session they disconnect due to technical issues on their end (which happens a lot). We reach the end of the session and perhaps an hour or a day later they regain connection. They want to post their logs through our Discord server, which ultimately sends the command to the Docker container, which does the upload process. If we set the MitM script to delete files at a set interval, their logs may be gone before they regain connection, since the MitM script would only be monitoring their local files with some fixed time-interval logic, regardless of connectivity.

Ah, I see. So, you'd suggest that the "to-be-processed" folder be where the MitM script syncs to. Then you'd have the Docker container issue the upload command, but rather than attempting an rclone delete we'd instead do an rclone move to the "to-be-deleted" folder at the end, which wouldn't need the same permissions. Lastly, the MitM script would watch for changes to the "to-be-deleted" folder and execute the deletion natively on the owner's PC, which implicitly grants them the right to remove those files/folders and fully bypasses the need for permissions. Clever. A bit roundabout, but it might work.

The only thing I noticed was that the directories were left behind when I used rclone move. I don't have any major storage concerns with empty folders, but I'd like to know whether I did the command wrong to fully relocate the folders, or whether that is another quirk of the folders being owned by someone other than me. I'd like an empty users/<uuid> folder when I'm done, if possible. I don't have the exact command off-hand, but it was the basic command listed in the docs with no extra flags that produced the above results. I had to jet shortly after, so I didn't have much time to play with it.
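
For what it's worth, leaving empty source directories behind is rclone move's default behaviour; there is a --delete-empty-src-dirs flag to prune them, though I haven't verified whether it succeeds on directories owned by another account (paths below are illustrative):

# Ask rclone move to also remove the source directories it has emptied
rclone move google-drive:DiscordBots/Blaze/arcdps/users/987654321/to-be-processed google-drive:DiscordBots/Blaze/arcdps/users/987654321/to-be-deleted --delete-empty-src-dirs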

Honestly, this all got me thinking about the problem from a different perspective and led me to connect my localhost log folder (which I already had stored in Google Drive) to Docker using volumes. By mapping the localhost folder into the container I gave the container permission to modify my PC's log files directly. This means that when I delete a file in the container it is also deleted from my PC, which in turn prompts Google Drive to remove it using my PC's permissions, and that successfully deletes it from the shared folder. It came with an odd quirk, which I need to investigate, of desktop.ini files appearing in the folders in my Docker container, but that's easily managed. This technically removes the dependency on rclone, but I'd prefer to have a dedicated container that behaves like Google Drive so this can all be easily migrated between machines if ever needed. Having a local file-system dependency isn't ideal for that.
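
A minimal sketch of that volume mapping (the host path and image name are made up for illustration):

# Map the PC's synced Google Drive log folder straight into the container
docker run -d -v "/mnt/c/Users/me/Google Drive/DiscordBots/Blaze/arcdps:/arcdps" my-discord-bot-image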

One thing I found recently that might solve this is:

It seems to be a CLI Google Drive tool, which is precisely what I'd need in a Docker container to make this all work. However, if it does work then it will bring back into question why rclone didn't.

That's the first I've heard of that so I'm unsure. I may try playing with this if I have time. I'm expecting a busy work week.

Exactly!

There is a big difference between making a dedicated tool tailored solely to the Google Drive concept and API, and making a multipurpose tool with many backends combined into a common file system concept (superset). It is a bit like comparing a specialized vehicle to an all-round vehicle: each has its strengths and weaknesses, and each should be used in different situations.

CyberDuck is a multi-backend tool like rclone; that is the reason I chose it as a comparison/inspiration.
