I have about 1TB in an encrypted Google Drive remote. I regularly delete and then restore this content, due to a lack of self-control. I need a way to delete it such that it is not recoverable, or such that the recovery process would be so ridiculous it isn't worth the effort. Currently, I'm running a while loop that overwrites the files with 0-byte placeholder files of the same name. I got a file list using `rclone lsf -R crypt: > file-list.txt`, and I'm just uploading that set over and over. It's very slow, since there are over a quarter million files in total. I'm thinking that maybe `rclone moveto` would work instead. Is that right?
For example, suppose I opened the underlying gdrive: remote, i.e. NOT where I can see the decrypted file names. Couldn't I pipe a list of all the encrypted files into `rclone moveto` and rename them sequentially or randomly? Wouldn't that make the data impossible to access later? Google Drive wouldn't see that as a delete but as a rename, so restoring the files after deletion would preserve the new names rather than the old ones, and my crypt remote wouldn't work on them anymore since the encrypted file names have changed.
That sounds far less expensive than 100+ upload passes over a quarter million blank files.
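Roughly what I'm picturing is below. This is only a sketch: it assumes the crypt remote wraps `gdrive:encrypted` (substitute the real path), it flattens any encrypted directory structure into the top level, and the encrypted directory names themselves would presumably need the same treatment.

```bash
#!/usr/bin/env bash
# Sketch only -- assumes crypt: points at gdrive:encrypted (adjust the path).
# Renames every encrypted object to a random hex name, destroying the
# encrypted-filename mapping the crypt remote depends on.
rclone lsf -R --files-only gdrive:encrypted | while IFS= read -r f; do
  newname=$(head -c 16 /dev/urandom | xxd -p)   # 32 random hex characters
  rclone moveto "gdrive:encrypted/$f" "gdrive:encrypted/$newname"
done
```

That's still one API call per file, so a quarter million renames wouldn't be fast either, but it would be a single pass instead of 100+.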
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.11.0-46-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none
Google Drive; the data is stored in a crypt remote.
Here's the loop I'm currently running:

```bash
count=1
while [ $count -lt 102 ]; do
  echo "STARTING PASS $count"
  # Recreate every file from the list as a 0-byte local placeholder
  while read -r a; do touch "$a"; done < file-list.txt
  # Upload the placeholders over the originals on the crypt remote
  rclone move . crypt: --no-check-dest --exclude file-list.txt -P
  let "count=count+1"
done
```
I realize that the top of this post says "No Exceptions," but I figure I'll post configs and logs only if a human requests them here. Rclone is working; I just need methodology advice.
Did some of the keywords in my post cause me to get filtered by the spambot? What happened here?