I have about 15 TB of data in gdrive but I want to encrypt it. Is it possible to do simple directory and file obfuscation remotely or would the process need to re-download everything to obfuscate it?
I would hope doing something like rclone copy cache:dir crypt:dir2 would do it remotely but just want to make sure before I waste bandwidth.
“cache:” is a cache wrapping a gdrive remote.
“crypt:” is a crypt wrapping the above cache.
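To make the layering concrete, the rclone.conf for this setup might look roughly like the sketch below. The remote names and option values are illustrative assumptions; real credentials and the obscured crypt password (normally generated via rclone config) are omitted.

```ini
# Illustrative rclone.conf layering: crypt -> cache -> gdrive
[gdrive]
type = drive
scope = drive

[cache]
type = cache
remote = gdrive:

[crypt]
type = crypt
remote = cache:dir
filename_encryption = standard
directory_name_encryption = true
```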
Thanks @ncw, I assumed that’d be the answer. In any case, I’m already running all of this on a server in a DC, so it’s alright; I just wanted to avoid using bandwidth if I could.
I also figured I can rotate service accounts to upload faster using something like this:
for i in $SERVICE_ACCOUNTS
do
    rclone copy -vP \
        --drive-impersonate xxxx@yyyy.com \
        --drive-service-account-file "/root/$i" \
        --max-transfer 700G \
        gdrive:experimental_tv crypt:encrypted/tv
done
Now, a question about --max-transfer: I’ve read that this will simply cut the transfer off when the process has reached this transferred amount. Will a subsequent run of the copy catch files that were cut off (based on the lower file size on the remote) and re-upload them? I’m banking on this, otherwise the script won’t work, haha.
Crap. It actually didn’t work. I assume it has to do with the fact that all the accounts are impersonating the same user. Any workarounds other than the team drive stuff?
It works (as long as you don’t run it multiple times per day per service account, since it’s transferring 700 gigs per account). Each account has access to the same team drive. I’d like to know if there’s a way for rclone to fully abort after a certain number of 403s (or something similar). I can see the process hanging for long periods if one of the rclone instances starts running into 403s: even though the script checks that the account can write before actually using it, the account may be able to write but not have 700 gigs of transfer left, which would put that spawned rclone process into the situation described above.
I haven’t been able to think of a reliable heuristic for this. You can reduce --low-level-retries, which will cause rclone to abort each transfer earlier, but that isn’t a great solution as it will still try to transfer everything.
Thanks, I lowered --low-level-retries to 1. Not elegant, and it wastes API hits, but it seems to be working. I tried to lower it to 0 and got a stack trace back, so it doesn’t look like it likes that.
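Since the thread leaves the 403 problem at “reduce --low-level-retries”, one possible wrapper pattern is to watch rclone’s log and kill the process once too many 403s appear. This is only a sketch under stated assumptions: fake_transfer is a stub standing in for the real rclone copy (which you would background with -v --log-file "$LOG" so errors land in the watched log), and the matched error text and threshold are illustrative, not rclone behaviour guarantees.

```shell
#!/bin/sh
# Sketch: abort a transfer once the log shows MAX_403 "403" errors.
# fake_transfer is a stub; with rclone you would instead run
#   rclone copy -v --log-file "$LOG" ... &
LOG=$(mktemp)
MAX_403=3

# Stub that emits 403-style log lines the way a throttled transfer would.
fake_transfer() {
    for n in 1 2 3 4 5; do
        echo "ERROR : file$n: googleapi: Error 403: rate limit" >> "$LOG"
        sleep 1
    done
}

fake_transfer &
PID=$!

while kill -0 "$PID" 2>/dev/null; do
    count=$(grep -c 'Error 403' "$LOG" || true)
    if [ "${count:-0}" -ge "$MAX_403" ]; then
        kill "$PID" 2>/dev/null   # give up on this service account
        echo "aborted after $count 403s"
        break
    fi
    sleep 1
done
wait "$PID" 2>/dev/null || true
rm -f "$LOG"
```

The same loop could wrap each service-account rclone invocation from the script above, so a quota-exhausted account gets skipped instead of hanging.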