Fastest way to obfuscate remote content

Hi folks,

I have about 15 TB of data in gdrive that I now want to encrypt. Is it possible to do simple directory and file obfuscation remotely, or would the process need to re-download everything to obfuscate it?

I would hope doing something like rclone copy cache:dir crypt:dir2 would do it remotely, but I just want to make sure before I waste bandwidth.

“cache:” is a cache wrapping a gdrive remote.
“crypt:” is a crypt wrapping the above cache.
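
For context, the layering in my rclone.conf looks roughly like this (the remote names are real, but the paths, token and passwords below are just placeholders):

[gdrive]
type = drive
scope = drive
token = {"access_token":"..."}

[cache]
type = cache
remote = gdrive:media

[crypt]
type = crypt
remote = cache:encrypted
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***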

Thanks!

It has to download and re-upload it, sorry. It isn’t possible to do a server-side copy with an encryption step.

It might be worth renting a VM and doing the transfer there.
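
The command on the VM would be the same one you posted, just run somewhere with cheap bandwidth, e.g. (the extra flags are only a suggestion, tune to taste):

rclone copy cache:dir crypt:dir2 -P --transfers 4 --checkers 8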

Thanks @ncw, I assumed that’d be the answer :slight_smile: In any case, I’m already running all of this on a server in a DC, so it’s alright; I just wanted to avoid using the bandwidth if I could.

I also figured I can rotate service accounts to upload faster using something like this:

# Rotate through the service account files listed in $SERVICE_ACCOUNTS (space separated),
# letting each account move up to 700 GB before switching to the next one.
for i in $SERVICE_ACCOUNTS
do
	rclone copy -vP --drive-impersonate xxxx@yyyy.com --drive-service-account-file "/root/$i" --max-transfer 700G gdrive:experimental_tv crypt:encrypted/tv
done

Now, a question about --max-transfer: I’ve read that it simply cuts the transfer off once the process has moved that much data. Will a subsequent run of the copy catch the files that were cut off (based on their lower size on the remote) and re-upload them? I’m banking on this, otherwise the script won’t work haha

Thanks!

Crap. The service-account rotation actually didn’t work. I assume it has to do with the fact that all the accounts are impersonating the same user. Any workarounds other than the Team Drive stuff?

It will, yes

It will. Google Drive doesn’t allow partially uploaded files, so rclone will start those cut-off files again from scratch.
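
If you want to see which files got cut short before re-running the copy, something like this should list the size mismatches without transferring any file data (--size-only because the crypt remote can’t share a common hash with the plain source):

rclone check --size-only gdrive:experimental_tv crypt:encrypted/tv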

I settled for this script (which I wrote): https://pastebin.com/T03dFeyW

It works, as long as you don’t run it more than once per day per service account, since it transfers 700 gigs per account. Each account has access to the same Team Drive. I’d like to know if there’s a way for rclone to fully abort after a certain number of 403’s (or something similar), because I can see a spawned rclone instance hanging for long periods if it starts running into 403’s for whatever reason. Even though the script checks that an account can write before actually using it, an account might be able to write but not have 700 gigs of transfer left, which would put that rclone process into the situation described above.
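
One blunt workaround I’m considering (this is not in the pastebin script above, just a sketch) is capping each spawned rclone with coreutils timeout, so a run that gets stuck retrying 403’s can’t hang forever:

# Give up on this account after 8 hours no matter what; timeout sends SIGINT, which rclone handles gracefully.
timeout --signal=INT 8h rclone copy -vP --drive-service-account-file "/root/$i" --max-transfer 700G gdrive:experimental_tv crypt:encrypted/tv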

I haven’t been able to think of a reliable heuristic for this :frowning: You can reduce --low-level-retries, which will cause rclone to give up on each transfer earlier, but that isn’t a great solution as it will still try to transfer everything.

Thanks, I lowered --low-level-retries to 1. It’s not elegant and wastes API hits, but it seems to be working. I tried lowering it to 0 and got a stack trace back, so it doesn’t look like rclone likes that :slight_smile:
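
For reference (assuming the Team Drive is set on the gdrive: remote), each invocation in the loop now looks roughly like:

rclone copy -vP --low-level-retries 1 --drive-service-account-file "/root/$i" --max-transfer 700G gdrive:experimental_tv crypt:encrypted/tv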

That sounds like a bug! Can you post the backtrace?