Multiple G-Drive Data Management

I did some searching around but can't seem to find an all-inclusive answer. I'd like to explain what I'm doing, what I'm trying to do, and my understanding of it all, and would very much appreciate it if someone could provide some clarity/confirmation and sanity checking. I currently have one encrypted G-Drive, working flawlessly. What I'd like to do is mount about 3-5 more and back up my data to each one for redundancy. From what I understand, because the data is encrypted, a server-side transfer wouldn't work (and if the 100GB server-side limit is true, it wouldn't be worth it anyway). So I understand the flow of the data would be something like G-Crypt1 > machine running the rclone command > G-Crypt2. From what I found, the command seems to be a simple copy from one remote to another.

How much data is being stored locally during this download and re-upload process? I'd imagine it's doing it in chunks, because the alternative would be nuts. Is the amount configurable with a specific command switch?

Regarding the encryption, are there any caveats? Does it decrypt upon downloading and encrypt again when uploading to another drive, all automatically? Do I need to use the same encryption passwords for the secondary G-Crypts?

Lastly, if anyone is doing the same thing I'm trying to do, is there a more efficient way than a simple rclone copy? Initially, I'm planning to write a script that automates the copies from remote to remote.

I'm really enjoying rclone so far, and I really appreciate you taking the time to read all this. Thank you!

A server-side transfer of crypted data works fine - as long as you don't decrypt it in the process.
While you normally access your files via a crypt remote so that you can read them, for a server-side transfer you should use a normal (non-crypt) remote to grab the files from and then transfer them to another non-crypt remote at the backup location.

This results in the files never being decrypted or re-encrypted (they stay crypted exactly as they were at the source). As long as files do not need to change during the transfer, they can be server-side transferred just fine.

Example:
rclone copy Gcrypt1: Gcrypt2: <----- this will not work server-side, as it will decrypt (locally), then re-encrypt (locally), and then send the data out to the destination.
rclone copy Gdrive1:/cryptfolder Gdrive2:/cryptfolder <---- this will work fine, even if the files in "cryptfolder" happen to be encrypted.
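One hedged note on the second command: since Gdrive1: and Gdrive2: are two separate remote configs, rclone may refuse to attempt the copy server-side unless you explicitly allow it. Assuming a reasonably recent rclone, the Drive flag for that is --drive-server-side-across-configs (newer versions also accept a generic --server-side-across-configs) - treat the exact flag name as version-dependent:

# Sketch: copy already-crypted files between two Drive remotes server-side.
# --drive-server-side-across-configs allows server-side operations even though
# source and destination are different remote configs.
rclone copy Gdrive1:/cryptfolder Gdrive2:/cryptfolder --drive-server-side-across-configs --fast-list --log-level INFO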

The only downside to this is that the crypt key must remain the same between both locations. If you MUST have the same data backed up under a different crypt key, then there is no way around piping it through your local system and re-uploading (although a GCP virtual server can do this fast and for free if you have limited bandwidth yourself). That is generally the best way to move large amounts of data between Google systems when they MUST be processed on the way.

The limit for server-side copying is 750GB/day. The 100GB/day you've heard of is either false or outdated; I've never seen it. I think it was spawned from a misunderstanding and a bug in rclone's server-side function that got fixed quite a while ago. 750GB/day is what you can upload per user, no matter whether it comes from local or via server-side transfer. The limit is shared (not 750GB server-side + 750GB from local). The speed per file is quite extreme, but you are still bound by files-per-second limits, so large files transfer in the GB/sec range, while very many very small files will still take a while.
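If you want an unattended run to cope with that 750GB/day cap gracefully, Drive has a flag that makes rclone stop cleanly once Google starts returning the daily upload-limit error, instead of retrying the same failing uploads. A minimal sketch, reusing the placeholder remote names from above:

# Sketch: abort the run cleanly once the 750GB/day upload quota is hit,
# so a scheduled job can simply pick up the rest the next day.
rclone copy Gdrive1:/cryptfolder Gdrive2:/cryptfolder --drive-stop-on-upload-limit --log-level INFO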

For a remote1 --> local --> remote2 transfer, no data is stored locally (not unless you involve some kind of deliberate caching system, which you should not in this case). The data is simply piped in, through the CPU, then out again. The only data held is temporarily in RAM.
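To the "is the amount configurable" part of your question: when data does flow through your machine (e.g. crypt-to-crypt with different keys), the in-flight memory is roughly the number of parallel transfers times the Drive upload chunk size, and both are tunable. A rough sketch - the values are examples, not recommendations:

# Sketch: ~4 parallel transfers x 64M upload chunks ≈ 256M of RAM in flight.
rclone copy Gcrypt1: Gcrypt2: --transfers 4 --drive-chunk-size 64M

As for the caveats of using crypt: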

  • You can't easily use Google's link-sharing to share encrypted files (because to read them the recipient would need your crypt key, which you probably don't want to share).
  • You can't use Google Drive's search, because all the names will appear in crypted form and thus be unreadable (assuming you use name encryption and not just content encryption, that is).
  • Checksumming between a crypted location and a non-crypted one can't really be done, because the server only hashes the crypted file - and the crypted file's hash won't be the same as the non-crypted hash (see the cryptcheck sketch below this list). This is planned to be remedied in a second iteration of crypt where the unencrypted hash is stored in the crypt-file header... but that's still in the planning stage. Generally the other safety mechanisms are more than enough to ensure error-safe transport, so it's not a big issue in practice. (Hashing between crypt -> crypt with the same keys still works fine though.)
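On that last point, rclone's cryptcheck command is a partial workaround: as I understand it, it reads each file's nonce from the crypted side, encrypts the plain-side data with that nonce locally to compute a comparable checksum, and compares it against the crypted remote's hash - so it never has to download the crypted file contents. A hedged sketch with placeholder paths:

# Sketch: verify plain (unencrypted) source data against its crypted backup.
# The first argument is the plain side, the second is the crypt remote.
rclone cryptcheck /path/to/plain/data Gcrypt2: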

I think that about covers the downsides.

Yes - when you go crypt remote to crypt remote through your own machine, it decrypts on download and re-encrypts on upload automatically.

If you want to be able to server-transfer then yes. Otherwise, no.
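For reference, a "same key" second crypt remote is just another crypt entry in rclone.conf that points at the second Drive remote and reuses the two obscured password values from your first crypt. A hypothetical sketch - the names are placeholders, and the password lines should be copied verbatim from your existing crypt section rather than re-entered:

# rclone.conf sketch - assumes gdrive2: already exists as a plain Drive remote
[gcrypt2]
type = crypt
remote = gdrive2:cryptfolder
filename_encryption = standard
directory_name_encryption = true
password = <same obscured value as in your first crypt remote>
password2 = <same obscured value as in your first crypt remote>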

Feel free to ask followups if I missed something.
Oh and welcome to the forum :smiley:


Wow, you answered everything perfectly. That's everything I needed, all T's crossed. Thanks a bunch, really appreciate it!

No problem my man.
If you are looking for any scripts - or simply want me to look over what you write and suggest improvements - then feel free to ask :slight_smile:

There are a lot of little less-than-obvious things you can do to majorly optimize things like what you are talking about, and new users are often not aware of the possibilities.

Upping the chunk size, for example, can greatly improve upload speeds at the cost of some RAM, and if you use rename-tracking then you can avoid having to re-upload data to your backup sites simply because data got moved around. I will elaborate on this and more if you are open to learning :slight_smile:
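Purely as a hedged sketch of those two ideas (the right chunk size depends on your RAM, and rename-tracking needs hashes or modtimes on both sides, which plain Drive-to-Drive has), a backup pass between the two non-crypt remotes from the earlier example might look like:

# Sketch: keep the crypted folder on Drive 2 in sync with Drive 1.
# --track-renames turns moves/renames into server-side renames instead of re-uploads.
# --drive-chunk-size only matters when data actually streams through your machine.
rclone sync Gdrive1:/cryptfolder Gdrive2:/cryptfolder --drive-server-side-across-configs --track-renames --drive-chunk-size 64M --fast-list --log-file backup.log --log-level INFO

Note that sync (unlike copy) also deletes files on the destination that were removed from the source, so pick whichever matches what you mean by a backup.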


I think I'll take a crack at a script first, and then send it your way for thoughts and improvements if you're up for it. I try my best not to be a total leech, lol.

But I'll definitely take you up on the chunk sizing, renaming, and any other optimizations you can think to elaborate on. I'd love to incorporate as much as I can into the script.

Again, I really appreciate it. Being a brand-new user, I definitely wasn't expecting this kind of community support. Thanks again!


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.