rclone & Gdrive newbie here using macOS. I am using Gdrive as a member of a G Suite business account. I have roughly 50 TB which I would like to upload; about 5 TB is already in a My Drive folder, and I have come up against the 750 GB daily upload limit. I am also using crypt. I have some questions about the changes to my workflow which I am considering as a workaround.
(1) I am currently pointing rclone at a folder named crypt which is located in:
“My Drive > Archive > another level > crypt”. I see a number of posts where people talk about moving their rclone store to another account. I am wondering whether there is some reason I can’t simply drag my “crypt” folder to something like “Team Drives > Archive”. Would then pointing rclone at “Team Drives > Archive > crypt” work? Or would something get mangled by changing the path to the crypt folder?
(2) Once it is on a Team Drive, I was thinking of invoking rclone copy in a script with --max-transfer set, and, if it exits with exit code 8, reinvoking rclone copy with --config pointing at a config file I created with credentials for a different user. Any issues with such an approach? Am I correct in thinking that for Team Drives the 750 GB daily limit is per user, so continuing the upload with a different user’s credentials would allow it to continue?
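To make the hand-off in (2) concrete, here is a minimal sketch. The remote name tdrive_crypt and the second config path are assumptions, not anything from this thread; rclone does exit with code 8 when the --max-transfer limit is reached.

```shell
#!/bin/sh
# Sketch of the two-user hand-off. "tdrive_crypt" and the user2 config
# path are hypothetical names for illustration.

# rclone exits with code 8 when the --max-transfer limit is reached
is_transfer_limit() { [ "$1" -eq 8 ]; }

main() {
    SRC=/Volumes/MacHD/Users/my_home/stuff_20180615
    DST=tdrive_crypt:/stuff_20180615

    rclone copy --max-transfer 750G "$SRC" "$DST"
    if is_transfer_limit $?; then
        # Daily cap reached: continue the copy as a second user
        rclone copy --config "$HOME/.config/rclone/user2.conf" \
            --max-transfer 750G "$SRC" "$DST"
    fi
}
# main   # call main when running for real
```

Because rclone copy skips files that already match on the destination, the second invocation simply picks up where the first left off.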
(3) I would also like to use rclone rc to control bandwidth at certain times of day via launchd. On the Mac in question I could sometimes have two copies of rclone running under two different macOS user accounts. I might also want to run rclone in a terminal shell. The different copies are talking to different machines. However, I’m wondering about remote control: as good practice, should I give each rclone task initiated by a daemon its own rc port number so that each can be controlled independently?
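One pattern for (3): start each long-running rclone with its own --rc-addr (e.g. localhost:5572 and localhost:5573), then have launchd jobs target the matching port with rclone rc core/bwlimit. A sketch of such a launchd job is below; the label, port, rate, schedule, and rclone path are all examples, not anything prescribed by rclone.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- Throttle the rclone instance listening on port 5573 at 18:00 -->
  <key>Label</key>
  <string>com.example.rclone-bwlimit-evening</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/rclone</string>
    <string>rc</string>
    <string>--url</string>
    <string>http://localhost:5573/</string>
    <string>core/bwlimit</string>
    <string>rate=1M</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict>
    <key>Hour</key>
    <integer>18</integer>
    <key>Minute</key>
    <integer>0</integer>
  </dict>
</dict>
</plist>
```

A second job with a different Label and schedule can lift the limit again (rate=off), and each daemonised rclone instance gets its own pair of jobs keyed to its port.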
(4) I am consistently getting 50 to 80 MB/s upload speeds with large files, i.e. video. However, for typical Unix directory trees with lots of small files I am seeing 2 to 4 MB/s. Is there any way around this on the Mac? Are there any file system containers, such as an encrypted .dmg or the like, which would improve the performance of rclone copy to Gdrive?
Here’s a typical command I am using:
rclone --rc --checksum --copy-links -v --transfers=32 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k copy /Volumes/MacHD/Users/my_home/stuff_20180615 gdrive_crypt:/stuff_20180615
So it’s rough, as ncw mentioned. We are using unionfs with a merge of a gdrive and a tdrive, and have a program called supertransfer2 bypass the 750 GB limit.
Yes, @ncw, you would be a god x 10 if that happened, but you can see why: it requires tons of JSON creations for what we do.
Elsewhere in these discussions someone speculated that since .dmg files are containers they would not improve performance. That didn’t seem right to me, because I thought rclone would treat a .dmg as a single large file. Inline below you will see two timing tests on an iPhoto library, i.e. lots of small files. In one case I converted the folder containing the iPhoto library into a .dmg, and in the other I simply let rclone loose on the folder.
As you can see, wrapping normal Unix files in a .dmg allows the upload to Google Drive to progress 15 times faster. That’s most likely an understatement: I am typically seeing 50 to 80 MB/s upload with larger video files being uploaded in parallel, and in the case of the .dmg I believe rclone was constrained by there being only one file. If there had been several .dmg files it would presumably have operated at the higher bandwidth.
I now plan on generally turning macOS folder hierarchies into ~100-200 GB .dmg files before uploading with rclone. This is a little less flexible in that you cannot pull down individual files from the Gdrive. However, I am mostly using this for archives that I don’t expect to pull items from frequently, and I could wait the 30 minutes or so to pull down a .dmg to access the underlying file. I plan on keeping catalogs of all the files within each .dmg and will probably upload those along with the .dmg.
Note there is a difference in the amounts of data moved. I believe this is due to my turning on link copying: the .dmg effectively results in links being ignored when rclone copies it, while when rclone copies the folder some data is duplicated because of the links.
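The pack-and-catalog plan above can be sketched as follows. The folder, staging path, and remote name are examples only; hdiutil’s UDZO format produces a compressed, read-only disk image.

```shell
#!/bin/sh
# Sketch of the dmg-plus-catalog archive workflow. All paths and the
# remote name "gdrive_crypt:/archives" are illustrative examples.

# The catalog file sits next to the image: foo.dmg -> foo.catalog.txt
catalog_for() { printf '%s\n' "${1%.dmg}.catalog.txt"; }

main() {
    SRC="/Volumes/MacHD/Users/my_home/Photos_Archive"
    DMG="/Volumes/MacHD/staging/Photos_Archive.dmg"

    # Pack the folder into one compressed, read-only disk image
    hdiutil create -srcfolder "$SRC" -format UDZO "$DMG"

    # Record the contents so individual files can be located later
    find "$SRC" > "$(catalog_for "$DMG")"

    # Upload the image and its catalog as two large files
    rclone copy "$DMG" gdrive_crypt:/archives/
    rclone copy "$(catalog_for "$DMG")" gdrive_crypt:/archives/
}
# main   # call main when running for real
```

The catalogs stay small, so they can also be kept locally for quick searching without pulling anything back down.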
Would people see similar benefits from using .zip or .iso files?
I suspect rclone users would like the option of .zip files, and for most usage scenarios there would be a benefit in being able to look inside via Google’s web interface. However, in my case all these files go via crypt, so I anticipate that the web interface is not of much use to me for crypt-related files, other than seeing the bucket of data I have there.
(I’ve hit that file limit when moving files from my personal drive to a team drive, and that sucked and was a bit of a hassle to sort out when moving back)
EDIT: Oops, this is an old thread. Didn’t notice that. I’ll let the comment stand for other users stumbling in here.
That’s not what I meant. You mentioned the limitations of a team drive. However, as far as I know the personal drive on a G Suite account also has unlimited data storage, so instead of storing on the team drive, you could store it on the personal drive. My question is whether these personal drives are free of the limitations the team drive has.
The personal drive does not have the same limit (I moved more than 50000 files and hit the limit while moving to a team drive). Other than that, I don’t know.