Moving from Google My Drive to Team Drive & others?

rclone & Gdrive newbie here using macOS. I am using Gdrive as a member of a G Suite business account. I have about 50TB which I would like to upload, currently have about 5TB in a My Drive folder, and have come up against the 750GB daily upload limit. I am also using crypt. I have some questions relating to the changes in my workflows which I am considering as a workaround.

(1) I am currently pointing rclone at a folder named crypt which is located in:
“My Drive > Archive > another level > crypt”. I see a number of posts where people talk about moving their rclone store to another account. I am wondering if there is some reason why I can’t simply drag my “crypt” folder to something like “Team Drives > Archive”. Would then pointing rclone to “Team Drives > Archive > crypt” work? Or would something get mangled by my changing the path to the crypt folder?
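
In rclone config terms, I believe the only change would be where the crypt remote points; a sketch of what I mean, with made-up remote names:

# Before: crypt remote pointing inside My Drive
[gdrive_crypt]
type = crypt
remote = gdrive:Archive/another level/crypt

# After dragging the folder: point a crypt remote at the Team Drive instead,
# where "tdrive" would be a drive remote configured with team_drive set to
# the Team Drive's ID. The crypt password/salt would stay the same.
[tdrive_crypt]
type = crypt
remote = tdrive:Archive/crypt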

(2) Once it is on a team drive, I was thinking of invoking rclone copy in a script with --max-transfer set, and if rclone exits with exit code 8, re-invoking rclone copy with --config pointing at a config file I created with credentials for a different user. Any issues with such an approach? Am I correct in thinking that for Team Drives the 750GB daily limit is per user, so continuing the upload with a different user’s credentials would allow it to continue?
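
Something like this is what I have in mind (just a sketch; the paths, remote name, and second config location are placeholders):

#!/bin/bash
# Upload with a per-run transfer cap; if rclone signals the cap was hit
# (exit code 8 with --max-transfer), retry as a second user. rclone copy
# skips files already transferred, so the second run continues the job.
SRC="/Volumes/MacHD/Users/my_home/stuff_20180615"
DST="gdrive_crypt:/stuff_20180615"

rclone copy --max-transfer 740G "$SRC" "$DST"
if [ $? -eq 8 ]; then
    rclone copy --max-transfer 740G --config "$HOME/.config/rclone/user2.conf" "$SRC" "$DST"
fi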

(3) I also would like to use rclone rc to control bandwidth at certain times of day via launchd. On the Mac in question I could sometimes have two copies of rclone running under two different mac user accounts, and I might also want to run an rclone in a terminal shell. The different copies are talking to different machines. However, I’m wondering about remote control: as good practice, should I give any rclone task initiated by a daemon a separate rc port number so that each can be controlled independently?

(4) I am consistently getting 50 to 80 MBytes/s upload speeds with large files, e.g. video. However, for typical unix directory trees with lots of small files I am seeing 2 to 4 MBytes/s. Is there any way around this on the Mac? Are there any file system containers such as encrypted .dmg or the like which would improve the performance of rclone copy to Gdrive?

Here’s a typical command I am using:
rclone --rc --checksum --copy-links -v --transfers=32 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k copy /Volumes/MacHD/Users/my_home/stuff_20180615 gdrive_crypt:/stuff_20180615

Any suggestions or advice?

That should work fine.

I think that will work in theory. I’ve had requests to build this in to rclone, but I haven’t found a neat way of doing it.

I’m sure lots of people would be interested in your script!

You’ll find that you can’t start two rclones with rc listening to the same port, so yes, you’ll need to make them different ports.
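
For example (ports and paths here are just illustrations):

# Two independent instances, each with its own rc listener
rclone copy --rc --rc-addr localhost:5572 /Volumes/MacHD/archive1 gdrive_crypt:/archive1
rclone copy --rc --rc-addr localhost:5573 /Volumes/MacHD/archive2 gdrive_crypt:/arch2

# A launchd job can then throttle each one independently
rclone rc --url http://localhost:5572/ core/bwlimit rate=2M
rclone rc --url http://localhost:5573/ core/bwlimit rate=off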

Lots of small files are the worst for google drive :frowning:

I don’t know any work-arounds for that.


So it’s rough, as ncw mentioned. We are using unionfs with a merge of a gdrive and a tdrive, and a program called supertransfer2 to bypass the 750GB limit.

Yes, @ncw, you would be a god x 10 if that happened, but I can see why not; it requires creating tons of JSONs for what we do.

If interested, you can check out:

https://plexguide.com/wikis/mounts/ (check out links on top left)

https://plexguide.com/wikis/gce-feeder-edition/

With the bypass and GCE, you can get around 4-9TB a day.

[Off topic] Hmmmm looks like I have a “clone” here :laughing::laughing:

Elsewhere in these discussions someone had speculated that since .dmg files were containers they would not improve performance. That didn’t seem right to me because I thought rclone would treat a .dmg as a single large file. Inline below you will see two timing tests on an iPhoto library, i.e. lots of small files. In one case I converted the folder containing the iPhoto library into a .dmg, and in the other case I simply let rclone loose on the folder.

As you can see, wrapping up normal unix files in a .dmg allows the upload to Google Drive to progress 15 times faster. That’s most likely an understatement: I typically see 50 to 80 MBytes/s upload with larger video files being uploaded in parallel, and in the case of the .dmg I believe rclone was constrained by there being only 1 file. If there had been several .dmg files it would presumably have operated at the higher bandwidth.

I now plan on generally turning macOS folder hierarchies into ~100-200GB .dmg files before uploading with rclone. This is a little less flexible in that you cannot pull down individual files from the gdrive. However, I am mostly using this for archives that I don’t expect to be pulling items from frequently, and I could wait the 30 minutes or so to pull down a dmg to access the underlying file. I plan on keeping catalogs of all the files within each dmg and will probably upload those along with the dmg.
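
For anyone wanting to do the same, this is roughly the workflow I have in mind (a sketch; folder and remote names are examples):

# Wrap a folder hierarchy into a compressed, read-only .dmg
hdiutil create -format UDZO -srcfolder /Volumes/MacHD/photos_2017 photos_2017.dmg

# Keep a catalog of what's inside, then upload both
find /Volumes/MacHD/photos_2017 > photos_2017.catalog.txt
rclone copy photos_2017.dmg gdrive_crypt:/archives
rclone copy photos_2017.catalog.txt gdrive_crypt:/archives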

Note there is a difference in the amounts of data moved. I believe this is due to my turning on copying links: the dmg effectively results in links being ignored when rclone copies the dmg, while in the case of rclone copying the folder some data is duplicated because --copy-links makes rclone follow each symlink and copy the target’s data.

Would people see similar benefits from using .zip or .iso files?

rclone  --rc --checksum --copy-links -v --transfers=32 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k copy /Volumes/MacHD/test_pictures_folder.dmg gdrive_crypt:/_test_dmg

Transferred:   172.989 GBytes (32.121 MBytes/s)
Errors:                 0
Checks:                 0
Transferred:            1
Elapsed time:  1h31m54.8s


rclone  --rc --checksum --copy-links -v --transfers=32 --checkers=16 --drive-chunk-size=16384k --drive-upload-cutoff=16384k copy /Volumes/Video_20171219/test_pictures_folder vworld_crypt:/_test_folder
Transferred:   190.974 GBytes (2.150 MBytes/s)
Errors:                 0
Checks:                 0
Transferred:       237456
Elapsed time:  25h16m4.7s

A nice test :smile:

I’m sure that uploading .zip or .iso files would work in the same way, though they don’t have the same read/write goodness that .dmg files do, I believe.

.zip files might be better though as I know that you can read zip files on the google drive web interface.

I did have the idea of making an rclone zip source dest:file.zip command which could do this for you.
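
In the meantime you can stream a zip straight to the remote without a local temporary file, something like this (names here are just examples):

zip -r - /Volumes/MacHD/test_pictures_folder | rclone rcat gdrive_crypt:/_test_zip/test_pictures_folder.zip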

I suspect rclone users would like to have the option of .zip files, and there would be a benefit for most usage scenarios in being able to look inside via Google’s web interface. However, in my case all these files are going via crypt, so I anticipate that the web interface is not of much use to me for crypt-related files, other than for seeing the bucket of data I have there.


Be aware that Team Drives have a few limitations:

  • A Team Drive can contain a maximum of 400,000 files and folders.
  • A single Team Drive can nest up to 20 subfolders, but we don’t recommend creating Team Drives with a folder structure that complex.

Source: https://support.google.com/a/answer/7338880?vid=0-1042784362742-1535101828976

(I’ve met that file limit when moving files from my personal drive to a team drive, and that sucked and was a bit of a hassle to sort out when moving back)

EDIT: Oops, this is an old thread. Didn’t notice that. I’ll let the comment stand for other users stumbling in here.

Does a personal drive on a Gsuite account not have these limitations?

Personal accounts do not have team drives. Team drives are in addition to a user’s personal drive.

https://gsuite.google.com/learning-center/products/drive/get-started-team-drive/#!/

That’s not what I meant. You mentioned the limitations of a team drive. However, as far as I know the personal drive on a G Suite account also has unlimited storage, so instead of storing on the team drive, you could store it on the personal drive. My question is whether these personal drives have the same limitations as the team drive.

The personal drive does not have the same limit (I moved more than 50,000 files from it and only hit the limit while moving to a team drive). Other than that, I don’t know.