Hey, not sure if this is the proper forum, but I saw people talking about this stuff here, so I'd like to ask.
I have about 30 TB of media data, broken into files of roughly 5-20 GB each. I need to upload all of it to my Google Drive, but there's a 750 GB per-day upload limit. I saw a Linus Tech Tips video where they uploaded their large dataset to Google Drive in a way that bypasses this limit.
So it got me wondering: how did they do it, and how can I apply it to my scenario? From what I can tell, they used mergerfs + rclone.
With the following file structure:
- Local_Folder
- Gdrive_user1
- Gdrive_user2
- Gdrive_user3
- Gdrive_user4
Then the final merged folder, MergerFS_Folder.
All the Google Drive folders are mounted with rclone, each pointing to the same team drive.
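To make the setup concrete, here's a rough sketch of how the mounting could look. The remote names (`gdrive_user1:` through `gdrive_user4:`) and the `/mnt/...` paths are my own placeholders, not something from the video:

```shell
# Assumption: four rclone remotes (gdrive_user1..4), one per Google
# account, all configured to point at the same team drive.

# Mount each account's view of the drive:
rclone mount gdrive_user1: /mnt/Gdrive_user1 --daemon
rclone mount gdrive_user2: /mnt/Gdrive_user2 --daemon
rclone mount gdrive_user3: /mnt/Gdrive_user3 --daemon
rclone mount gdrive_user4: /mnt/Gdrive_user4 --daemon

# Merge the local staging folder and the four mounts into one view.
# category.create=ff ("first found") writes new files to the first
# branch with room, i.e. the local folder.
mergerfs /mnt/Local_Folder:/mnt/Gdrive_user1:/mnt/Gdrive_user2:/mnt/Gdrive_user3:/mnt/Gdrive_user4 \
  /mnt/MergerFS_Folder -o defaults,allow_other,category.create=ff
```

This is just one plausible layout; the branch order and create policy would depend on how you want new writes routed.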
The process will be:
- Move files into MergerFS_Folder
- Run a script with four rclone commands that copy all files from Local_Folder to Gdrive_user1, Gdrive_user2, Gdrive_user3, and Gdrive_user4.
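The script step above might look something like this. The remote names and the `media` destination folder are assumptions on my part; the two flags are real rclone options that make each transfer stop cleanly at the daily quota:

```shell
# Sketch of the copy script as described: one rclone copy per account,
# run in parallel. Assumed remotes: gdrive_user1..4; assumed
# destination folder: "media".
# Note: as written, every command copies the SAME source files, so this
# alone doesn't split the 30 TB across the four accounts.
for remote in gdrive_user1 gdrive_user2 gdrive_user3 gdrive_user4; do
  rclone copy /mnt/Local_Folder "${remote}:media" \
    --max-transfer 750G \
    --drive-stop-on-upload-limit &
done
wait
```

`--max-transfer 750G` caps how much one run uploads, and `--drive-stop-on-upload-limit` tells the Drive backend to stop (instead of endlessly retrying) when Google reports the quota error.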
My question: will this work? And:
- What happens if one instance hits the 750 GB daily limit?
- Will the instances copy files simultaneously and cause conflicts?
- Will they all just copy the same files and each hit the 750 GB daily limit?
Note: I'm not l33t, so please dumb it down a bit. But I'm willing to research whatever you suggest. Thanks in advance.
Feel free to suggest a different approach if you have one.