Hey guys,
I'm new to this forum, so I apologize if this post is in the wrong place or poorly formatted. I'm not new to rclone, though — I've used it on my seedboxes to transfer data for years. I'm currently transferring around 100 TB of data from my Google Drive to a new Unraid server. It's going smoothly, but I'm wondering if there's anything I can do to make it copy faster while avoiding Google API bans or timeouts. I need to get this data off Google Drive as fast as possible, as they will soon be deleting my files.
I do have the Google Drive remote configured with my own client ID and secret. I want to copy around 100 TB of media, all organized into folders, with files ranging from 400 MB to 10 GB — some reach around 20 GB, but most are a GB or two.
I'm not sure whether --drive-chunk-size would help in this case, or what an ideal value for it would be. I've also read in the rclone documentation that these flags:
--buffer-size SizeSuffix
--checkers int
--transfers int
can help increase performance, but I'm not sure what the ideal values would be in my case. Can anyone recommend the ideal flags and values for my use case? Thank you all so much.
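For reference, here's roughly the command I'm running now. The remote name ("gdrive:") and destination path are placeholders for my setup, and the flag values are just starting-point guesses I'd like sanity-checked, not known-good settings:

```shell
# Placeholder remote and path; flag values are guesses, not tuned ideals.
rclone copy gdrive:media /mnt/user/media \
  --transfers 8 \
  --checkers 16 \
  --buffer-size 64M \
  --fast-list \
  --progress
```

My understanding is that --transfers controls how many files copy in parallel, --checkers how many are compared at once, and --buffer-size the in-memory read-ahead per transfer — but I don't know where the sweet spot is before Google starts rate-limiting me.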