Rclone flags for fastest, smoothest copying of ~100TB Google Drive

Hey guys,

I'm new to this forum, so I apologize if this post is in the wrong place or poorly formatted. I'm not new to rclone, though, and have used it on my seedboxes to transfer data for years. I'm working on transferring around 100TB of data from my Google Drive to a new Unraid server, and while it's going smoothly, I'm wondering if there's anything I can do to make it copy faster or avoid Google API bans or timeouts. I need to get this data off Google Drive as fast as possible, as they will soon be deleting my files.

I do have the Google Drive remote configured with my own client ID and secret. I want to copy around 100TB of media, all organized into folders, with files ranging anywhere from 400MB to 10GB (a few reaching around 20GB, but most around a GB or two).

I'm not sure whether drive chunk size is a flag that would help in this case, or what the ideal value for it would be. I've also read in the rclone documentation that these flags:

--buffer-size SizeSuffix
--checkers int
--transfers int

can help increase performance, but I'm not sure what the ideal values would be in my case. Can anyone recommend the ideal flags and values for my use case? Thank you all so much.
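For reference, here's a sketch of what a copy command with those flags might look like. The remote name `gdrive:` and the destination path are placeholders, and the values shown are illustrative starting points for large media files, not tested optima for your setup:

```shell
# Hypothetical example -- adjust remote name, paths, and values to taste.
rclone copy gdrive:media /mnt/user/media \
  --drive-chunk-size 128M \
  --transfers 4 \
  --checkers 8 \
  --buffer-size 64M \
  --progress
```

`--drive-chunk-size` must be a power of 2 and trades RAM per transfer for upload/download efficiency on big files; `--transfers` and `--checkers` control how many files are copied and checked in parallel; `--buffer-size` is the per-transfer read-ahead buffer.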

When I did my Google Drive to Dropbox migration, I used nothing but the defaults and let it run. Generally, that maxed out my gigabit line, and since you can only download about 10TB per day from Google Drive, that worked perfectly for me.

Well, that's reassuring. I wasn't sure if I needed to tweak all of these things to get max speed or to keep Google from limiting my transfers, but if it can run as-is and hit 10TB a day, that makes it easier on me, haha.
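For anyone sanity-checking the numbers in this thread, a saturated gigabit line and the ~10TB/day download cap line up almost exactly (rough decimal-unit arithmetic, figures approximate):

```python
# A fully saturated 1 Gbit/s line, converted to TB per day (decimal units).
line_rate_bits = 1_000_000_000            # 1 Gbit/s
bytes_per_day = line_rate_bits / 8 * 86_400
tb_per_day = bytes_per_day / 1e12
print(f"{tb_per_day:.1f} TB/day")         # ~10.8 TB/day, right at the cap

# So a 100TB migration at the 10 TB/day cap takes roughly:
days_for_100tb = 100 / 10
print(f"~{days_for_100tb:.0f} days")      # ~10 days
```

In other words, on a gigabit connection the daily quota and the line speed are the same bottleneck, which is why the defaults were enough.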

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.