Google Drive and optimal --drive-chunk-size

I’ve got an Ubuntu server with 16 GB RAM, so I have plenty to spare. I am going to play around with different --drive-chunk-size settings, but thought I would see if others here had done any tests and come to any conclusions.
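In case anyone wants to compare notes, the kind of test I have in mind is roughly this (the remote name, paths, and test file below are just placeholders for my setup):

```bash
#!/bin/bash
# Rough benchmark sketch: upload the same large test file with a few
# different chunk sizes and compare wall-clock times. "gdrive:" and the
# paths are placeholders for my own remote and data.
for chunk in 8M 32M 64M 128M 256M; do
    echo "=== --drive-chunk-size $chunk ==="
    time rclone copy /tmp/testfile-10G "gdrive:chunk-test/$chunk" \
        --drive-chunk-size "$chunk" --transfers 1 -P
done
```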


Hello @imthenachoman,

I’m also interested in this; I haven’t had the time to play around with --drive-chunk-size (or any other parameters) yet, but I’m a heavy user of GDrive and will benefit from your (and others’) findings, so please keep us posted.

On a side note, I think that --drive-chunk-size would impact just sequential data I/O (i.e., large files being copied), and the default rclone settings (i.e., not setting an explicit value for --drive-chunk-size and the like) are already entirely capable of saturating my 100 Mbps internet connection.

IME, what sucks big time with GDrive is “metadata” performance, especially when you are copying a whole bunch of small files to it (a few million of them can take many days, as Google seems to be limiting file creation to at most 3 per second). A way to optimize this would be even more welcome.
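Just to put a rough number on that (back-of-the-envelope, assuming the ~3-creations-per-second cap holds):

```bash
# 2 million small files at ~3 file creations per second:
#   2,000,000 / 3 / 86,400 ≈ 7.7 days of pure file-creation time
echo "scale=1; 2000000 / 3 / 86400" | bc
```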

Cheers, and good luck,

Durval.


If you look at the original issue #397, you’ll see some test timings with --drive-chunk-size.

Basically, the bigger the better; rclone’s default of 8 MB is a compromise between memory use and speed. Though the larger the chunks you use, the more data you have to retry if things go wrong.
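To put rough numbers on the memory side of that trade-off (assuming each concurrent transfer buffers about one chunk in RAM; the remote name and path below are placeholders):

```bash
# 4 transfers x 256 MB chunks ≈ 1 GB of upload buffers,
# versus 4 transfers x 8 MB chunks ≈ 32 MB with the default.
rclone copy /local/data gdrive:backup \
    --drive-chunk-size 256M \
    --transfers 4 -P
```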


Hi @ncw,

Thanks for chiming in. Just a clarification: --drive-chunk-size will have little-to-no effect on GDrive performance when uploading lots of small files, right?

Cheers,

Durval.

Alas no, it is only for big files.

Thanks for the confirmation, @ncw.

Thank you! I will try playing with that setting.

Will this improve playback? When playing from Google Drive it sometimes stops. Sometimes it works again after pausing and resuming; other times I need to stop and restart from the last point, and then it works better.

Thanks for linking that. You guys gave an awesome explanation for understanding the cost/benefit of increasing the chunk size.

Personally I found --drive-chunk-size 256M was the optimal setting when uploading multiple larger files (50GB+). I can now saturate my 1000 Mbps upload connection, though with a max of about 80 MB/s per file.


Hello @linckez,

Personally I found --drive-chunk-size 256M was the optimal setting when uploading multiple larger files (50GB+).

I can now second that. Finally (after almost 3 years!) I got the incentive to try it, after an rclone transfer I run almost every day, which used to run at ~5 MB/s (limited by its internet connection of 50 Mbps), suddenly started to crawl at 0.25 MB/s (seriously).

Tweaking the --user-agent parameter, which was what I used to do in these situations (and which used to solve it, as Google is apparently throttling traffic for certain user agents), did not solve it this time. So I remembered this thread, interrupted the running rclone with ^C, and reran the exact same command but with --drive-chunk-size 256M tacked on at the end.
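For reference, the rerun looked roughly like this (the remote name, paths, and user-agent string are just placeholders for my real ones):

```bash
# Same transfer as before, with the bigger chunk size added at the end.
rclone sync /data/backups gdrive:backups \
    --user-agent "my-usual-agent/1.0" \
    --drive-chunk-size 256M -P
```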

With just that change, speed went from those lousy 0.25 MB/s back to the 5 MB/s range, and I'm happy again :slight_smile:

PS: my files here are in the ~300 MB to 1.5 GB range, so it seems --drive-chunk-size 256M works great even for files significantly smaller than the 50GB+ you reported.

Cheers,
-- Durval.
