Most efficient way to upload a single large file to Google Drive?

I want to upload a single, very large file to Google Drive as quickly as possible over a fast connection (1 Gbit/s). What is the best way to do this?

I thought I would set --drive-chunk-size to 1024M, but then the following happens:

  1. rclone reads/buffers 1024MB of the file from disk to RAM.
  2. rclone stops reading/buffering the file.
  3. rclone uploads the 1024MB chunk to Google Drive.
  4. Once the upload is finished, rclone releases the 1024MB buffer from RAM.
  5. rclone starts over at step 1 until the entire file is uploaded.

The result is that roughly half of the time is spent reading the file from disk and the other half uploading it, which effectively doubles the total upload time.
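
For reference, this is the kind of command I'm running (the remote name and path are just examples):

    # remote "gdrive:" and the local path are placeholders
    rclone copy /data/bigfile.bin gdrive:backup --drive-chunk-size 1024M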

So I guess I’m looking for rclone to “read/buffer ahead”, that is, buffer the next 1024M chunk into memory while the previous one is still uploading (or upload multiple chunks in parallel, but I don’t think that’s possible with most cloud storage providers).

Would something like that be possible with rclone? If not, are there alternatives that do something like that?

Thanks!

Try upping the --buffer-size parameter - that will do the read-ahead for you.
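
For example, something along these lines (paths and sizes just for illustration - a buffer larger than the chunk size gives rclone room to read the next chunk while the current one uploads):

    # placeholder remote/path; tune the sizes to your RAM
    rclone copy /data/bigfile.bin gdrive:backup --drive-chunk-size 1024M --buffer-size 2048M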

I can’t remember whether the protocol can do multiple chunks at once - I think it can and the code can nearly do that too.

Thank you! I totally missed that parameter and it does exactly what I need, awesome! :)

Thanks ncw for this great piece of software and your awesome support.

An update on the progress of uploading a single large file to Google Drive.

Unfortunately, the upload speed of a single large file drops considerably after a while. Initially I reached upload speeds of 500-800 Mbit/s on average, but after an hour or so they had already dropped to around 200 Mbit/s. A whole day into the upload the speed had dropped further, to around 125 Mbit/s, and in the Windows Task Manager I can see that the upload has also become very inconsistent: the graph shows many spikes, not just between chunks (when a new chunk upload starts) but also while a chunk is uploading. I’m currently uploading a single large file for the second time, and so far the results have been the same on both occasions.

I uploaded with --drive-chunk-size 2048M --buffer-size 8192M, and neither the hard drive, the CPU nor the connection is the bottleneck here.

@ncw you mentioned that the code is almost ready for multi-threaded file uploads - will it be in a beta soon? I would love to try it out. I also checked the Drive API documentation on multipart uploads, and I don’t see anything about having to upload the parts in order, so I hope that means any order is allowed and that multi-threaded chunk uploading is in fact supported by Google Drive.

I’m also wondering about resumable uploads. I found some discussion in the GitHub issues that mentioned Google Drive, but it quickly shifted to Amazon Drive - do you still plan on implementing resumable uploads? I guess that would mean rclone needs to store the upload status in temporary files somewhere; would that fit its workflow? It would be nice if I could, for example, reboot my system and then let rclone continue where it left off. And who knows, maybe that would give me a (temporary) upload speed boost as well.
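
From what I can tell from the Drive API docs, the protocol itself would support this: the session URI returned when a resumable upload is started stays valid for about a week, so persisting just that URI should be enough to pick up after a reboot. Roughly like this ($TOKEN, TOTAL_SIZE and SESSION_URI are placeholders):

    # Start a resumable upload session; the Location header of the
    # response is the session URI to save.
    curl -i -X POST \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json; charset=UTF-8" \
      -d '{"name": "bigfile.bin"}' \
      "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable"

    # After an interruption, ask the session how far it got: a 308
    # response carries a Range header with the last byte received.
    curl -i -X PUT \
      -H "Content-Length: 0" \
      -H "Content-Range: bytes */TOTAL_SIZE" \
      "SESSION_URI"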