Idea to speed up GDrive xfers

ncw,
Thank you so much for working on this!

I have been turning over in my head the idea of bundling lots of files together, or even splitting up large ones (VM hard-disk images, for example), so that everything to be copied is ‘chunked’ in a way that transfers a large collection of smaller files as a single large one.

It is easy to tell if a single file has changed, so rsync & rclone have no trouble, provided you can tolerate the roughly 2 files/second limit. I get great speeds backing up VMs, for example, once I’ve 7z’d them into 2GB chunks (OneDrive has a 2GB-per-file limit, so I do the same for either remote).
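Just to illustrate the splitting step I mean, here is a minimal Go sketch of cutting one big file into 2GB parts. The file names are made up, and unlike 7z it does no compression, it only splits:

```go
// split.go: split one large file into fixed-size parts, similar in spirit
// to chunking a VM image for a 2GB-per-file limit (names are made up).
package main

import (
	"fmt"
	"io"
	"os"
)

const partSize = 2 << 30 // 2 GiB per part

func main() {
	in, err := os.Open("vm-disk.vmdk") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer in.Close()

	for part := 0; ; part++ {
		out, err := os.Create(fmt.Sprintf("vm-disk.vmdk.%03d", part))
		if err != nil {
			panic(err)
		}
		// Copy at most partSize bytes into this part.
		n, err := io.CopyN(out, in, partSize)
		out.Close()
		if err == io.EOF {
			if n == 0 {
				os.Remove(out.Name()) // nothing was left to write
			}
			break
		}
		if err != nil {
			panic(err)
		}
	}
}
```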

I don’t have a solution, but would it be possible for those smarter than I am to combine many files into a single one (of a predetermined size), with an index of sorts that gives rclone what it needs to break the parts back out for comparison operations? Even if not compressed…
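To make that concrete, here is a rough sketch (in Go, since that’s what rclone is written in) of the kind of per-file index I’m imagining. The field names and layout are purely my own invention, not anything rclone does today:

```go
// Package bundle sketches the metadata a chunk could carry so a sync tool
// could compare the individual files inside it without re-downloading it.
package bundle

import "time"

// Entry describes one small file stored inside a larger chunk.
type Entry struct {
	Name    string    `json:"name"`     // original relative path
	Offset  int64     `json:"offset"`   // byte offset within the chunk
	Size    int64     `json:"size"`     // length in bytes
	ModTime time.Time `json:"mod_time"` // original modification time
	CRC32   uint32    `json:"crc32"`    // checksum of the original file
}

// Index is written alongside (or at the end of) each chunk, so a later
// sync run only needs the index to decide which files have changed.
type Index struct {
	ChunkName string  `json:"chunk_name"`
	Entries   []Entry `json:"entries"`
}
```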

So let’s say I have 100 files, each 100MB in size, for a total of 10GB. Each has its own CRC, mod time, size, etc. I can’t transfer the lot any faster than 2 files/second, so at best 50 seconds (optimistically, I know).

Now imagine I set a target chunk size of 2GB for each transfer. That means I can fit 20 individual files into a single 2GB chunk, which will transfer much more efficiently. The effect is even more dramatic if the files are much smaller, say 200 files of 10MB each.
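Here is that arithmetic as a tiny sketch, just to show the per-file overhead before and after bundling (the 2 files/second figure is the same rough limit mentioned above; nothing here is an existing rclone feature):

```go
// Rough packing arithmetic for the 100 x 100MB example above.
package main

import "fmt"

func main() {
	const (
		fileSize    = 100 << 20 // 100 MiB per file
		fileCount   = 100
		chunkTarget = 2 << 30 // 2 GiB per chunk
		filesPerSec = 2.0     // the per-file rate limit being worked around
	)

	filesPerChunk := chunkTarget / fileSize                   // 20 files fit in one chunk
	chunks := (fileCount + filesPerChunk - 1) / filesPerChunk // 5 chunks in total

	fmt.Printf("unbundled: %d files  -> %.0f s of per-file overhead\n",
		fileCount, float64(fileCount)/filesPerSec) // 50 s
	fmt.Printf("bundled:   %d chunks -> %.1f s of per-file overhead\n",
		chunks, float64(chunks)/filesPerSec) // 2.5 s
}
```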

So I’m trying to think of a way to represent lots of little files in a single large file, leaving the option of an associated ‘index’ file open for discussion.
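And here is a minimal sketch of what producing such a chunk plus a sidecar ‘index’ file could look like, assuming plain concatenation with no compression. The file names and index format are just illustrative, not an existing format:

```go
// bundle.go: concatenate small files into one chunk and write a JSON
// sidecar index describing where each file lives inside it.
package main

import (
	"encoding/json"
	"hash/crc32"
	"io"
	"os"
)

type entry struct {
	Name   string `json:"name"`
	Offset int64  `json:"offset"`
	Size   int64  `json:"size"`
	CRC32  uint32 `json:"crc32"`
}

func main() {
	inputs := []string{"a.dat", "b.dat", "c.dat"} // hypothetical small files

	chunk, err := os.Create("chunk-000.bundle")
	if err != nil {
		panic(err)
	}
	defer chunk.Close()

	var index []entry
	var offset int64
	for _, name := range inputs {
		f, err := os.Open(name)
		if err != nil {
			panic(err)
		}
		// Copy the file into the chunk while computing its CRC on the fly.
		crc := crc32.NewIEEE()
		n, err := io.Copy(io.MultiWriter(chunk, crc), f)
		f.Close()
		if err != nil {
			panic(err)
		}
		index = append(index, entry{Name: name, Offset: offset, Size: n, CRC32: crc.Sum32()})
		offset += n
	}

	// The sidecar index is all a later sync run would need to compare files.
	out, err := os.Create("chunk-000.bundle.idx")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	if err := json.NewEncoder(out).Encode(index); err != nil {
		panic(err)
	}
}
```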

Thoughts?