--transfers vs --checkers ratio


#1

What should the ratio between these be (especially for Amazon Drive)?
I see the default is 4 transfers and 8 checkers.


#2

That 4 & 8 is an OK default. You’ll have to experiment to see what suits you best. You might find you want more transfers if you tend to transfer lots of small files.
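For example, here is a minimal sketch of how you might experiment (the remote name "acd:" and the paths are hypothetical; treat the numbers only as starting points):

    # Large files: the defaults are usually enough to saturate the link
    rclone copy /data/videos acd:videos --transfers 4 --checkers 8

    # Lots of small files: more parallel transfers can help keep the pipeline busy
    rclone copy /data/photos acd:photos --transfers 16 --checkers 32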


#3

At the moment I'm using --transfers=45 --checkers=50.
I'm transferring mostly video files, from 1.5GB to 25GB, plus 1 or 2 subtitle files per video (encfs encrypted).

Current stats: 90% transferred with rclone. (Before this I was using acd_cli upload; I'm still using acd_cli for mounting the drive.)


#4

Hello Ajki, everyone,

I’m using a similar setup (encrypted ACD remote) and have transferred similar-sized files with no issues using the default parameters, maxing my bandwidth here (about a dozen MB/s).

Right now, I’m having problems transferring a large directory tree of mostly small files (actually, my main system backup): I can’t seem to transfer more than about 3 files per second, even for very small files (a few KBs each), and I can’t get past about 500KB/s. I’ve increased the number of checkers and transfers to 128 each, but no luck; some cursory analysis seems to show that Amazon is doing some kind of throttling.

Anyone else here doing anything similar, care to share your experiences?

Cheers,

Durval.


#5

I wouldn’t be surprised. I know that Google Drive limits transfers to about 2 per second.
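If your rclone version has it, the --tpslimit flag can cap HTTP transactions per second so you stay under a limit like that instead of getting throttled. A minimal sketch (the remote name "gdrive:", the path, and the number are hypothetical):

    # Cap rclone at roughly 2 HTTP transactions per second
    rclone copy /data/small-files gdrive:backup --tpslimit 2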


#6

Your comment about 2-3 transfers per second is for Google Drive. Do we have similar limits for GCS (using Google buckets)? Is there a way to batch the transmission of small files to reduce the number of transactions?


#7

No, since you pay for transactions, you can go as fast as you like with GCS.

There isn’t a way to batch them in Google Drive. I haven’t investigated for GCS.


#8

I did some investigation:
https://cloud.google.com/storage/quotas

One could use object composition, which allows up to 32 objects to be composed in a single composition request. This may be a valuable enhancement for small files.
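For reference, a compose request with gsutil looks roughly like this (the bucket and object names are hypothetical); up to 32 source objects can be combined per request:

    # Combine several small objects into one composite object at the destination
    gsutil compose gs://my-bucket/part-1 gs://my-bucket/part-2 gs://my-bucket/part-3 gs://my-bucket/combined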


#9

Interesting, didn’t know about object composition. Will think upon that!


#10

Thanks. I can help contribute to this, with some guidance from you.


#11

I’m not sure how you would fit batched uploads into rclone. I think the drivers would have to declare that they support batch upload, and the higher levels would have to assemble the batches.


#12

Yes. For example, Google Cloud Storage would declare that it has batch upload support. When we run rclone config and choose a specific storage backend, we could query its support for batched uploads. As for making the batches, we could probably take a simple approach: take the batch size as a config parameter, keep adding files to a batch during the upload process until we reach that size, and then upload the batch.


#13

It seems I got it wrong. It is the other way round: it is meant for splitting a large file into smaller chunks, uploading the chunks in parallel, and then composing them at the destination (GCS).
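For illustration, gsutil already exposes this pattern as parallel composite uploads (the file name, bucket, and threshold below are hypothetical examples):

    # Files larger than 150M are split into chunks, uploaded in parallel, then composed at the destination
    gsutil -o "GSUtil:parallel_composite_upload_threshold=150M" cp bigvideo.mkv gs://my-bucket/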