Rclone (v1.46) using millions of b2_list_file_names calls

I am a newbie with rclone. Have a question about sync to b2.

We're syncing about 6 million files and everything seems to work fine, except this: we're using up millions of class-C API calls, or really just this one: b2_list_file_names. For this month, up till now, it's at 36,530,810.

I have tried --fast-list and that seems to do a whole lot better, but the sync crashes: fatal error: out of memory

I think I'm doing something really really wrong but I don't know what.

What's the memory on the system?

It is a tradeoff... Either you don't use --fast-list and rclone does an API call for each directory, or you do use --fast-list and rclone has to buffer all 6 million objects in memory before the sync starts.

6M objects will probably take 3-6 GB of RAM.

One thing you can do is break the sync up: if you have 5 top-level directories, sync each of those individually with --fast-list.
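A minimal sketch of that per-directory approach, assuming the data lives under a local /data tree and the remote is called b2:my-bucket (both placeholders, as are the directory names) — the commands are printed with echo here so you can review them before actually running anything:

```shell
# Sync each top-level directory separately, so --fast-list only has to
# buffer one directory's worth of objects in memory at a time instead of
# all 6 million. Replace the paths, bucket, and directory names with yours.
for dir in projects photos backups; do
  echo rclone sync "/data/$dir" "b2:my-bucket/$dir" --fast-list
done
```

Drop the echo once the printed commands look right. Adding --dry-run on a first pass is also a cheap way to confirm the source/destination pairs line up before any data moves.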

The server has 16 GB of RAM.

You say: "rclone does an API call for each directory". That would explain the number of API calls, but does it really do a b2_list_file_names call for every directory? My understanding is that it would make such a call for every 1,000 files, so about 6,000 per sync instead of 1.6 million per sync (36.5 million so far, over 22 syncs).
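The arithmetic behind that 6,000 figure can be checked directly, assuming the 1,000-files-per-call page size mentioned above:

```shell
# Rough check of the listing-call arithmetic: with --fast-list, a full
# listing of 6 million files at an assumed 1,000 files returned per
# b2_list_file_names call should take about this many calls per sync.
total_files=6000000
files_per_call=1000   # assumed page size per b2_list_file_names call
echo $(( total_files / files_per_call ))   # prints 6000
```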

Yes, unless you use --fast-list

That is correct for --fast-list.

Ok, that's clear then. Thank you ncw for the help.

