Syncing to Backblaze B2 extremely slow unless I enable --fast-list

What is the problem you are having with rclone?

Maybe this is not a problem but instead a SOLUTION to a problem: I was plagued by extremely low upload speeds (around 100 KB/s) for several years when syncing borgbackup repositories to Backblaze B2. My problems were previously described in the (now locked) forum thread here. The underlying cause was found to be lots of 503 errors being thrown by Backblaze, while Backblaze support assured me that everything was OK at their end. None of the proposed solutions (e.g. increasing the --transfers value) worked.

Recently I found that simply adding the "--fast-list" option to the rclone command increased my upload speeds from around 100 KB/s to at least 2 MB/s (sometimes up to 5 MB/s). That's at least a 20x speed increase, and it's absolutely consistent (tried 5 times without the --fast-list option and 5 times with it). I don't know if this is a bug in rclone or at Backblaze's end, but this info might help other afflicted users.

N.B.: My borgbackup repositories consist of multiple long files (up to 500 MB) and many more short 17-byte files (for snapshots that did not change compared to the previous snapshot). The typical file composition of my borgbackup repository can be seen here.

What is your rclone version (output from rclone version)


Which OS you are using and how many bits (eg Windows 7, 64 bit)

Xubuntu Linux

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv sync /home/fuxoft/sklad_internal/backup/frantisek_borg/Lightworks b2:borgbackup-frantisek/Lightworks

The rclone config contents with secrets removed.

type = b2

#master account and key
account = e06xxxxxxxx
key = 00248xxxxxxxxxxxxxxxxxxxx

#App key for borgbackup-frantisek app - works since rclone 1.43
#key = K0023xxxxxxxxxxxxxxx
#account = 002e06xxxxxxxxxxxxxx

endpoint = 

EDIT: Fixed missing "sync" directive in the command.

--fast-list makes listing a bit faster and in most cases speeds things up.

What's the full command you are running? With B2, you can usually crank up the transfers and checkers from the defaults and see what works better as well.

The full command I am running is quoted in my original post. It had no special options. Simply adding "--fast-list" to that basic command makes the upload speed at least 20x faster. I experimented with "transfers" previously (see my original forum thread linked above) but it had no noticeable effect whether I set it to "32" or "1" in my case.

I think you have a typo, which is why I asked: in the original post, there is no actual command.

rclone -vv /home/fuxoft/sklad_internal/backup/frantisek_borg/Lightworks b2:borgbackup-frantisek/Lightworks

Oops, sorry, there is "sync" missing. Fixed.

Adding --fast-list uses a single recursive directory listing (which will have 1000 files per transaction) rather than listing each directory individually. It stores those in RAM first before starting the sync.

It really helps for transfers with lots and lots of directories. However, looking at your listing, I don't see lots of directories (unless I'm reading it wrong), which is puzzling, so I wouldn't have thought it would make a big difference.
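As a back-of-envelope sketch of the listing-transaction arithmetic: the 702-object count is taken from the `rclone size` output in this thread, the 1000-files-per-transaction cap is from the explanation above, and the directory count of 3 is a purely hypothetical assumption for illustration.

```shell
# Rough listing-transaction counts for a small B2 bucket.
objects=702        # from the `rclone size` output in this thread
per_request=1000   # files returned per listing transaction
dirs=3             # hypothetical assumption for illustration

# --fast-list: one recursive listing, paginated at 1000 files per request
fast_list_requests=$(( (objects + per_request - 1) / per_request ))

# default listing: at least one listing request per directory
default_requests=$dirs

echo "fast-list: $fast_list_requests request(s); per-directory: at least $default_requests"
```

With so few objects and directories, both approaches need only a handful of listing calls, so a large speed difference from listing alone would be surprising.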

Can you try these and see the difference?

time rclone size b2:borgbackup-frantisek/Lightworks
time rclone size --fast-list b2:borgbackup-frantisek/Lightworks

Could you do a sync (preferably one that doesn't transfer any data) with the debug flags for me? So with -vv --dump headers, both with and without --fast-list - something like

rclone -vv sync /home/fuxoft/sklad_internal/backup/frantisek_borg/Lightworks b2:borgbackup-frantisek/Lightworks --fast-list --dry-run --log-file with-fast-list.log --dump headers
rclone -vv sync /home/fuxoft/sklad_internal/backup/frantisek_borg/Lightworks b2:borgbackup-frantisek/Lightworks --dry-run --log-file without-fast-list.log --dump headers

That will show what transfers rclone does and should give some insight.

One idea... It might be that your disk subsystem hates checking and transferring at once. If so, you could try this flag without --fast-list and see if that helps.

  --check-first            Do all the checks before starting transfers.


Total objects: 702
Total size: 42.978 MBytes (45065346 Bytes)

real	0m4.087s
user	0m0.072s
sys	0m0.019s
Total objects: 702
Total size: 42.978 MBytes (45065346 Bytes)

real	0m2.679s
user	0m0.095s
sys	0m0.005s

with-fast-list log
without-fast-list log

Maybe I should emphasize that without "--fast-list" option, the rclone -vv output is full of 503 errors and messages about rate limiting (see my original forum post linked above). With "--fast-list" option, there are none.

That isn't a dramatic difference...

$ grep -c HTTP with-fast-list.txt without-fast-list.txt 

When you do a sync normally do you sync more objects?

Looking at the original thread the messages are 503 too busy errors. There weren't any of those in the log you sent just now.

With --fast-list, the --checkers are effectively single-threaded - I wonder if that makes a difference here. You could try a sync without --fast-list but with --checkers 1 to see what difference that makes. Normally rclone would use 8 threads to do listings - I wonder if that is what B2 doesn't like?

The 503 errors only appear when I actually upload some new data. The scripts that generated the logs above didn't involve any uploading because the local data was already in sync with the B2 stored data.

Unfortunately, I have to inform you that even when I revert to the "raw" rclone command, without "--fast-list" and without "--checkers", and generate new data for the borgbackup repository, the upload is currently consistently fast (5-10 MiB/s) and without hiccups. I am unable to reproduce the 100-200 KiB/s slow uploads that plagued me for several years. :frowning:

I will attempt to create a script that generates specific data that reliably reproduces the slow upload problem and then I'll start a new thread. Sorry about wasting your time :frowning:

Let me know if it continues that way.

I would use --fast-list if you've got enough memory to keep all the objects in memory. It saves transactions which will save you money and will run faster.
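To gauge whether you have enough memory for --fast-list, here is a rough sketch. The ~1 KiB-per-object figure is an approximation (the rclone docs quote roughly 1k of memory per object for in-memory listings); treat both numbers as assumptions, since actual usage varies by backend and metadata.

```shell
# Rough --fast-list memory estimate: assume ~1 KiB of RAM per listed object.
objects=1000000    # hypothetical bucket size
kib_per_object=1   # approximate figure; actual usage varies

echo "approx $(( objects * kib_per_object / 1024 )) MiB for $objects objects"
```

For a bucket the size of the one in this thread (702 objects), the memory cost is negligible; it only becomes a consideration with millions of objects.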

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.