Strategies for large B2 syncs and transaction handling

I'm doing a large sync from my NAS to B2 - it's an initial upload of around 3,000 folders, containing around 500,000 images (total of about 3.5TB). My command line is:

rclone copy /volume1/photo bb:my-photos/Photo --transfers 32 --exclude-from=rclone_exclude.txt --log-file=/var/services/homes/admin/rcloneb2.log --config=/volume1/homes/admin/.rclone.conf

This is using v1.52.3, on a Synology NAS. The problem I'm having is managing the transaction usage. I've tried using --fast-list to reduce transactions. However, I'm monitoring bandwidth usage and it's sustaining 2-2.5 MB/s, but after about 30-40 minutes the rate drops off as if nothing's being copied.

If I kill the process and restart, it immediately starts copying again, so I'm presuming that the drop in bandwidth is due to the process running out of memory (per the fast-list docs). This may be because of the high number of directories, or the large numbers of files in some of them (there are a couple of folders with 10k and 23k files respectively).

I'm trying again without --fast-list, to see if that solves it, but monitoring the transaction costs closely in case they start getting out of hand.
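One workaround I may try if memory stays a problem (a sketch only, untested; assumes a POSIX shell on the NAS and that splitting the job is acceptable): run the copy one top-level folder at a time, so each listing pass only has to hold a single subtree in memory rather than all 3,000 folders at once.

```shell
# Sketch (untested): copy one top-level folder at a time so each
# listing pass only has to hold a single subtree in memory.
for dir in /volume1/photo/*/; do
    name=$(basename "$dir")
    rclone copy "$dir" "bb:my-photos/Photo/$name" \
        --transfers 32 \
        --exclude-from=rclone_exclude.txt \
        --log-file=/var/services/homes/admin/rcloneb2.log \
        --config=/volume1/homes/admin/.rclone.conf
done
```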

I guess my question is, are there any other strategies/tips to improve the way this'll work, or optimise for such a large upload? Should I consider using the cache backend? Will it help?

when the problem starts, you need to look at the debug log.
perhaps remove --transfers and see what happens with the default value of 4

when you posted, you should have been asked for this information:

What is the problem you are having with rclone?

What is your rclone version (output from rclone version)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Which cloud storage system are you using? (eg Google Drive)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste command here

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

Yes, I was asked for that information, and included all of those items (except the log) in my post.

I don't want to use the default value of 4 for transfers, because B2 and rclone recommend 32 as the optimum. But I don't think the number of transfers is the issue here.

I'll try again and see if there's anything in the debug log, but this post was more asking whether anyone has best-practice settings for this sort of upload.

sorry, i thought you had a problem, as you posted this.
to understand what is happening, a debug log from that point in time would be helpful.

so i thought you did not know how to check ram usage on the synology box, or what the exact problem is. for the transaction costs, have a look at --no-check-dest:
"the destination is not listed minimising the API calls"
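for what it's worth, one quick way to watch rclone's memory use on the box (assumes a procps/busybox-style ps, as DSM ships):

```shell
# show resident memory (RSS, in kilobytes) of any running rclone process;
# prints a note instead of failing if rclone is not running
ps -o pid,rss,args | grep '[r]clone' || echo "no rclone process found"
```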


Oh, nice - no-check-dest looks perfect for the initial upload. Will give it a go, thanks!
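For reference, slotting that flag into the command from my first post would look like this (untested sketch, same paths and flags as above):

```shell
rclone copy /volume1/photo bb:my-photos/Photo \
    --transfers 32 \
    --no-check-dest \
    --exclude-from=rclone_exclude.txt \
    --log-file=/var/services/homes/admin/rcloneb2.log \
    --config=/volume1/homes/admin/.rclone.conf
```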

and perhaps this



take a few minutes to read/scan these two pages, then you will know what is possible.

Thanks, I read them earlier today. Appreciate the feedback. --no-traverse won't work, as this is a very large upload. But I think I'm good now - it's churning along okay.

Thanks for the help!

sure, glad to help solve your problem.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.