I'm doing a large sync from my NAS to B2 - it's an initial upload of around 3,000 folders containing around 500,000 images (about 3.5TB in total). My command-line is:
This is using v1.52.3 on a Synology NAS. The problem I'm having is managing the transaction usage. I've tried using --fast-list to reduce transactions. I'm monitoring bandwidth usage and it's pulling 2-2.5MB/s, but after about 30-40 minutes it drops off as if nothing is being copied.
If I kill the process and restart it, it immediately starts copying again, so I'm presuming the drop in bandwidth is due to the process running out of memory (per the --fast-list docs). This may be down to the high number of directories, or the large number of files in some of them (there are a couple of folders with 10k and 23k files respectively).
I'm trying again without --fast-list to see if that solves it, while monitoring the transaction costs closely in case they start getting out of hand.
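For anyone following along, the sort of invocation I'm experimenting with looks something like this - a sketch only, with placeholder paths and bucket name, not my exact command (--transfers, --checkers and --tpslimit are all standard rclone flags):

```shell
# Hypothetical example - source path and bucket are placeholders.
# --transfers 32 : parallel file uploads (the value suggested for B2)
# --checkers 16  : parallel checkers comparing source and destination
# --tpslimit 10  : cap API transactions per second, which is what drives
#                  B2's transaction (class B/C) charges
rclone sync /volume1/photos B2:my-bucket/photos \
  --transfers 32 \
  --checkers 16 \
  --tpslimit 10
```

--tpslimit trades some speed for a predictable transaction rate, which may be a reasonable swap for an initial bulk upload like this.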
I guess my question is: are there any other strategies or tips to optimise this sort of large upload? Should I consider using the cache backend - would it help?
Yes, I was asked for that information, and included all of those items (except the log) in my post.
I don't want to use the default value of 4 for transfers, because B2 and rclone recommend 32 as the optimum. But I don't think the number of transfers is the issue here.
I'll try again and see if there's anything in the debug log, but this post was more asking whether anyone has best-practice settings for this sort of upload.
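In case it helps anyone searching later, capturing that debug log is just a matter of adding two flags to the existing command (the log path here is a placeholder):

```shell
# Hypothetical log path - writes a DEBUG-level log to a file for later review
rclone sync /volume1/photos B2:my-bucket/photos \
  --log-level DEBUG \
  --log-file /volume1/rclone-debug.log
```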
Thanks, I read them earlier today - appreciate the feedback. --no-traverse won't work, as this is a very large upload. But I think I'm good now; it's churning along okay.