What is the problem you are having with rclone?
Rclone keeps failing with `fatal error: concurrent map writes` during `rclone sync`.

This is the second time in a row it has happened, but due to unknown errors with my terminal, my first saved output log had the traceback entirely cut off and is unfortunately not useful. The first command was almost identical except for being `copy` instead of `sync`, not having `--check-first`, and having `--fast-list`.
This post covers the second failure, which I have the full output of.

As related additional context: I have encountered this same error with `rclone mount`, where it fails seemingly randomly after many days of uptime. I don't have full logs for those failures and they don't happen as often as with `rclone sync`, so they are not the focus of this post.
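For context on what that error actually is (a minimal sketch, not rclone's code): `fatal error: concurrent map writes` is the Go runtime detecting two goroutines writing to the same map without synchronization. It is not a recoverable panic, which is why the whole process dies instead of retrying. A standalone toy reproduction:

```go
// Toy reproduction of the class of bug (not rclone code): two goroutines
// writing the same map with no lock. The Go runtime detects this and
// aborts the process with "fatal error: concurrent map writes".
package main

import "sync"

func main() {
	m := make(map[int]int)
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			for j := 0; j < 100000; j++ {
				m[j] = n // unsynchronized write: fatal, unrecoverable
			}
		}(i)
	}
	wg.Wait()
}
```

So as far as I understand it, this is a race inside rclone itself (some shared map presumably needs a `sync.Mutex` or `sync.Map`), and nothing I pass on the command line can work around it.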
Run the command 'rclone version' and share the full output of the command.
```
rclone v1.69.0
- os/version: arch (64 bit)
- os/kernel: 6.12.8-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.4
- go/linking: static
- go/tags: none
```
Which cloud storage system are you using? (eg Google Drive)
Backblaze B2
The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
```
rclone sync --check-first --progress --metadata --links --transfers 30 --checkers 30 'crypt-on-b2-bucket:Path/To/Copy/' /Path/to/Dest/ --exclude "/Folder1/**" --exclude "/Folder2/**" --exclude "/Folder3/**"
```
The destination is a local USB hard drive. I don't expect it to matter, but the external drive is using ZFS.
The rclone config contents with secrets removed.
```
[b2]
type = b2
download_url = <cloudflare download proxy for free egress>

[crypt-on-b2-bucket]
type = crypt
remote = b2:b2-bucket
password = <redact>
password2 = <redact>
```
The B2 account and key are provided to the command via environment variables, shown below.
```
RCLONE_BWLIMIT=40M:off
RCLONE_TRANSFERS=32
RCLONE_B2_ACCOUNT=<redact>
RCLONE_B2_KEY=<redact>
```
A log from the command with the `-vv` flag
I have redacted personal information in paths. Structure has been preserved, and paths with the same name have the same layout as in the source: unique names are unique directories, and identical names are the same directory.
The command was not initially run with `-vv`, and I cannot and will not run it again with it, for two reasons:

1. With `--check-first` it takes a full 7 hours to start, and even without `--check-first` it takes anywhere from 30 minutes to an hour just to start, only to fail almost immediately. Which, I will say, was not pleasant to wake up to; I was expecting to wake up to what I knew would be a long multi-day transfer going merrily along. For illustration, the `--progress` stats just before transfers started, and again at the failure:
```
Transferred: 0 B / 5.841 TiB, 0%, 0 B/s, ETA -
Checks: 56138 / 56138, 100%
Transferred: 0 / 3591442, 0%
Elapsed time: 6h42m48.0s

Transferred: 2.958 GiB / 5.909 TiB, 0%, 26.154 MiB/s, ETA 2d17h46m
Checks: 56138 / 56138, 100%
Transferred: 229 / 3735359, 0%
Elapsed time: 7h4m26.0s
Transferring:
```
2. This command by itself costs over $10 USD in Backblaze Class C API call fees: roughly 2,698,463 calls, almost all to `b2_list_file_names`. Even without `--check-first`, which in hindsight was ill-advised to use, it costs about $5 USD, and only that little because it fails relatively quickly; it would presumably use roughly the same total number of calls if it actually finished running.
I am simply not going to run it again knowing it will fail; with the time and monetary cost involved, I cannot afford to keep running a command that won't work.
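For reference, the arithmetic behind that figure, assuming Backblaze's published Class C rate of $0.004 per 1,000 transactions: 2,698,463 calls ÷ 1,000 × $0.004 ≈ $10.79, which is where "over $10 USD" comes from.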
I'm trying to use rclone as part of a plan to reduce my cloud costs in the first place, by reducing and consolidating the absurd number of mostly small files at these paths that is responsible for so many `b2_list_file_names` calls (download to a fast local disk -> operate on it -> sync the fixed, consolidated data back up).
Another, possibly related, issue is the NOTICEs `rclone sync` displays:
```
2025/01/23 20:04:40 NOTICE: FolderA/FolderB/FolderC: Duplicate directory found in source - ignoring
2025/01/23 20:04:40 NOTICE: FolderA/FolderB/FolderD: Duplicate directory found in source - ignoring
2025/01/23 20:04:40 NOTICE: FolderA/FolderB/FolderE: Duplicate directory found in source - ignoring
2025/01/23 20:04:40 NOTICE: FolderA/FolderB/FolderF: Duplicate directory found in source - ignoring
2025/01/23 20:04:40 NOTICE: FolderA/FolderB/FolderG: Duplicate directory found in source - ignoring
```
Key facts:

- The source remote is crypt, on top of a B2 bucket.
- This source cannot have duplicates: https://rclone.org/overview/ lists "Duplicate Files -> No" for it.
- This source does not have directories at all.
- Running `rclone dedupe` anyway helpfully warned "Can't have duplicate names here. Perhaps you wanted --by-hash ? Continuing anyway." and then, after finishing, reported nothing, because it is literally impossible for duplicates to exist.
- I could find no duplicates manually with any of the `rclone ls` commands, because they obviously do not and cannot exist.

`rclone sync` believes duplicates exist despite all these facts.
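My only guess, and it is only a guess, is that since B2 has no real directories, any "directory" rclone reports has to be synthesized from the object paths during listing, and something in that synthesis (or in how concurrent checkers share its results) is producing the same directory entry more than once. A toy illustration of the idea, not rclone's actual listing code:

```go
// Toy illustration: a bucket-based remote has no real directories, so
// "directories" must be derived from flat object keys. A derivation
// that does not deduplicate yields the same directory once per object
// inside it.
package main

import (
	"fmt"
	"path"
)

func main() {
	keys := []string{
		"FolderA/FolderB/FolderC/file1",
		"FolderA/FolderB/FolderC/file2",
		"FolderA/FolderB/FolderD/file1",
	}
	for _, k := range keys {
		fmt.Println(path.Dir(k)) // FolderA/FolderB/FolderC prints twice
	}
}
```

If duplicated synthesized directory entries are also what multiple workers end up writing into a shared map, that would tie the NOTICEs and the crash together, but I can't verify that from the outside.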