Rclone crashes macOS on local transfers

macOS Ventura 13.1: any time I use rclone to push a local file from a Mac to a mounted SMB volume, it works at full speed for about 1-2 minutes, then starts counting up until the laptop just reboots itself.

The rclone debug output just shows the files going out, then that's it. It ends there.

which rclone
/opt/homebrew/bin/rclone
rclone --version
rclone v1.61.1
- os/version: darwin 13.1 (64 bit)
- os/kernel: 22.2.0 (arm64)
- os/type: darwin
- os/arch: arm64
- go/version: go1.19.4
- go/linking: dynamic
- go/tags: none

rclone copy -i ~/Desktop/temp_photo/ /Volumes/Illmatic/

Crash logs show a WindowServer crash.

Any ideas where to go from here?

I uninstalled the Homebrew rclone and installed rclone directly; same result. The transfer works as expected for a few minutes, then it starts freaking out: everything gets extremely laggy and WindowServer starts eating all the CPU/threads. Leave it long enough and it black-screens, and eventually the machine reboots.

Check the RAM usage. Is rclone using too much RAM?

It doesn't appear to be. Things run smoothly for about 5-7 minutes, then it just locks up until the client restarts or you force a power cycle. I can hear the drives on the mounted volume start making noise right as everything locks up.

I am pushing to a mounted SMB volume through the OS, using local paths for the data. Not sure if that is a variable. But I've crashed a Mac Mini M1 and the 14" MBP M1 Max a few dozen times while pushing data over a 10GbE network.

I don’t even have macFUSE installed, but since upgrading to Ventura, Finder has been crashing frequently when any network filesystem is mounted.

Mounting with the nobrowse option helps sometimes, but I think it would be safest to add the SMB volume as an rclone remote so it doesn't need to be mounted (or can be mounted read-only).
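
For the nobrowse route, a manual mount looks something like this (user/server/share names are placeholders, and I'm assuming mount_smbfs passes generic mount options such as nobrowse and rdonly through -o):

mkdir -p ~/smb-ro
# nobrowse keeps the volume out of Finder; rdonly mounts it read-only
mount_smbfs -o nobrowse,rdonly //user@server/share ~/smb-ro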

That would be a pretty big bug considering SMB is Apple's official choice. Wouldn't surprise me, I guess. I can't recall if I had the issue in Monterey, but I know I've used rclone to push a lot of video production media around.

I haven't been able to use rclone with the config set up for an SMB remote on a Mac. Either I am bad at searching or Google isn't what it used to be, but the configuration isn't very straightforward.

When I have an SMB remote configured, it just returns Failed to create file system for "remote:test": didn't find section in config file

I pushed the same set of files to a locally mounted APFS volume. It got sluggish in the middle of a 1TB transfer, but it seems fine overall; at least it's further than any network share has gotten in the last two days, and it did complete without crashing the machine. So maybe it is SMB in Ventura. I wish I had a Monterey machine around to test with.

2023/01/16 21:06:57 NOTICE:
Transferred:   	 1021.036 GiB / 1021.036 GiB, 100%, 410.034 MiB/s, ETA 0s
Transferred:        42163 / 42163, 100%
Elapsed time:     41m46.0s

In general rclone shouldn't be able to crash your machine, and if it can that is a bug in the OS.

There are some caveats with that statement though:

  • if rclone uses up too much memory then it can make machines behave oddly
  • using macFUSE which has its fingers in the kernel can cause OS crashes

So I'm going to say it's a bug in the OS, SMB or a network driver.

I'd see if I could reproduce it with cp and if so report it as a bug.
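
A rough way to test that, reusing the paths from the original command: run a single cp first, then a few in parallel to roughly match rclone's default of --transfers=4 (the cp-test destination names are just examples):

# single stream
cp -R ~/Desktop/temp_photo /Volumes/Illmatic/cp-test
# then roughly mimic rclone's default concurrency of 4 transfers
for i in 1 2 3 4; do
  cp -R ~/Desktop/temp_photo "/Volumes/Illmatic/cp-test-$i" &
done
wait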

You should be able to use the config wizard, rclone config. What did you call the remote? Did you call it remote or something else? rclone config listremotes will show you what you called your remotes.
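
For example, if listremotes prints nas: (placeholder name), then nas is exactly what has to appear before the colon in your paths:

rclone config listremotes
# prints the configured remote names, one per line, e.g.
#   nas:
rclone lsd nas:    # lists the top level of that remote (the shares, for an SMB remote)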

an SMB remote on a Mac

Is the SMB server/remote also a Mac? If so, you might need to add a new user on the server (tick ‘Sharing Only’), enable that user under File Sharing > Advanced Options, and use that user in the rclone config wizard.

Failed to create file system for "remote:test": didn't find section in config file

I had this today; there was a rogue : somewhere in my config file.
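
For comparison, a working SMB section looks roughly like this (the section name, address and username are placeholders; the pass line is written, obscured, by rclone config rather than typed in by hand):

[nas]
type = smb
# host is the address of the SMB server itself
host = 192.168.1.50
user = username
pass = ...

With that in place the destination is written as nas:ShareName/path rather than a /Volumes path.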

it got sluggish in the middle of a 1TB transfer

You could always do rclone serve webdav on the server and connect to that with -vvv to troubleshoot.
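
Concretely, something along these lines; the port, address and the server-side path are assumptions about the setup:

# on the server, export the data directory over WebDAV with rclone
rclone serve webdav /path/to/share --addr :8080

# on the Mac, point a webdav remote at it (added via rclone config)
[naswebdav]
type = webdav
url = http://192.168.1.50:8080
vendor = other

# then copy through it with full debug logging
rclone copy ~/Desktop/temp_photo naswebdav:temp_photo -vvv -P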

Alternatively, I've seen ChronoSync recommended a lot as an alternative; it claims to be much faster than the default setup for Mac <-> Mac transfers.

What is the hostname supposed to be for the remote? The documentation shows localhost.

I am using TrueNAS / ZFS on the remote server over an SMB connection.

smb://username@192.168.11.10/VolumeName/ is how I typically connect.

I was able to send to an offsite remote and to a local / USB disk without crashing the OS.

-vv / logs don't really seem to show anything. The files are moving, then it just stops. rclone doesn't appear to be consuming a ton of RAM, but once things start to hang I can't see results anymore. The client gets REALLY sluggish and unresponsive until it goes dark.

I have access to macOS Monterey today, so I will try that next.

Well, on macOS 12.6.2, rclone with local volume paths (rclone copy -i /Volume/pathfrom /Volume/pathto -P) seems to work. I transferred 256 GB at about 450 MB/s over 10GbE.

Pushing 1TB now to see if it holds up.

I'm not sure I follow on the webdav suggestion though.

#toosoon

Crashed on the next pass.

Crash report when macOS rebooted

No, the remote is a TrueNAS / ZFS server with an SMB share.

Hi William,

I haven't read in detail and I am mostly on Windows, but perhaps there is a part of your setup that doesn't play well with the high level of concurrency used by rclone.

You can try reducing rclone concurrency by adding --checkers=1 --transfers=1 to your command and see if that has a stabilizing effect.
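
With the command from the top of the thread that would be:

rclone copy -i ~/Desktop/temp_photo/ /Volumes/Illmatic/ --checkers=1 --transfers=1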

Since the SMB volume is mounted, why don't you just use the OS default cp?

I am experiencing the same symptoms as @wdp, also with macOS Ventura 13.1 (Intel) and rclone v1.61.1.

But in my case it is not with an SMB mount, but with an external disk connected by Thunderbolt.

Neither RAM nor processor usage is high. It just seems that everything freezes until the computer becomes unresponsive.

In any case, going back to version v1.59.2 solves the problem.

If that is the case then this is probably something to do with cgofuse / macFUSE. Exactly what, I don't know, so if you can get some logs out that would be useful.

I have the same problems when syncing to my NAS via SMB.
I experimented with the number of transfers & checkers; this seems to help a bit, but it depends on how much data I have to sync.
Perhaps a pause after each X files would fix it? I have no clue.
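
For reference, this is roughly the throttled variant I have been experimenting with (the bandwidth limit and paths are only examples):

# cap throughput with --bwlimit instead of pausing between files
rclone sync ~/data /Volumes/NAS-Share --transfers=1 --checkers=1 --bwlimit 200M -P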