OSX rclone closes itself repeatedly

Running rclone v. 1.49.3 on OSX (Catalina)

I mount my gdrive at startup via an rclonemount.command script attached to the user's login.
The Terminal window stays open, and after large transfers the following happens:

Output from Terminal:
Last login: Mon Oct 14 13:12:02 on ttys000

The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
/Users/username/Documents/Skript/rclonemount.command ; exit;
PCNAME:~ username$ /Users/username/Documents/Skript/rclonemount.command ; exit;
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.

[Process completed]

The rclone log it generated (the mount command is also visible in the log) is as follows:

2019/10/14 13:40:49 DEBUG : UploadDump/movies/50GBRemux.mkv: transferred to remote
2019/10/14 13:40:49 DEBUG : UploadDump/movies/50GBRemux/50GBRemux.mkv(0xc000146e40): >close: err=<nil>
2019/10/14 13:40:49 DEBUG : &{UploadDump/movies/50GBRemux/50GBRemux.mkv (rw)}: >Flush: err=<nil>
2019/10/14 13:40:49 DEBUG : rclone: Version "v1.49.3" finishing with parameters ["rclone" "mount" "gdrive:" "/Volumes/gdrive" "--umask" "000" "--timeout" "1h" "--allow-other" "--allow-non-empty" "--vfs-cache-mode" "minimal" "--dir-cache-time" "5m" "--vfs-read-chunk-size" "64M" "--vfs-read-chunk-size-limit" "2G" "--vfs-cache-max-size" "50G" "--buffer-size" "64M" "--log-file=/Users/username/Documents/logs/rclonelog.txt" "--log-level" "DEBUG"]

I have no idea why or how that happens. I moved the folder containing the file from an external SSD to the gdrive mount. The file moves pretty fast (from the external SSD to the internal SSD). The Finder progress dialog stops a few seconds before completion, and I can see rclone uploading to gdrive when I check my router. Once no more uploading is visible on the router, Finder shows an unexpected error and says it couldn't move the file (error 100057). Rclone then seems to crash?

After that I have to manually remove the mount and remount it again.

Does anyone have an idea?

Thanks a lot in advance and best regards from Switzerland.

lukas

EDIT: Another issue: rclone is "only" uploading at about 320 to 345 Mbit/s. Any chance of increasing that? I have a 1 Gbit/s connection. The Mac mini is wired over Ethernet and can saturate that 1 Gbit/s in various speed tests to different servers in Switzerland and other European countries (Frankfurt, Germany; Paris, France; and obviously Zurich, Switzerland).

The "finishing with parameters" line means something stopped/killed it, as it didn't crash.

For moving large files it's usually much better to use rclone move (or something similar) rather than moving them through the mount, since moving/uploading via the mount is somewhat slow as it's all single threaded.
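
For example, a rough sketch of that (the /Volumes/ExternalSSD source path is only a placeholder; the gdrive: remote and the UploadDump/movies path are taken from the log above):

    rclone move /Volumes/ExternalSSD/UploadDump/movies gdrive:UploadDump/movies \
        --transfers 4 --progress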

Can you update to the latest and post the full debug log?

There was an old issue here which seems similar.

I can help with this at least.
Set this flag:
--drive-chunk-size 64M
or 128M if you have a lot of free memory (the default is only 8M, which is way too low for high throughput).
It helps quite drastically with bandwidth utilization, often by as much as 20-40%. It only affects uploads.

If you prefer, you can set it in the config file instead of on the rclone command line, like this, under the gdrive remote:
chunk_size = 64M
(no need to use both)
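
For reference, a rough sketch of where that line would go in rclone.conf, assuming the remote is called gdrive as in your mount command (the other keys are just what a typical drive remote contains; your actual entry will differ and the token is abbreviated):

    [gdrive]
    type = drive
    scope = drive
    token = {"access_token":"..."}
    chunk_size = 64M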

Be aware that this much memory can be used for EACH active transfer, so with the default 4 transfers that is 64M x 4 = 256M. Don't run out of memory or rclone will crash. Going above 128M has very little benefit, as you get diminishing returns the larger the chunk size gets.

How long are the transfers - longer than 15 minutes?

If so you'll need to increase the --daemon-timeout parameter.

macOS doesn't like it when the file system stops responding for a long time, and it kills the process.
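
As a sketch, this is what adding it to the mount command from your log might look like (2h is only an illustrative value; pick something longer than your longest transfer, and keep the rest of your flags as they are):

    rclone mount gdrive: /Volumes/gdrive \
        --daemon-timeout 2h \
        --umask 000 --timeout 1h --allow-other --allow-non-empty \
        --vfs-cache-mode minimal --dir-cache-time 5m \
        --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G \
        --vfs-cache-max-size 50G --buffer-size 64M \
        --log-file=/Users/username/Documents/logs/rclonelog.txt --log-level DEBUG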

I have a plan to fix this properly with a delayed upload that the file system isn't waiting for, but hopefully that should get it working.

The other thing you can do is run the transfers with rclone copy, which has much better error recovery than the mount.
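
A minimal sketch of that approach (both paths are placeholders; --progress is optional, and the --drive-chunk-size advice from above applies here too):

    rclone copy /Volumes/ExternalSSD/Media gdrive:Media \
        --progress --drive-chunk-size 64M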

Updated to the latest and it still crashed tonight.
I guess the transfers are much longer than 15 minutes. Attached the complete log.txt.

(342 MB removed - no longer necessary)

I didn't find any noticeable errors.

Relevant lines:

2019/10/14 22:12:59 DEBUG : Media/Movies/Anna (2019) - (tt7456310)/Anna 2019 Bluray-2160p Radarr.mkv: ChunkedReader.Read at 4083019776 length 1048576 chunkOffset 3953000448 chunkSize 134217728
2019/10/14 22:12:59 DEBUG : rclone: Version "v1.49.5" finishing with parameters ["rclone" "mount" "gdrive:" "/Volumes/gdrive" "--umask" "000" "--timeout" "1h" "--allow-other" "--allow-non-empty" "--vfs-cache-mode" "minimal" "--dir-cache-time" "5m" "--vfs-read-chunk-size" "64M" "--vfs-read-chunk-size-limit" "2G" "--vfs-cache-max-size" "60G" "--buffer-size" "256M" "--drive-chunk-size" "256M" "--log-file=/Users/lukas/Documents/logs/rclonelog.txt" "--log-level" "DEBUG"]

Amended it with --daemon and --daemon-timeout 2h.

Does that sound sufficient?

@thestigma
Thanks for the tip with --drive-chunk-size. After some testing I am using 256M now. This gives me around 550 Mbit/s. Using 512M leads to higher bursts (up to 650 Mbit/s), but the average is roughly the same.

Provided the transfers are done within 2 hours, then yes.

In your snippet above you set --timeout, which is not the same as --daemon-timeout.
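
Roughly, the two flags control different things (descriptions paraphrased from the rclone docs; the values are the ones from your command and the suggestion above):

    --timeout 1h          # IO idle timeout for connections to the remote
    --daemon-timeout 2h   # time limit for rclone to respond to the kernel (the FUSE mount)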

I couldn't see your log - I got permission denied for some reason.

@ncw
Oops, sorry. I had restricted sharing to my organization only. It should be accessible now:
I set the --daemon-timeout flag after it crashed again last night.
At the moment it is running fine.

OK let me know how it goes!

I looked through the log - I couldn't see anything obvious...

Will do!

Is there any disadvantage in setting --daemon-timeout to, say, 10 hours?

Not really. If rclone did stall then the kernel would notice only after 10 hours. However I expect you'd notice before then and kill it and restart!

I have set the --daemon-timeout to 10h and it seems to work fine now.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.