I mount my Google Drive at startup via an rclonemount.command script attached to the user's login.
The Terminal window stays open, and after large transfers the following happens:
Output from Terminal:
Last login: Mon Oct 14 13:12:02 on ttys000
The default interactive shell is now zsh.
To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
/Users/username/Documents/Skript/rclonemount.command ; exit;
PCNAME:~ username$ /Users/username/Documents/Skript/rclonemount.command ; exit;
logout
Saving session...
...copying shared history...
...saving history...truncating history files...
...completed.
[Process completed]
The rclone log it generated (see also the mount commands in the log) is as follows:
I have no idea why or how that happens. I moved a folder containing the file from an external SSD to the gdrive mount. The file moves pretty fast (from the external SSD to the internal SSD). The Finder GUI stalls a few seconds before completion, and I can see rclone uploading to gdrive when I check my router. Once no more uploading is visible on the router, Finder shows an unexpected error and says it couldn't move the file (Error 100057). Rclone then seems to crash?
After that I have to manually remove the mount and remount.
Does anyone have an idea?
Thanks a lot in advance and best regards from Switzerland.
lukas
EDIT: Another issue: rclone is "only" uploading at about 320 to 345 Mbit/s. Any chance of increasing that? I have a 1 Gbit/s connection. The Mac mini is wired over Ethernet and able to saturate that 1 Gbit/s in various speed tests to different servers in Switzerland and other European countries (Frankfurt, Germany; Paris, France; and obviously Zurich, Switzerland).
Those shutdown messages mean something stopped/killed rclone, as opposed to it crashing on its own.
For moving large files it's usually much better to use rclone move or similar, as moving/uploading through the mount is somewhat slow because it's all single-threaded.
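For example, a direct move with rclone instead of going through the mount might look like this (a sketch only: the remote name `gdrive:` and the paths are placeholders, adjust them to your own setup):

```shell
# Hypothetical paths and remote name - adjust to your own setup.
# Moves the folder with rclone directly rather than via the Finder/mount,
# using parallel transfers and writing a debug log for troubleshooting.
rclone move "/Volumes/ExternalSSD/MyFolder" "gdrive:MyFolder" \
  --transfers 4 \
  --progress \
  --log-file ~/rclone-move.log \
  --log-level DEBUG
```

Unlike a Finder copy onto the mount, this also retries failed chunks and leaves a log you can post here.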
Can you update to the latest and post the full debug log?
I can help with this at least.
Set this flag: --drive-chunk-size 64M
or 128M if you have a lot of free memory (the default is only 8M, which is far too low for high throughput).
It will help quite drastically in increasing your bandwidth utilization, often by as much as 20-40%. It only affects uploads.
If you prefer, you can set it in the config file instead of on the rclone command line, like this, under the gdrive remote: chunk_size = 64M
(no need to use both)
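Either form works; for instance (the mount point and paths below are illustrative, not taken from your script):

```shell
# Option 1: pass the flag on the mount command line (paths are placeholders)
rclone mount gdrive: ~/gdrive --drive-chunk-size 64M --daemon

# Option 2: set it once in the config file instead
# (typically ~/.config/rclone/rclone.conf), under the gdrive remote:
#
# [gdrive]
# type = drive
# chunk_size = 64M
```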
Be aware that this much memory can be used for EACH active transfer, so with the default 4 transfers that is 64M x 4 = 256M. Don't run out of memory or rclone will crash. Going above 128M has very little benefit, as the returns diminish the larger the chunk size gets.
@thestigma
Thanks for the tip with --drive-chunk-size. After some testing I am using 256M now, which gives me around 550 Mbit/s. Using 512M leads to higher bursts (up to 650 Mbit/s), but the average is roughly the same.
@ncw
Oops, sorry. I had set it to my organization only. It should be right now:
I set the --daemon-timeout flag after it crashed again last night.
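For reference, a sketch of how that flag might be combined with the chunk-size setting in the mount command (the mount point and the 10m value are illustrative guesses, not the original script):

```shell
# Illustrative mount command; --daemon-timeout sets the time limit for
# rclone to respond to the kernel (relevant on macOS/FreeBSD mounts).
# Mount point and timeout value are placeholders - tune to your setup.
rclone mount gdrive: ~/gdrive \
  --drive-chunk-size 256M \
  --daemon-timeout 10m \
  --daemon
```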
At the moment it is running fine.