Your rclone log doesn't indicate any errors, and you say that Explorer itself crashes.
Does Explorer actually crash, or does it just freeze up? It's important to be specific about this.
It would help if you described exactly how and when the crash happens. What triggers it, and is it repeatable?
This is not something I'm very familiar with, but a quick Google search indicates it performs its operations synchronously, which would make Explorer freeze until the operations are done. What is your reason for using this, and are you sure it isn't the cause of your problems?
This is pretty low, but it should only affect performance. It's unrelated to the issue, but unless you have very low bandwidth I'd suggest increasing it. 128M is the default, which is fine for most uses and performs well.
This shouldn't be necessary and will only limit your performance. The Gdrive pacer should handle this for you automatically, letting you burst the API within your limits without spamming it and getting rate-limited as a result.
Generally not recommended. Mounting into a non-empty folder can cause a lot of problems. Only do this if you are an advanced user and know exactly what it means and why you need it.
This is already the default value.
Also, you have --dir-cache-time set twice; only the last occurrence will actually be used.
A long --dir-cache-time is perfectly OK on Gdrive, since rclone uses polling to keep the cache up to date.
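To illustrate that point, here is a minimal sketch of how the two flags combine (the values are examples for illustration, not specific recommendations from this thread):

```shell
# A long --dir-cache-time is safe on Gdrive because --poll-interval
# (default 1m) picks up remote changes and invalidates cached
# directory listings, so the long cache never goes stale.
rclone mount gdrive_media_vfs: N: --dir-cache-time 168h --poll-interval 1m
```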
I've updated to the latest version (1.48.0)
And made changes to the mount code as specified:
I'm using NSSM to create a service to mount the drive: mount --log-file "C:\Users\Shenniko\.config\rclone\Rclone_Log_Files\Mount.txt" --log-level INFO --allow-other --dir-cache-time=160h --buffer-size=64M --vfs-read-chunk-size=1G --vfs-cache-max-age=5m --vfs-cache-mode writes gdrive_media_vfs: N: --config "C:\Users\Shenniko\.config\rclone\rclone.conf"
and am still having issues with the mount.
Usually, when the connection is stable, it shows the drive name as "gdrive_media_vfs", but when it's having issues it just shows "Local Disk". I click on it, get the spinning wheel, and Explorer becomes unresponsive...
I erroneously quoted your chunk-limit instead of your chunk-size earlier, sorry. It's the chunk-size that was a bit low (I see you later increased it). The default is 128M, and that's usually a good number for most. In any case, I doubt a very high or low value here is directly related to your problem.
I think it would be worth monitoring your memory usage while Sonarr is doing that operation; check that it's not going nuts on concurrent connections. A buffer-size of 64M is not unreasonable, but it's still per connection, so it ultimately depends on how aggressive the software is with concurrent file access. If that seems to be the case, you can either see if the software has settings to adjust this directly, or try a lower buffer size. The default is 16M, and that's rarely a significant limiting factor on performance.
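As a concrete illustration of lowering the buffer (a flag sketch only, not a full recommended command; the 16M value is rclone's default as mentioned above):

```shell
# Each open file/connection gets its own buffer, so with many
# concurrent transfers 64M per stream can add up quickly.
# Dropping back to the 16M default caps that worst-case memory use.
rclone mount gdrive_media_vfs: N: --buffer-size 16M --vfs-cache-mode writes
```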
Changed it to the default 16M and also added a few extras to see if that would help, but I'm still having issues. Navigating seems to work fine, but as soon as Sonarr has downloaded a TV show and tries to move it to the mapped drive, it just dies. Sonarr can no longer see the drive, and Explorer crashes when I try to access it. Even after closing Sonarr the drive is still FUBAR; restarting the service fixes it...
OK, so I think I've figured it out... it seems better with whatever flags I've tweaked...
But it seems that when a download finishes in Sonarr and it moves the file from local storage to the mounted Google drive, and I try to access the mount during that process, Explorer bugs out. It doesn't completely crash like it used to; it just locks up and takes a LONG time to unlock...
This is likely just a result of Sonarr being very aggressive with its file requests and drowning out your own list request, which Explorer has to wait on before continuing. This is expected behavior, even if it's not ideal. There is very little rclone can do on its end to stop a specific program from aggressively hogging all the available resources; it just fills requests from the OS to the best of its ability.
The best solution here might be to set up a secondary service account with a mount dedicated to Sonarr and related programs, so that your general-use access (like simply browsing the mount) doesn't get choked out. Both mounts can point to the same drive, of course. --poll-interval (default 1m) determines how quickly changes appear on your main mount. It can be lowered if needed, but this check uses an API call, so keep it reasonable. I've tested down to 10s with no problems, and that would use about 1% of the default API quota.
Or - if you can figure out a way to configure Sonarr not to run so many concurrent file operations, that would be an even cleaner solution that directly addresses the problem - but this entirely depends on Sonarr having such an option (without literally changing the code). I'm not familiar enough with Sonarr to say whether it does...
Basically just go into your config and make a copy of your existing setup with a new name.
Set up a different authorization (I think a service account would be ideal here if you know how, but OAuth would likely work too).
Make a copy of your existing mount command to use for the secondary remote. If you use write caching, you probably want a separate cache directory for it, and you obviously need to change the name of the remote you are mounting to match the new secondary remote. Not much else should need to change.
Then mount both. Set Sonarr and other such "bulk" programs that don't need to be very responsive to use the secondary mount, while you use the primary for general/manual and other light use and keep it from being overwhelmed.
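Putting those steps together, a sketch of what the two mounts might look like. The remote name gdrive_sonarr_vfs, the M: drive letter, and the cache directory path are my own placeholders for illustration, not values from this thread:

```shell
# Primary mount - general/manual browsing, stays responsive.
rclone mount gdrive_media_vfs: N: --vfs-cache-mode writes --poll-interval 1m

# Secondary mount - same Google Drive, but a hypothetical duplicate
# remote ("gdrive_sonarr_vfs") with its own authorization, dedicated
# to Sonarr and other bulk programs. If you use write caching, give
# it a separate --cache-dir so the two mounts don't share one cache.
rclone mount gdrive_sonarr_vfs: M: --vfs-cache-mode writes --cache-dir "C:\rclone-cache-sonarr"
```

With this split, Sonarr can saturate its own mount's request queue without blocking the directory listings Explorer issues against the primary mount.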