Sometimes when I mount my gcrypt drive and try to open a folder, the file explorer hangs (not responding) and gets stuck, and I have to terminate the command to fix it. Sometimes it hangs for around 10-20 seconds and then manages to open the folder. Sometimes the folder opens without any delay.
I have noticed it usually happens when new files have been uploaded to that folder, or if I have not opened that particular folder before.
I also use the rclone browser app on my android device, and that works without hanging and loads folders faster for some reason, using the same .conf file.
Is it due to some setting I have set wrong or anything like that?
What is your rclone version (output from rclone version)
go version: go1.12.3
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 10, 64 bit
Which cloud storage system are you using? (eg Google Drive)
Google Drive with cache and crypt
The command you were trying to run (eg rclone copy /tmp remote:tmp)
This is very likely a result of the mount having to re-list the directory.
A mount can not take advantage of --fast-list and can thus be slow at times when there are large directory structures. This is not an rclone bug - but an unfortunate limitation of how the OS talks to harddrives.
It shouldn't generally hang for as long as you experience though - but I have noticed it happen on occasion myself. The cache backend you are using could be aggravating the problem, depending on settings.
My own ideal solution is to use aggressive VFS caching and a pre-caching script on startup. In combination with the polling feature Gdrive supports, this means you can have all this data ready to use, and also keep it up to date as files change.
This effectively makes the drive as responsive as a normal harddrive, and I find that invaluable for general use. This way of doing things is only really viable if your Gdrive is a single-user system and files do not unexpectedly change on the Gdrive due to third-party uploads (I can explain in more detail later if interested).
So I think we can do one of two things here:
(1) We can try to dig into your original problem and find out if there is anything wrong we can fix.
If so - please show us your mounting script + rclone configuration file (with redacted passwords and crypt keys). A debug log (use -vv) of the phenomena as it happens would also be greatly beneficial.
(2) I can teach you the details (and minor caveats) associated with the system I use myself. I can also share my scripts if you happen to be on Windows. It's not really that complicated, but I would not recommend anyone use this blindly without understanding the caveats and limitations - because in the worst case you could risk file corruption if you do not understand and respect the limits. I will go into detail in a separate post if you are interested in this.
Regarding what you said about VFS caching, the polling feature, etc. - I am interested in knowing more about it. In my system only a single user (me) uploads files, but 2 other users have read-only access to the gcrypt drive.
And I will post a log with -vv the next time I encounter the problem.
Your config looks fine. Very basic, which for troubleshooting is ideal. I have nothing to correct there.
Just one unrelated side-note:
chunk_size (for Gdrive, not cache). I recommend you set this to at least 64M to greatly bolster upload speed. Be aware this will consume (64 x active transfers) MB of memory though. Use 128M if you have loads of memory to spare. More than that tends to be a waste (each doubling has sharply diminishing returns).
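If it helps, the same setting can also be passed as a flag on the command line instead of editing the config file. A sketch - the remote name and drive letter here are just placeholders for your own:

```shell
# --drive-chunk-size is the flag form of the Gdrive backend's chunk_size
# memory cost is roughly (chunk size x active transfers)
rclone mount gcrypt: X: --drive-chunk-size 64M
```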
Your settings should be keeping attribute and dir info for a good while, but I don't think the cache backend can take advantage of polling, so it may be the refreshing of this info that is taking a while sometimes.
It may be worth considering trying not to use cache. I migrated away from it a good while ago. So did Animosity in his Linux/Plex setup. It shouldn't really be needed for good streaming results and the cache backend is not being updated and has a fair amount of issues with it. It kind of depends on why you feel you need it to begin with though...
If you want to proceed more along the line of debugging, I think I need a debug log.
append to the end of your command:
This enables debug output for technical details and routes the output to a file. Try to keep the test short and trigger the problem quickly - debug logs become very long very fast, and the more irrelevant stuff there is in them, the harder it is to find the problem you need to look at.
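As a concrete sketch, the flags in question would be appended like this (the log file name is just a placeholder):

```shell
# -vv enables debug-level output, --log-file routes it to a file
-vv --log-file=rclone-debug.log
```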
Keep cached info about directories and file-attributes "forever"
Ask the Gdrive to provide a list of changes that happened every 10 seconds
The last setting there is fairly aggressive, but it still only uses about 1% of your API quota, and it acts as a safety net to reduce the chance of any critical problem happening even if you make a mistake.
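In flag form, the two settings described above would look something like this on the mount command (values illustrative - a very large duration like 9999h is the usual way of saying "effectively forever"):

```shell
# keep dir/attribute cache "forever"; poll Gdrive for changes every 10s
--dir-cache-time 9999h --poll-interval 10s
```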
So now you won't have to re-list everything all the time, but you will still have to list everything the first time after each restart of the mount or the system.
Additionally we can use a precaching script to get the full cache ready to go in a mere minute - and (if you want to) make that part of the normal startup procedure.
The basics are fairly simple. You need to:
Add --rc to your mount command to enable the remote-control function.
Send a "rclone rc vfs/refresh -v --fast-list recursive=true" to the RC once the drive has finished mounting.
This will make the drive pre-cache the whole thing and keep it in memory for you - and from there the polling will keep it up-to-date.
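A rough sketch of those two steps (shell form for illustration - the remote name, drive letter, and wait time are placeholders, and on Windows the same two commands would go in a batch file):

```shell
# 1. mount with the remote-control server enabled
rclone mount gcrypt: X: --rc --dir-cache-time 9999h --poll-interval 10s &

# give the mount a few seconds to come up
sleep 10

# 2. pre-cache the entire directory tree into the VFS dir cache;
#    from here on, polling keeps it up to date
rclone rc vfs/refresh recursive=true --fast-list -v
```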
I will share a script in next post to illustrate.
Your described use-case is ideal (only 1 user uploads).
Using these settings, here are the specific circumstances under which a critical problem could happen:
One user (or at least one other rclone instance) must modify an existing file on Gdrive (outside of the mount).
Before 10 seconds have passed (the polling interval), a second user must try to modify that same file through the mount, using out-of-date size information from the cache.
The operation must be one of the few that cannot automatically detect the wrong info, and it must modify the size specifically - potentially leading to the end of the file being chopped off.
As you can see, it's a very specific set of circumstances that has to happen, and there is zero risk if all uploads come from a single source (be that the mount, or an upload script). This is why I do not recommend this method for any drives that have multiple uploaders, as you'd have to be very careful.
I only have a 100 Mbps upload connection, so my upload speeds are usually saturated when transferring multiple files, but not when uploading a single file. So changing chunk_size should fix that, I guess?
The reason I use cache is because I read online (and maybe on this forum also) that it was the best setup for Plex streaming back when I set up rclone for the first time. Other than that I have no reason to be using cache.
I guess the limitations won't be a problem for me, but I want to mention one thing: I do not keep my drive mounted all the time. I just mount it when I want to watch something on my TV, and after watching I terminate it. The reason I do this is that I currently do not have a truly unlimited broadband connection, so I do it as a precaution to prevent unnecessary use of bandwidth. To be honest, though, I am not entirely sure whether keeping the drive mounted even uses bandwidth if nothing is happening - I just reckoned it must, so I don't keep it mounted.
For large files up to or over the chunk size - yes absolutely. Very noticeably too.
I don't use Plex myself, but Animosity is considered the resident veteran on this topic - and he moved away from using cache as well a good while back. I would recommend talking to him if you are unsure. But in general I would not use cache-backend unless you are very clear on why exactly you want it. It has benefits yes - but also downsides - and arguably that weight has shifted over time as the VFS has become better and better while the cache-backend has been dead in the water in terms of development (the original author disappeared). It will likely become irrelevant in the future as the VFS continues to evolve, or at least that is what I suspect (and NCW, the main rclone author seems to think along those same lines).
A mounted drive not being accessed will do nothing. No worries there.
The polling technically will send traffic each interval, but this is of very trivial size and should be irrelevant. We are talking about a handful of bytes. You could also increase the polling interval if you want. No real risk with that on a single-uploader system except that recently uploaded files (uploaded outside of the mount) may take up to (polling interval) seconds to become visible.
Plex is a discussion unto itself. I have limited knowledge of Plex, but I can tell you that you absolutely want to disable most automatic scanning except for basic detection of newly added files (which, by the way, should take advantage of the precached VFS and be very fast and efficient). Plex does a lot of advanced scanning if you let it do whatever it wants. Some of these scans may even download the entire library to generate preview files and advanced statistics. This is fine on a harddrive (which Plex was designed for); it is not fine on a cloud drive - especially if you have usage limits to think about.
I would again refer you to Animosity's recommended settings post - he goes into detail there on his Plex settings, I'm fairly sure. Also, talk to him directly if you need more info. This is all the help I can offer specifically on Plex, I'm afraid.
Absolutely. Basic detection of new additions can probably be done exclusively by looking at your existing cached info (which would be fast and free), but any scan that looks for more advanced info will be accessing the files - potentially downloading a lot or even all of them - and generating a lot of traffic. Absolutely take the time to get your Plex scanning settings right. See Animosity's guide on this.
If you ever want to run a "full scan" you should do so manually when you feel you have the resources and time to spare.