Gcrypt Mount hangs while opening folder

What is the problem you are having with rclone?

Sometimes when I mount my gcrypt drive and try to open a folder, the file explorer hangs (not responding) and gets stuck. I have to terminate the command to fix it. Sometimes it hangs for around 10-20 seconds and is then able to open the folder. Sometimes the folder opens without any delay.

I have noticed it usually happens when new files have been uploaded to that folder, or if I have not opened that particular folder before.

I also use the rclone browser app on my Android device, and that works without hanging and loads folders faster for some reason, using the same .conf file.

Is it due to some setting I have set wrong or anything like that?

What is your rclone version (output from rclone version)

rclone v1.49.3

  • os/arch: windows/amd64
  • go version: go1.12.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10, 64 bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive with cache and crypt

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount gcrypt: E:

This is very likely a result of the mount having to re-list the directory.
A mount can not take advantage of --fast-list and thus it can be slow at times when there are large directory structures. This is not an rclone bug, but an unfortunate limitation of how the OS speaks to hard drives.

It shouldn't generally hang for as long as you experience though - but I have noticed it happen on occasion myself. The cache backend you are using could be aggravating the problem, depending on settings.

My own ideal solution is to use aggressive VFS caching and a pre-caching script on startup. In combination with the polling feature Gdrive supports this means you can have all this data already ready to use, but also keep it up to date as files change.
This effectively makes the drive as responsive as a normal hard drive, and I find that invaluable for general use. This way of doing things is only really viable if your Gdrive is a single-user system and files do not unexpectedly change on the Gdrive due to third-party uploads (I can explain in more detail later if interested).

So I think we can do one of two things here:

(1) We can try to dig into your original problem and find out if there is anything wrong we can fix.
If so - please show us your mounting script + rclone configuration file (with redacted passwords and crypt keys). A debug log (use -vv) of the phenomena as it happens would also be greatly beneficial.

(2) I can teach you the details (and minor caveats) of the system I use myself. I can also share my scripts if you happen to be on Windows. It's not really that complicated, but I would not recommend anyone use this blindly without understanding the caveats and limitations - because in the worst case you could risk file corruption if you do not understand and respect the limits. I will go into detail in a separate post if you are interested in this.

Let me know how you want to proceed.

This is my conf file

[gdrive]
type = drive
scope = drive
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"xxxxxxxxxxxxxxxxxxxxxxxx","expiry":"2019-10-02T22:45:04.0657592+05:30"}
client_id = xxxxxxxxxxxxxxxxxxxxxxxx
client_secret = xxxxxxxxxxxxxxxxxxxxxxxx
chunk_size = 8M

[gcache]
type = cache
remote = gdrive:/Plex
plex_url = http://localhost:32400
plex_username = xxxxxxxxxxxxxxxxxxxxxxxx
plex_password = xxxxxxxxxxxxxxxxxxxxxxxx
chunk_size = 8M
info_age = 1d
plex_token = xxxxxxxxxxxxxxxxxxxxxxxx

[gcrypt]
type = crypt
remote = gcache:
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxx

My mount command is simply: rclone mount gcrypt: E:

Regarding what you are saying about VFS caching and the polling feature etc., I am interested in knowing more about it. In my system only a single user uploads files (me), but 2 other users have read-only access to the gcrypt drive.

And I will post the log with -vv whenever I next encounter the problem.

Your config looks fine. Very basic, which for troubleshooting is ideal. I have nothing to correct there.

Just one unrelated side-note:

Your chunk_size = 8M (for Gdrive, not cache): I recommend you set this to at least 64M to greatly bolster upload speed. Be aware this will consume (64 x active transfers) MB of memory though. Use 128M if you have loads of memory to spare. More than that tends to be a waste (each doubling has sharply diminishing returns).
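
For illustration, that would just be changing this one line in the [gdrive] section of your config (everything else stays as it is):

chunk_size = 64M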

Your info_age = 1d

should be keeping attribute and dir info for a good while, but I don't think the cache backend can take advantage of polling, so it may be the refreshing of this that is taking a while sometimes.

It may be worth considering not using cache at all. I migrated away from it a good while ago, and so did Animosity in his Linux/Plex setup. It shouldn't really be needed for good streaming results, and the cache backend is not being updated and has a fair number of issues. It kind of depends on why you feel you need it to begin with though...

If you want to proceed more along the line of debugging, I will need a debug log. Append to the end of your command:

-vv
--log-file=MyLogFile.txt

This enables debug output for technical details and routes the output to a file. Try to keep the test short while still triggering the problem - debug logs become very long very fast, and the more irrelevant stuff in there, the harder it is to find the problem you need to look at.
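
For example, the full command would then be (the same mount command as before, just with logging added):

rclone mount gcrypt: E: -vv --log-file=MyLogFile.txt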

In regards to the alternate way of VFS caching, here is the basic setup. Note this only applies to mounts, as that is currently the only way to use the VFS system:

--attr-timeout 8760h ^
--dir-cache-time 8760h ^
--poll-interval 10s ^

This entails:

  • Keep cached info about directories and file-attributes "forever"
  • Ask the Gdrive to provide a list of changes that happened every 10 seconds
    The last setting there is fairly aggressive, but it still only uses about 1% of your API quota, and it acts as a safety-net to reduce the chance of any critical problem happening even if you make a mistake.

So now you won't have to re-list everything all the time, but you will still have to list it the first time after each restart of the mount or of the system.

Additionally we can use a precaching script to get the full cache ready to go in a mere minute - and (if you want to) make that part of the normal startup procedure.

The basics are fairly simple. You need to:

  • Add --rc to your mount command to enable the remote-control function (see the combined command sketch after this list).
  • Send a "rclone rc vfs/refresh -v --fast-list recursive=true" to the RC once the drive has finished mounting.
    This will make the drive pre-cache the whole thing and keep it in memory for you - and from there the polling will keep it up-to-date.
    I will share a script in the next post to illustrate.
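
Put together, the full mount command might look something like this (a sketch only - 5572 is rclone's default RC port, and the test/test RC credentials are just example values that match the warmup script below):

rclone mount gcrypt: E: ^
--attr-timeout 8760h ^
--dir-cache-time 8760h ^
--poll-interval 10s ^
--rc ^
--rc-addr localhost:5572 ^
--rc-user test ^
--rc-pass test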

LIMITATIONS/CAVEATS:
Your described use-case is ideal (only 1 user uploads).

Using these settings, here are the specific circumstances under which a critical problem could happen:

  • One user (or at least one other rclone instance) must modify an existing file on Gdrive (outside of the mount)
    and
  • Before 10 seconds have passed (the polling interval), a second user must try to modify that same file through the mount, using out-of-date size information from the cache
    and
  • The operation must be one of the few that cannot automatically detect the wrong info, and it must modify the size specifically - potentially leading to the end of the file being chopped off.

As you can see, it's a very specific set of circumstances that has to happen, and there is zero risk if all uploads come from a single source (be that the mount, or an upload script). This is why I do not recommend this method for any drives that have multiple uploaders, as you'd have to be very careful.

I only have a 100 Mbps upload connection, so my upload speeds are usually saturated when transferring multiple files, but not when uploading a single file. So changing chunk_size will fix that, I guess?

The reason I use cache is that I read online, and maybe on this forum also, that it was the best setup for Plex streaming back when I set up rclone for the first time. Other than that I have no reason to be using cache.

Scripting to do precaching:

While the command you need to send is only really what I described above (and you could easily do it manually with a single copy-paste), you probably will want to automate it.

For Linux-specific scripting, I'd suggest stealing (ahem, borrowing :P) Animosity's scripts. He shares them freely here.


For Windows (which I suspect applies to you, but I will add it for the sake of completeness/illustration), here is an example of what I use:

@echo off
:: If window is not minimized, restart it as minimized
::if not DEFINED IS_MINIMIZED set IS_MINIMIZED=1 && start "" /min "%~dpnx0" %* && exit

::Variables
set driveletter=%1
set RCport=%2

:: Check that the folder is valid, otherwise wait until it is
echo Waiting for %driveletter%: to be ready ...

:LOOP1
vol %driveletter%: >nul 2>nul
if errorlevel 1 (
    rem Print a waiting dot without a newline
    echo " "| set /p dummyName=.
    timeout /t 1 > nul
    goto LOOP1
) else (
    echo.
    echo Drive %driveletter%: OK!
)

echo.
echo Warming up cache...
echo This may take a few minutes to complete
echo You may use the cloud-drive normally while this runs
echo.

echo Awaiting completion-message from RC...
rclone rc vfs/refresh -v --fast-list recursive=true --rc-addr localhost:%RCport% --rc-user test --rc-pass test --timeout 1h

::alternative slower method that does not use the remote control
::echo.
::echo verifying old-school ...
::%driveletter%:
::dir /s >nul

echo.
echo Cache warmup for %driveletter%: OK!
echo.

echo Closing window in 5 seconds...
timeout /t 5 >nul

Not as complicated as it looks. The only really crucial bits are the loop that waits for the mount to be ready - and the vfs/refresh command. The rest is just details.

Called from the mounting script like so (truncated example):

::Variables
set driveletter=X
set remotename=TD1Crypt
:: rclone's default RC port - must match the port used in --rc-addr on the mount
set RCport=5572

start /b rclone mount blah blah....

::Start cache warmup
call scripts/VFSCacheWarmup.bat %driveletter% %RCport%

I guess the limitations won't be a problem for me. But I want to mention one thing: I do not keep my drive mounted all the time. I just mount it when I want to watch something on my TV, and after watching I terminate it. The reason I do this is that currently I do not have a truly unlimited broadband connection, so I do this as a precaution to prevent unnecessary usage of data bandwidth. But to be honest I am not entirely sure if keeping the drive mounted even uses bandwidth when nothing is happening. I just reckoned it must be doing that, so I don't keep it mounted.

So I also want to ask: does this method use more data bandwidth? And what about pre-caching - that must use more data bandwidth, or no?

For large files up to or over the chunk size - yes absolutely. Very noticeably too.

I don't use Plex myself, but Animosity is considered the resident veteran on this topic - and he moved away from using cache as well a good while back. I would recommend talking to him if you are unsure. But in general I would not use the cache backend unless you are very clear on why exactly you want it. It has benefits, yes - but also downsides - and arguably that weight has shifted over time as the VFS has become better and better while the cache backend has been dead in the water in terms of development (the original author disappeared). It will likely become irrelevant in the future as the VFS continues to evolve, or at least that is what I suspect (and NCW, the main rclone author, seems to think along the same lines).

A mounted drive not being accessed will do nothing. No worries there.
The polling technically will send traffic each interval, but this is of very trivial size and should be irrelevant - we are talking about a handful of bytes. You could also increase the polling interval if you want. There is no real risk in that on a single-uploader system, except that recently uploaded files (uploaded outside of the mount) may take up to one polling interval to become visible.
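
For example, relaxing it to one check per minute is just a change to that single flag on the mount command (any duration works here):

--poll-interval 1m ^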

I think Plex tries to access the drive again and again, which might cause some bandwidth usage. I guess I will have to conduct some sort of test using a network monitoring tool overnight.

Technically it's traffic, yes.
But again, listings are very small, so if you have the usage quota to even watch a single video to begin with, I really doubt this will be a problem for you.

A full pre-cache may be on the order of a few MB I would guess (an educated guess - I have never measured this since it is of irrelevant size for me), and it would only be needed once per remount or restart.

Each poll would only be a few bytes.

The bandwidth thing shouldn't be a problem then with regards to pre-caching or polling, but I might have to look into Plex trying to access the drive and increasing usage.

Plex is a discussion unto itself. I have limited knowledge of Plex, but I can tell you that you absolutely want to disable most automatic scanning, except for basic detection of newly added files (which should take advantage of the precached VFS, by the way, making it very fast and efficient). Plex does a lot of advanced scanning if you let it do whatever it wants. Some of these scans may even download the entire library to generate preview files and advanced statistics. That is fine on a hard drive (which Plex was designed for); it is not fine on a cloud drive - especially if you have usage limits to think about.

I would again refer you to Animosity's recommended settings post. He goes into detail there on his Plex settings, I'm fairly sure. Also - talk to him directly if you need more info. This is all the help I can offer specifically on Plex, I'm afraid :)

Absolutely. Basic detection of new additions can probably be done exclusively by looking at your existing cached info (which would be fast and free), but any scans that look for more advanced info will be accessing the files - potentially downloading a lot or even all of them - and generating a lot of traffic. Absolutely take the time to get your Plex scanning settings right. See Animosity's guide on this.

If you ever want to run a "full scan" you should do so manually when you feel you have the resources and time to spare.

All right, I will look into Animosity's settings. His setup is a little bit different I guess, with mergerfs and whatnot, but I will try to find something that can help me, or just talk to him directly.

Regarding getting rid of cache: is it as simple as editing the conf file, or do I need to do something else?

You can disregard that stuff. That is not really related.

What you want to look at is specifically:

  • His precaching script
  • His recommended Plex settings

Those will be relevant to you regardless of his mergerFS stuff.
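
And as for getting rid of the cache backend itself: essentially yes, it is just a config edit. A minimal sketch of what that could look like (assuming you keep the same crypt settings and simply point gcrypt straight at the gdrive:/Plex path that gcache currently wraps):

[gcrypt]
type = crypt
remote = gdrive:/Plex
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxx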