Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

I use mergerfs just to avoid the added issues of not being able to hardlink, and I don't have to deal with the movement of files/partials/odd issues that happen if I write directly to the mount.

It adds a very small bit of overhead, so it's not even noticeable, to be honest.
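For reference, a minimal sketch of what such a mergerfs mount can look like, assuming the rclone remote is mounted at /mnt/gdrive (the local branch and merged mount point match the paths used later in this thread; the exact options are an assumption, not the original command):

# local branch first, cloud branch second; category.create=ff ("first found")
# sends every new write to /data/local, so hardlinks stay on one disk
mergerfs /data/local:/mnt/gdrive /gmedia -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff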

Sorry to bump this thread, but I have a question about the point below:

I use the mergerfs so I can use hardlinks instead of having Sonarr/Radarr copy, which uses double the space and IO when files get completed.

I'm following your setup with rclone/mergerfs; however, Sonarr appears to be copying the file rather than using hardlinks. I have the hardlink setting enabled in Sonarr, and it was working fine prior to using mergerfs - any ideas?

You need to have all the files on the same disk. So for me, I have a /data/local directory.

In that directory, I have my torrents:

felix@gemini:/data/local$ ls -al
total 20
drwxrwxr-x  5 felix felix 4096 Oct 22 10:49 .
drwxrwxr-x 16 root  felix 4096 Oct 18 07:45 ..
drwxrwxr-x  3 felix felix 4096 Oct 22 10:49 Radarr_Movies
drwxrwxr-x  5 felix felix 4096 Sep  2 10:25 torrents
drwxrwxr-x  7 felix felix 4096 Oct 22 09:14 TV

So based on my setup, it always writes first to /data/local. My mergerfs combined mount is /gmedia

So you should be able to test:

felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ln hosts test
felix@gemini:/gmedia$ ls -al
total 28
drwxrwxr-x  5 felix felix 4096 Oct 22 14:34 .
drwxr-xr-x 26 root  root  4096 Aug 31 07:18 ..
-rw-r--r--  2 felix felix  345 Oct 22 14:34 hosts
-rw-rw-r--  1 felix felix    0 Apr 14  2018 mounted
drwxrwxr-x  1 felix felix    0 Jun 17 10:24 Movies
drwxrwxr-x  3 felix felix 4096 Oct 22 10:49 Radarr_Movies
-rw-r--r--  2 felix felix  345 Oct 22 14:34 test
drwxrwxr-x  5 felix felix 4096 Sep  2 10:25 torrents
drwxrwxr-x  7 felix felix 4096 Oct 22 09:14 TV
drwxrwxr-x  1 felix felix    0 Jun 30 12:55 TV_Ended
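The link count of 2 in the second column for hosts and test already shows the hardlink took; to make it explicit, a stat check (an added suggestion, not part of the original post) confirms both names share the same inode:

# %h = hard link count, %i = inode number, %n = file name
stat -c '%h %i %n' hosts test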

Right, I think I'm with you.

So Sonarr & Radarr write to /data/local rather than /gmedia, is that right? Therefore, when the files are copied to gdrive overnight, you then have two copies, one in your torrents folder and one in gdrive?

I have a very similar setup, with a mergerfs mount at /gmedia, which is a combination of /mnt/gdrive (rclone remote) and /home/media (local), both containing movies/tv folders, and the local one also containing a downloads folder.

Sorry, I may have confused you with my explanation.

All my items point to /gmedia

By using mergerfs underneath, /data/local is the same disk being written to, so my hardlinks work. You can see in my output that I did all my testing/copies on /gmedia.

Ah, I see. I think the problem is that my download folder is set to the local path rather than the mergerfs mount, while Radarr/Sonarr are set to the mergerfs mount, which is why it's copying.

Surely the same could be achieved by setting both Deluge and Sonarr to the local path? What benefit is there in setting them to the mergerfs mount?

The benefit is that the local and cloud items are all consolidated, so all the local shows and cloud shows sit under a single mount point for me, /gmedia.

When I move stuff to the cloud each night, there is no changing of paths in Plex or anything else.
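A hedged sketch of what that nightly move can look like; the remote name (borrowed from a later post in this thread) and the exclude/age filters are assumptions, not the original script:

# move settled local files to the cloud; paths stay identical under /gmedia,
# so Plex never notices the files changed disks (torrents stay local to keep seeding)
rclone move /data/local encgdrive: --exclude "torrents/**" --min-age 1d --log-file /var/log/rclone-move.log -v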

I wanted to report back that I finally got a chance to look at my issues, and everything seems to be running smoothly now. I have everything mounted with no errors. Now on to the nightly sync!

Thank you again for all the time and effort sharing and helping others!


Hi there,
I finally managed to update my configuration and played with the different caches. I came to the same conclusion: a mount with vfs-read-chunk has a much faster start time for streams than a cache mount. I am quite confident in my new configuration; the only thing that bothers me is that your fresh listing times are much faster than mine, although I have fewer files.

time find . | wc -l
18649

real    19m52,723s
user    0m0,472s
sys     0m0,936s

and for a second run:

time find . | wc -l
18649

real    0m2,255s
user    0m0,132s
sys     0m0,452s

which looks good.

My mount command:

rclone mount encgdrive: /home/nas/.gdrive/ --read-only --allow-other --buffer-size 100M --dir-cache-time 72h --drive-chunk-size 32M --fast-list --vfs-read-chunk-size 20M --vfs-read-chunk-size-limit 1280M

I have a gigabit connection as well, although my current router throughput is just around 300 Mbit/s down/up. But I don't see how this could be the problem. I use my own client ID.
Any ideas?

I'm guessing you have many more directories than me, perhaps.

Try

felix@gemini:/gmedia$ find . -mindepth 1 -type d | wc -l
911
felix@gemini:/gmedia$ find . -mindepth 1 -type f | wc -l
27649

That's my directory and file count. I have a few folders I drop movies into instead of keeping each movie in its own folder, so that part is a bit faster.

If you enable --rc you can use

rclone rc --timeout=1h vfs/refresh recursive=true 

to fill the dir-cache.

Otherwise --fast-list will not give you any benefit (and will silently be ignored).
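Applied to the mount command posted above, that looks roughly like this (the backgrounding and the sleep are additions to illustrate the ordering, not from the original posts; adjust to your service manager):

# add --rc so the mount exposes the remote-control API
rclone mount encgdrive: /home/nas/.gdrive/ --rc --read-only --allow-other --buffer-size 100M --dir-cache-time 72h --drive-chunk-size 32M --fast-list --vfs-read-chunk-size 20M --vfs-read-chunk-size-limit 1280M &

# give the rc server a moment to come up, then prime the whole dir cache
sleep 5
rclone rc --timeout=1h vfs/refresh recursive=true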


Yes, I do…

find . -mindepth 1 -type d |wc -l
4194
find . -mindepth 1 -type f |wc -l
14454

But I thought Gdrive handles directories as files? Or is it slower because it asks with "fast-list" for the files in every directory? And because there are nested directories, it has to ask for every subdirectory again?

Could be. I wanted to try again without fast-list, as I read it is just faster when there are lots of files in each directory, but I am hesitant right now to wait another 20 minutes for each try to build the listing cache…

From a problem perspective, I wouldn't worry too much, to be honest. Each directory is an API hit, so I combined a bunch of my old movies into a single folder.

felix@gemini:/gmedia/Movies$ ls |wc -l
1944

So instead of having 2000 directories, I have one, and with fast-list, that one comes back quickly. Plex only supports having a lot of files in one folder for Movies, not TV Shows.

The only time that full list comes into play is during the first Plex scan, which would be slow until the cache is in memory.

So is it better to have all files in one folder, instead of a separate folder for each movie?

Better would be a relative term. It would mean fewer API hits and a faster listing. It might not be good for other things, but in my case, I decided to give it a try 🙂

All the movies I have there are not managed by Radarr since Radarr still doesn’t support flat folders.

Did you actually compare the mount with fast-list and without? I read https://github.com/ncw/rclone/issues/2542 and it looks like a normal mount does not support fast-list yet… although it might with the vfs…

vfs/refresh: Refresh the directory cache.
This reads the directories for the specified paths and freshens the directory cache.
If no paths are passed in then it will refresh the root directory.

rclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg

rclone rc vfs/refresh dir=home/junk dir2=data/misc
If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.

So maybe the original listing will use it as well? My server is in use currently, will try it out as soon as possible.

mount can’t use --fast-list, except when using the vfs/refresh rc function. This is because --fast-list only works on directory trees, not individual folders.
vfs/refresh can utilize --fast-list when the recursive=true argument is given, because this will load the complete directory tree for the given paths.

Also mount always uses vfs.
vfs is the common base package for the commands mount, cmount and serve (and maybe others I forgot).
All options prefixed with --vfs- will work on all these commands.
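For example (a hedged illustration, not a command from this thread), the same vfs flags carry over unchanged to serve:

# the same --vfs-/--dir-cache flags used on the mount work here too
rclone serve http encgdrive: --addr :8080 --read-only --dir-cache-time 72h --vfs-read-chunk-size 20M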

OK, let me try to understand this:

--fast-list only works on directory trees, not individual folders.

So it doesn't help if rclone requests a list of all files in a specific folder, but it would help to get the directory tree.

If I have the following structure

a
  aa
    file.a
    file.b
  ab
    file.c
    file.d
    file.e
  ac
    aca
      file.f
      file.g
etc.

and use vfs/refresh, it will use fast-list to get

a
  aa
  ab
  ac
    aca

and then request the files in the folders? So 1 API hit for fast-list plus 4 API hits to get the files in the folders?
But if I just use mount and list the files (for example via find .), it will use 5 API hits to get all files. The only difference is that it will be faster to get the directory tree with fast-list (for complex/big trees) than to iterate through all folders (which rclone has to do even then, because it needs the files in the folders?)

I am not quite sure I understand fast-list… Is there any documentation outside of rclone to help me get an understanding of this concept?

No, a "normal" list and fast-list both request (list) the directory entries. The normal version can only list one folder at a time, but fast-list does many at once (the number of API calls depends on the remote used).

Because of this, the find . times mostly depend on the number of directories, not the number of files in the tree.
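If you want to see the difference without the mount in the way, you can time both listing styles directly against the remote (a sketch reusing the remote name from earlier; timings will obviously vary):

# one API call per directory
time rclone lsf -R encgdrive: > /dev/null
# batched listing of the whole tree
time rclone lsf -R --fast-list encgdrive: > /dev/null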

Edit:

The rclone documentation describes fast-list

Holy ****,

OK, I unmounted and mounted again to clear the cache, and then did not enter the mount at all.

time rclone rc vfs/refresh recursive=true
real 0m35,834s
user 0m0,020s
sys 0m0,024s

time find /home/nas/.gdrive/. |wc -l
18657
real 0m1,955s
user 0m0,100s
sys 0m0,324s

So that basically reduced the time for a fresh listing from 19+ minutes to about 35 seconds…
So good. Thank you very much!
And if I understand this correctly, mount cannot do this directly because it doesn't know it should build the complete file/folder structure when you populate the cache with find .
Awesome, thanks again!