Questions about rclone cache and GD + mergerfs

What is the problem you are having with rclone?

Currently I use a Google Drive mount split into two remotes (one drive, two configurations): one remote config is called mkv (mounted with --include "*.mkv"), and the other is called poster (mounted with --exclude "*.mkv"). My Google Drive's structure is FOLDER-[MOVIE, POSTER, INFO]. The reason I do this is to make sure that all the info and posters are fully downloaded (or cached?), but not the movie files, so that I can use mergerfs to merge the vfs-writes mkv mount and the vfs-full poster mount, and Emby can show all the posters as fast as possible without using Google's API. To put it plainly: read the posters from local disk only, instead of from Google Drive. However, I didn't find any cached files in the /root/.cache/rclone folder until I opened one poster and it showed up (but only that movie's info). And although I do a library scan in Emby, no posters or info are downloaded (cached), and the seemingly "cached" posters show up insanely slowly in Emby, as if they are being refetched from Google. So how can I pre-download (pre-cache) all the info and posters of a drive mount and make Emby show all the posters and info instantly?

What is your rclone version (output from rclone version)

rclone v1.55.1

  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.3
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

The service files and config are as follows.
The mkv mount:

[Unit]
Description=Rclone
AssertPathIsDirectory=/home/mkv
Requires=rclonex.service
After=rclonex.service

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount mkv: /home/mkv  --allow-other --allow-non-empty --vfs-cache-mode writes --umask 002 --transfers 10 --buffer-size 32M  --dir-cache-time 24000h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --poll-interval 30s --filter-from /home/filtermkv.txt

ExecStop=/bin/fusermount -u /home/mkv
Restart=on-abort
User=root

[Install]
WantedBy=default.target

The poster mount:

[Unit]
Description=Rclone
AssertPathIsDirectory=/home/poster
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount poster: /home/poster  --allow-other --allow-non-empty --dir-cache-time 24000h --umask 002 --vfs-cache-mode full --vfs-cache-max-size 40G --bwlimit-file 10M --vfs-cache-max-age 99999h --vfs-cache-poll-interval 5m --vfs-write-back 9999999m --no-checksum --no-modtime --read-only --filter-from /home/filterposter.txt

ExecStop=/bin/fusermount -u /home/poster
Restart=on-abort
User=root

[Install]
WantedBy=default.target
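
The two --filter-from files just include or exclude the movie files, roughly along these lines (illustrative; not the exact files):

/home/filtermkv.txt (movie files only):

+ *.mkv
- *

/home/filterposter.txt (everything except the movie files):

- *.mkv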

The mergerfs service:

[Unit]
Description=mergerfs mount
Requires=rclonem.service
After=rclonem.service

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs /home/poster:/home/mkv /home/gdrive -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full,dropcacheonclose=true
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target

The rclone config contents with secrets removed.

[poster]
type = drive
scope = drive.readonly
token = ***
team_drive = ****
root_folder_id = 

[mkv]
type = drive
scope = drive.readonly
token = ***
team_drive = ***
root_folder_id = 

You are using vfs cache mode writes and I think you want full instead.

Writes only caches writes that happen and nothing else.

Allow non empty is generally not good to use, as it allows over-mounting and hiding things.

Thank you Animosity, but I do indeed use vfs-cache-mode full on the poster drive; it just doesn't seem to be working. Can the posters be fully downloaded for instant use, without needing to be opened once to trigger the download? And even though it is set to full mode, the loading time of posters that have already been opened several times and are cached under /root/.cache/rclone is still pretty long; it seems like they are being downloaded from Google Drive again.

Sorry, I missed that on the command line.

You should see a folder called 'cache' and it should have the directory structure and whatnot of files in there.

To "precache" everything, you have to run some command and read every file as there's no way to do it otherwise.

So you can loop through, find everything, and cat it based on some filters, I'd imagine; that would work to 'load' everything into the cache. Or just let it fill up naturally.
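
Something along these lines would do it, for example (just a sketch using your /home/poster mount path; the redirect to /dev/null is because you only need the read to happen):

find /home/poster -type f -exec cat {} + > /dev/null

That reads every file on the poster mount once, so the full VFS cache pulls each one down into /root/.cache/rclone.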

I don't put metadata on my cloud remote as I just keep all that local.

If I read the whole library through once and the cache folder contains all the metadata, will reads of the metadata be redirected to the cache folder without much latency? Or do I need to set the Emby library path to the cache folder and keep reading the mount folder in case new metadata is added? And how do you keep the metadata local? I see that when Emby scans, it grabs metadata (if the movie doesn't have any / is a new movie) and puts the posters and info files into the mount folder, and they just go to the cloud drive. How can I grab metadata and keep it local instead of having it uploaded to the drive? That might be the perfect solution.

Unless you changed your library settings to keep the metadata with the media, it keeps it in /var/lib/emby or somewhere close to that. I don't use Emby but Plex, which is basically the same in that regard.

Thank you so much for your help! I will do some tests for a week and see how it goes!

After my experiments, I finally found the perfect solution. I download all the files except the movie files (so only posters and info files are downloaded) to disk by running rclone copy, and a script makes sure that every day the new posters are fetched automatically from Google Drive to disk. Now, after a full library scan, the posters show up like lightning. I am so excited! Thank you so much, Animosity022! Your personal configurations and all that shared experience really impressed me. Thank you!
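
For the record, the copy command and the daily job look roughly like this (the local path and the schedule are placeholders rather than my exact setup):

rclone copy poster: /home/metadata --exclude "*.mkv" --fast-list

0 4 * * * /usr/bin/rclone copy poster: /home/metadata --exclude "*.mkv" --fast-list

The first line is the one-off copy of all the posters and info files to disk; the cron entry repeats it every day at 04:00 to pick up anything new.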
