Rclone + encryption + Google Drive + Plex + Windows Server 2016

What is the problem you are having with rclone?

Hello everyone,

I am brand new to Rclone.
I would like to point out that I am French and that I speak English very poorly.

Currently I use Mountain Duck + Cryptomator + Plex + Windows Server 2016 + Google Drive, but I have to change my solution because I often get banned when new items are scanned.
I was unaware that Rclone existed on Windows; I am testing this solution and I must say that, for the moment, I am very impressed.

Here are my needs and what I want to do:

  • Use Plex with Google Drive.
  • Encrypt data on the fly.
  • Performance comparable to Mountain Duck for download/upload (15-20 MB/s per transfer, with encryption/decryption).
  • A metadata cache to prevent Plex from getting me banned.
  • A metadata cache to speed up Plex scans (a scan of my well-organized 30 TB library currently takes 1h30 even when no new items are added).
  • Smooth operation for direct play in Plex, but also for transcoded playback (for viewers with poor connections).

Here is what I have done so far.

  • Creation of a shared folder on Google Drive
  • Creation of a "Remote" on Rclone
  • Creation of a second "Remote" which encrypts the first
  • Installation and use of the latest version of WinFsp

Quick test:

  • Mounted a network drive via: rclone mount mydrive:/ x: -> OK
  • Tried a configuration seen in one of the forum topics: rclone mount --vfs-cache-mode full mydrive:/ x: --fuse-flag --VolumePrefix=\server\share

I admit that I don't really understand all of these options, but what I want is to be able to access my drive via a UNC path.
My second test does indeed create a UNC path, which suits me perfectly because it also encrypts the data, and the write speed is about 15-20 MB/s, which works very well for me.

I would like advice on how to optimize my current configuration (which is quite basic), especially regarding the mounting of my network drive.

I would like someone here to suggest a mount configuration optimized for the needs listed at the start of this topic.

My SSD is 500 GB, but I only want to allocate a maximum of 80 GB of cache to Rclone.
What matters is that Rclone always keeps the metadata cache (AppData\Local\rclone\vfsMeta) to avoid Google bans.
Is the full file cache (AppData\Local\rclone\vfs) really useful? If its only purpose is to speed up re-reading a few files, it is of no use to me.

I would also like to know if there is anything special to do in rclone.conf compared to the base config; if so, what would be worth doing?

Thank you all :slight_smile:

What is your rclone version (output from rclone version)

1.56.2

Which cloud storage system are you using? (eg Google Drive)

Google Drive

hello and welcome to the forum,

--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)

for streaming media, i do not use a vfs cache. i use --vfs-cache-mode=off
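
for example, a minimal cache-less mount might look like this (just a sketch; "SerieCrypt" and x: are placeholders for your crypt remote and drive letter):

rclone mount SerieCrypt: x: --vfs-cache-mode off -v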

cannot see into your computer, cannot see your config file.
so post it and redact id/secret/password/token.

@VBB, your setup looks similar to the OP, what do you suggest?

Hello, and thank you,

"what kind of bad connections, internet or local?"
My Plex server and Rclone have a very good connection.
I was talking about people who watch my Plex server remotely and have a bad connection (Plex adapts to their connection by lowering the quality, thanks to transcoding).

This command "--vfs-cache-mode = off" allows to deactivate the cache (which keeps the whole file in memory) but does not deactivate the metadata cache?
If so, then I see that each file in the metadata cache is 1kb, so I'm very very large with 80gb right?

rclone.conf:

[Serie]
type = drive
client_id = ***
client_secret = ***
scope = drive
token = {"access_token":"","token_type":"Bearer","refresh_token":"","expiry":"***"}
team_drive = ***
root_folder_id =

[SerieCrypt]
type = crypt
remote = Serie:
password = ***
password2 = ***

as per the documentation:
"it is recommended to point the crypt remote to a separate directory within the wrapped remote"
so i suggest using a subfolder, like so:
remote = Serie:plex
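
for example, applied to your config above (the subfolder name "plex" is just an example):

[SerieCrypt]
type = crypt
remote = Serie:plex
password = ***
password2 = ***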

as for the other questions, @VBB and @Animosity022 will know.

Sounds like you have the basics worked out. Like @asdffdsa and myself, I would suggest you start without a cache and see how this works for you. Here's the mount command I currently use:

rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --poll-interval 0 --rc --read-only --user-agent (random name) --vfs-read-chunk-size 1M -v (name of crypt remote): (drive letter):

Note that my mount is read-only and static, as I do not make changes to it directly. I upload via RcloneBrowser and make final adjustments via a separate, basic read/write mount.

Once the mount is running, I follow it up with this command:

rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent (random name)

This primes the mount by building a sort of metadata cache of all file and folder attributes in memory. It speeds things up tremendously in Windows Explorer. Note that once you stop the mount, the cache is gone too, and you have to prime again on re-mount.
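
As a sketch, here's how the two commands could be chained in a batch file (the remote name, drive letter, and the 15-second wait are my placeholders; adjust to your setup):

@echo off
:: start the mount in the background; --rc is required for the refresh call below
start "" /b rclone mount --attr-timeout 5000h --dir-cache-time 5000h --rc --read-only -v SerieCrypt: X:
:: give the mount a few seconds to come up before priming it
timeout /t 15 /nobreak
:: prime the in-memory directory/attribute cache
rclone rc vfs/refresh recursive=true --timeout 30m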

That's pretty much it as far as Rclone is concerned. In Plex, make sure to uncheck "Perform extensive media analysis during maintenance" in Scheduled Tasks. For each individual library, uncheck "Enable video preview thumbnails" and "Enable intro detection".

If you want to reduce your scan times, make sure to use the new Plex Scanner/Agent combo. Once I switched, I went from over three hours to a few minutes.

Thanks to you and @asdffdsa for all the explanations.

I just created a subfolder and encrypted it.
So my shared drive itself is not encrypted, and it contains an encrypted subfolder.
The transfer seems to have worked correctly.

[parent folder]
type = drive
client_id = *****
client_secret = *****
scope = drive
token = {"access_token":"","token_type":"Bearer","refresh_token":"","expiry":"*****"}
team_drive = *****
root_folder_id =

[subfolder]
type = drive
client_id = *****
client_secret = G*****
scope = drive
token = {"access_token":"","token_type":"Bearer","refresh_token":"","expiry":"2021-10-11T19:19:52.65022+02:00"}
team_drive =

[ServeurScrypt]
type = crypt
remote = parent:folder:subfolder
password = *****
password2 = *****

I still have to analyze your configuration, but it seems quite complicated to understand.
I just ran some quick tests with this mount: rclone mount --vfs-cache-mode full --vfs-cache-max-size 5G ServeurScrypt:/ x: (5G just for testing)
It turns out that when the cache exceeds 5 GB, it evicts part of the file cache but also part of the metadata cache.
Wouldn't the easiest approach be to disable the large file cache (which keeps whole files on disk) and keep only the metadata cache?
If I set --vfs-cache-mode off, there is no cache at all anymore.
The advantage would be to keep only the metadata cache, to speed up scans and avoid Google bans (because I believe that is the only thing that causes bans).

Regarding your command for "priming the mount by creating a sort of metadata cache", when should I run it?
If I stop the mount, I will lose this "metadata cache".
If I restart the mount and run your command again, will I get back all the cache that was built before?

The goal is to use as little disk space as possible while keeping the metadata cache and avoiding Google bans.

Here is my current batch file for the scan:
echo -- Program start %date%-%time% -------------------------------------------->> C:\Scripts\log_ScanPlexSerieAnimation.log

"C:\Program Files (x86)\Plex\Plex Media Server\Plex Media Scanner.exe" --scan --refresh --section 24 >> C:\Scripts\log_ScanPlexSerieAnimation.log

echo -- Program end %date%-%time% -------------------------------------------------->> C:\Scripts\log_ScanPlexSerieAnimation.log

I don't understand: "make sure to use the new Plex Scanner/Agent combo."

I would like the network mount to be read/write so that I can download and upload everything through it.
I will put this configuration on two servers (one for the movie server, the other for the series server).

thank you so much

That's why I suggested trying it without a cache first. What you call a metadata cache is basically what the prime command creates, but it doesn't take up any space, because it's all kept in memory. That "cache" is static, though, meaning it won't update unless you run the prime command again. This works really well, as long as you run it each time before a Plex scan. I do this once a day.

Look at each of your Plex libraries to see which version of the scanner they're using. If they're older libraries, they use the legacy scanner by default. That's the one that's really slow, depending on your folder structure. The new ones are called "Plex Movie" and "Plex TV Series". Note that changing them in an existing library will trigger a scan.

If you want your mount to refresh and reflect new uploads, then don't use --poll-interval 0 and --read-only. Why separate servers for movies and shows, and not simply two libraries within the same server?
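
For example, a read/write variant of my mount that picks up new uploads could look like this (a sketch; the remote name and drive letter are placeholders, and 1m is simply rclone's default poll interval):

rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --poll-interval 1m --rc --user-agent (random name) -v SerieCrypt: X: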

Sorry to take up your time; I'm really a noob in this area.
When I mount with the Rclone cache enabled and delete items on my network drive, they automatically disappear from my vfs and vfsMeta caches, so it doesn't look "static".
It's surprising that Rclone can't automatically add the metadata to vfsMeta on every addition without also adding the VFS file data.

So I don't understand what is static.
What is the "prime" command that you mention?

Thanks for this information.
I am currently using "The Movie Database" agent for the series.
So I should replace it.

Thank you for this information.
In your configuration, what prevents Google bans when new items are scanned?

If I understand correctly, I have to run this command to mount my network drive:
rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --rc --user-agent (random name; what should I put here?) --vfs-read-chunk-size 1M -v (name of crypt remote): (drive letter):

Then I run this command in CMD: rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent (the same random name that I used before)

Every night I run the batch file that I showed you earlier?
I run your second CMD command (the one after the mount) after each addition of new items?
And that's all?

That's the difference between an actual cache on disk and the prime cache in memory. They are completely different. If you don't mind using up space, use the actual cache. The prime command I'm referring to is:

rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent *******

It only helps with making scans faster, but doesn't cache any actual files on disk. Hope that makes sense.

Yes, that's one of the legacy agents. You'll need to change the scanner within each library.

As @Animosity022 would tell you, there's no such thing as a Google ban under normal circumstances. What you're referring to is simply hitting an API limit for the day. This is also very unlikely, but can happen occasionally due to bad Plex settings. Using Rclone commands such as mine or @Animosity022's here will not get you anywhere near Google's API limits.

Yes, anything you want.

If you only run a scan once a day, then just work the prime command into your script. Make sure to put it after the mount has been established. The mount command has to have --rc for this to work.
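
For example, your nightly batch could become something like this (a sketch based on your earlier script; section 24 is your library number):

rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent (random name)
"C:\Program Files (x86)\Plex\Plex Media Server\Plex Media Scanner.exe" --scan --refresh --section 24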

So the real cache = the Rclone VFS cache, and the memory cache = the cache created by your mount configuration, is that it?
Where is the memory cache? If I shut down my server, or if it crashes, how do I get this cache back (do I need to remember to save it somewhere?)

rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent *******
Is that the "prime" command?
(a little translation problem on my part)

I'm currently using Mountain Duck and it doesn't seem to have any cache.
The API bans that I run into only happen when new files are scanned.
For existing files, even the scan that lasts 1h30 does not cause an API ban.
Just by following your config (without @Animosity022's additional configuration), I shouldn't have any problems?
I do not use Radarr or the like.
My Plex settings, on the other hand, have always been properly configured.

The memory cache is not saved on disk, so you lose it if you shut down your server or close your mount. You would have to run the command again to get it back.

Yes, it is.

Try it. Worst that could happen is that you hit some limit again for a day, which is highly unlikely.

So this command also retrieves the metadata locally, without needing to query Google directly.
If I run this command before a scan, Plex will query the in-memory cache, which already contains all the metadata, and therefore avoid the API ban, right?

We might have a different understanding of metadata. What this command does is cache folder structure and file attributes (size, date, etc.). This makes browsing (and therefore scanning) in Windows Explorer much, much faster.

When I speak of metadata, I mean (if I understood correctly) Plex downloading a small piece of a file to extract its information and list it in the Plex library.

With Mountain Duck, I could see in the Google audit console that Plex analyzed the new files (this generated download lines in the console), then scanned the rest of the files already present, and finally uploaded things back to Google for the listing (metadata).
I could see this in the audit log.
And I think that's what was causing the API ban.
The new files in question amounted to around 200 GB, with 6 seasons of 12 episodes.

These API bans could very well have come from the old agent querying Google a little too much.

Well, no. Plex needs to download a small part of a file to get information about it (codec, resolution, etc.). That information is not cached by Rclone. I suggest you do an initial scan on one of your Plex libraries, let it run overnight, and see how it goes. That's really the best you can do. It will still take some time, depending on the size of your library. 30TB should go fairly fast, though.

That is entirely possible.


I will try this setup anyway, especially since it works well for you.
I have to remember to run your manual command after each Rclone network mount, and especially to run it before each Plex scan and each addition of new items.

Thank you very much for your time, it's really very kind of you.


Hello,

Here is some quick feedback.
I was unable to test scans on a large library (because I am rebuilding everything), but your mount and script seem to be working fine so far.
However, for the moment I have a few concerns:

  • When I upload files, I get this message for every file (but I think this is normal since the cache is not enabled) -> WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes (see the sketch after this list)

  • Sometimes I have capricious files: Windows tells me that I am not authorized to copy the file because I do not have the rights.
    If I click "try again", it works.
    However, I have to do this for each affected file.

  • I looked around the forum a bit to run my mount script as a scheduled task, but it is not working.
    I set it to run as SYSTEM, with elevated privileges.
    Here is my mount script:
    cd c:\rclone
    rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --rc --user-agent scanserie --vfs-read-chunk-size 1M -v remote: C:\mount\Serveur

  • The last problem is when I run a scan via Plex Media Scanner.exe.
    It takes a very long time to complete (even when everything has already been scanned), whereas if I trigger the scan by hand (from Plex) it works fine.
    Here is my script:
    echo -- Program start %date%-%time% -------------------------------------------->> C:\Scripts\log_RcloneScanManga.log

    "C:\Program Files (x86)\Plex\Plex Media Server\Plex Media Scanner.exe" --scan --refresh --section 28 >> C:\Scripts\log_RcloneScanManga.log

    echo -- Program end %date%-%time% -------------------------------------------------->> C:\Scripts\log_RcloneScanManga.log

hello, I followed your method wanting to do the same, but when I run rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent benking, it gives me:
{
	"result": {
		"": "directory not found"
	}
}
Is this normal? What am I doing wrong otherwise?

Best to start a new post rather than hijack this thread.

remote: C:\mount\Serveur

Did you replace "remote: C:\mount\Serveur" at the end of the mount parameters with your own remote and mount point?