How do I improve this mount config? What am I doing wrong?

What is the problem you are having with rclone?

Unable to speed up the rclone mount or to cache the directory structure
Goal:

  • Mount the teamdrive
  • Cache the directory structure as much as possible
  • When a file is hit, download it and serve it through a file server that lists the files
  • Make sure that when rclone restarts it doesn't wipe this data, so the next start won't have to rebuild from zero
  • Keep files for at least 24 hours in case the same file is hit again
  • Read efficiency is the goal, as no files will be written through this mount

I have around 1TB of storage on the HDD to use as local cache.
So the flow is:
User hits the file server link > rclone starts downloading > file server serves the file to the user.

Where I am stuck
Rclone takes time to load the directory structure, and things time out.
I tested by mounting a subfolder from the team drive - that works, but it slows down if it has to list a couple dozen folders (just listing, no downloads).

Kindly guide on

  • which flags I am supplying wrong
  • what can be improved to keep things cached

What is your rclone version (output from rclone version)

rclone v1.53.4

  • os/arch: windows/amd64
  • go version: go1.15.6

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows Server 2019 64bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

My mount command

set rcloneflags=--allow-other ^
--vfs-cache-max-age 6h ^
--no-modtime ^
--attr-timeout 8700h ^
--buffer-size 512M ^
--cache-chunk-path "E:\rclone\cachechunkpath" ^
--cache-dir "E:\rclone\cachedir" ^
--dir-cache-time 8760h ^
--poll-interval 30s ^
--drive-chunk-size 32M ^
--timeout 2h ^
--vfs-cache-mode full ^
--vfs-read-chunk-size 128M ^
--vfs-read-chunk-size-limit 2G ^
--cache-db-path "E:\rclone\dbcache" ^
-v ^
--progress ^
--drive-client-id "ID-HERE" ^
--drive-client-secret "SECRET-HERE"

set servername=awoo

rclone ^
mount ^
%servername%: "E:\1.%servername%" ^
%rcloneflags%

The rclone config contents with secrets removed.

[awoo]
type = drive
scope = drive
service_account_file = accounts/awoo.json
service_account_file_path = accounts/
root_folder_id = ID-HERE

A log from the command with the -vv flag

NONE

Without any logs, it's a lot of guessing what may or may not be happening.

you can pre-cache the mount. and re-cache it on the fly, as needed.
Google Drive, Plex, Windows 10 - #5 by VBB
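
for reference, the priming command from that post looks like this (it only works against a mount that was started with --rc):

rclone rc vfs/refresh recursive=true --timeout 10m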

these flags require a cache remote, which, based on your config file, you are not using, so remove them:
--cache-chunk-path "E:\rclone\cachechunkpath"
--cache-db-path "E:\rclone\dbcache"

--vfs-cache-max-age=24h
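
putting those changes together, a trimmed flag block with the cache-remote flags dropped and the 24h retention applied might look like this - a sketch based on your bat file, not a tested command:

set rcloneflags=--no-modtime ^
--attr-timeout 8760h ^
--dir-cache-time 8760h ^
--poll-interval 30s ^
--cache-dir "E:\rclone\cachedir" ^
--vfs-cache-mode full ^
--vfs-cache-max-age 24h ^
--vfs-read-chunk-size 128M ^
--vfs-read-chunk-size-limit 2G ^
--drive-client-id "ID-HERE" ^
--drive-client-secret "SECRET-HERE"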

So instead of removing them, should I use a cache remote?
The chunk paths point to E: because I want to keep all of this on that drive, since it has the most storage.

What should the cache remote config look like? Something like this?
[awoo-cache]
remote = ConfigName:
chunk_size = 128M
info_age = 48h
chunk_total_size = 50G

you can pre-cache the mount. and re-cache it on the fly, as needed.
Google Drive, Plex, Windows 10 - #5 by VBB

I read the post you linked and found these:
--attr-timeout 1000h --dir-cache-time 1000h --poll-interval 5m
This should keep the attributes and dir structure cached for 1000 hours and check for changes once every 5 minutes, correct?

And I saw the line that says
C:\rclone\rclone rc vfs/refresh recursive=true --timeout 10m
Does that update the dir cache completely, or does it "generate" the structure by walking through it all?
I am a little confused about this command.

I will test out the pointers that asdffdsa provided, and this time I will use -vv or -v to gather logs as the tests go along.

correct,
if you want to update the cache for just a subfolder, you can do that
https://rclone.org/rc/#vfs-refresh
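
for example, to refresh just one subfolder (hypothetical path), vfs/refresh takes a dir argument:

rclone rc vfs/refresh recursive=true dir=path/to/subfolder --timeout 10m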

what i do is

  1. add media to remote.
  2. pre-cache the mount.
  3. have jellyfin scan the mount.
  4. be happy....

i do not use gdrive for streaming.
my advice is to provide @Animosity022 with all the info he requests.

he has written the go-to guide for gdrive and plex.
https://github.com/animosity22/homescripts
https://github.com/animosity22/homescripts/blob/master/systemd/rclone.service

Oh! I see, so this is probably why my HTTP server was timing out.
I need to mount > pre-cache it all > then test it out

my advice is to provide @Animosity022, with all the info he requests.

I will do. I will start testing, try the pre-cache, and then get back here with logs on what can be improved - perhaps even a test link (if it is allowed to provide a link to the test file server).

good,
if you post again, make sure to include your config file (redacting id/pwd) and the exact mount command.

Current rclone command

set rcloneflags=--allow-other ^
--no-modtime ^
--attr-timeout 8700h ^
--buffer-size 512M ^
--cache-dir "E:\rclone\cachedir" ^
--dir-cache-time 8760h ^
--poll-interval 30s ^
--drive-chunk-size 32M ^
--timeout 2h ^
--vfs-cache-mode full ^
--vfs-read-chunk-size 128M ^
--vfs-cache-max-age 24h ^
--vfs-read-chunk-size-limit 2G ^
-v ^
--progress ^
--drive-client-id "ID-HERE" ^
--drive-client-secret "SECRET-HERE"

rclone mount teamdrive: "E:\teamdrive" %rcloneflags%

Config

[teamdrive]
type = drive
scope = drive
service_account_file = accounts/PublicSyncs-366.json
service_account_file_path = accounts/
root_folder_id = ID-HERE

[cache]
remote = teamdrive:
chunk_size = 128M
info_age = 48h
chunk_total_size = 50G

Observation
When I ran the command:

Logs

Running: rclone rc vfs/refresh recursive=true --timeout 10m

rclone rc vfs/refresh recursive=true --timeout 10m -vv
2021/01/25 20:40:12 DEBUG : rclone: Version "v1.53.4" starting with parameters ["rclone" "rc" "vfs/refresh" "recursive=true" "--timeout" "10m" "-vv"]
2021/01/25 20:40:13 DEBUG : 2 go routines active
2021/01/25 20:40:13 Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": dial tcp [::1]:5572: connectex: No connection could be made because the target machine actively refused it.
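
A quick sanity check I found (assuming I read the rc docs right): rclone rc rc/noop just echoes its input back, and it fails with this same connection error when nothing is listening on port 5572:

rclone rc rc/noop param=hello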

Running the mount bat and trying to access via my HTTP server:

2021/01/25 20:53:14 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
The service rclone has been started.
2021/01/25 20:54:14 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2021/01/25 20:55:14 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2021/01/25 20:56:14 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

That's all that shows up before my HTTP server times out.
Reminder: it does work if I mount a small subfolder.

Debug logs are here

So by the looks of the debug logs, I think my issue is that the VFS cache isn't fully populated.

if you want to run the pre-cache command, then you need to add --rc to the mount command

rclone --rc mount teamdrive: "E:\teamdrive" %rcloneflags%
Like this?
Do I run this each time rclone starts or one time?

yes, like that, each time you run a rclone mount
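
a minimal bat sketch that chains the two steps - assuming the rcloneflags variable from earlier and the default rc port 5572:

rem start the mount in its own window with the rc server enabled
start "rclone" rclone --rc mount teamdrive: "E:\teamdrive" %rcloneflags%
rem give the mount a few seconds to come up before priming
timeout /t 10 /nobreak
rem prime the directory cache
rclone rc vfs/refresh recursive=true --timeout 10m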

Okay, dumb question, but
if I run it each time, rclone caches the structure and then readies up.
Would the next restart also do this from scratch, or will it be cached to disk somewhere and just refreshed?

a rclone mount by itself does not pre-cache (also called prime) the mount.
to prime the mount, you have to run rclone rc vfs/refresh

as for the next restart and the state of the dir cache, i have no idea.
each time i mount, i always prime.
each time i need jellyfin to scan for new media, i prime the mount first.
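
if you ever want to re-prime on a schedule instead of by hand, a windows scheduled task could do it - an untested sketch, the task name is made up:

schtasks /create /tn "rclone-prime" /sc hourly /tr "rclone rc vfs/refresh recursive=true --timeout 10m"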

Okay, understood what --rc is.

Now I'm getting this

2021/01/25 22:29:27 DEBUG : pacer: Rate limited, increasing sleep to 1.211007346s
2021/01/25 22:29:27 DEBUG : pacer: Reducing sleep to 0s
2021/01/25 22:29:27 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=, userRateLimitExceeded)
2021/01/25 22:29:27 DEBUG : pacer: Rate limited, increasing sleep to 1.204338965s
2021/01/25 22:29:27 DEBUG : pacer: Reducing sleep to 0s

Edit: Got this on rc refresh.

rclone rc vfs/refresh recursive=true --timeout 10m
{
    "result": {
        "": "OK"
    }
}

Time to test

that pacer output is to be expected, as gdrive has lots of throttling and limits,
so rclone will change its behavior on the fly as needed.
as per the log entry about adjusting project quota limits, that might need a tweak.
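
if the 403s keep coming, rclone also has client-side throttling flags that can be added to the mount - a tweak to consider, not something tested in this thread:

rclone --rc --tpslimit 8 --tpslimit-burst 8 mount teamdrive: "E:\teamdrive" %rcloneflags%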

All right.
Update: after priming the VFS cache, it works.
IT WORKS - pages load and it feels normal to browse.

Now comes the question of

  • Improve cache
  • Keep each file in cache for at least 24 hours
  • Max cache size
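
From the docs, the VFS flags that seem to map to those three points are (untested on my side; the 900G figure is just my guess to leave headroom on the 1TB cache drive):

--vfs-cache-mode full ^
--vfs-cache-max-age 24h ^
--vfs-cache-max-size 900G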

good, it is working.

at this point, take some time and read the docs...

Good, thanks for sharing!
I had a similar problem, now everything is OK.

good to know...
