Status of using rclone for music storage / playback in 2021. Have access times improved?

What is the problem you are having with rclone?

It's 2021, and even with VFS cache full mode enabled and a lot of cache space allowed for the music files, the access times still make this rclone mount option unusable.

What is your rclone version (output from rclone version)

rclone v1.55.1

  • os/type: windows
  • os/arch: amd64
  • go/version: go1.16.3
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --log-file="C:\logs\rclone.log" --verbose --transfers=64 --checkers=120 --buffer-size 32M --dir-cache-time 120h --vfs-cache-mode full --vfs-read-chunk-size 16M --vfs-cache-poll-interval 1h --vfs-read-chunk-size-limit off --vfs-cache-max-size 500G --vfs-cache-max-age 480h --cache-dir F:/cache/ gcrypted:music_library/ X:

A log from the command with the -vv flag

The logs only list the files that are being removed from the cache as it reaches 500 GB. One example:

2021/11/22 12:19:33 INFO : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item 10-2019/ FLAC/folder.jpg was removed, freed 11129828 bytes
2021/11/22 12:19:33 INFO : vfs cache: cleaned: objects 20084 (was 20221) in use 3, to upload 0, uploading 0, total size 499.992G (was 502.357G)

The problem is that, even with the above settings, a decent cache size, and a small read chunk, it takes about a minute just to list the files inside a folder! This is a recurring problem that many users have already discussed here, but a solution was never found. Is the rclone team working on improving access times for smaller files? Or has someone finally figured out mount settings that make this work? I'm really eager to be able to use my cloud files with my favorite music manager.

FYI, the cache is located on an NVMe drive. I really don't understand why everything gets so slow when trying to store music on Google Drive. Could we eventually figure out a way to download and store the music files in advance during the scan, and definitively improve the access times to these files?

Thanks!!! :slight_smile:

Some odd settings overall.

You have an old version as you'd want to update that.

64 transfers would grind Google Drive to a halt with small files. You really want to use the defaults.

--checkers=120 does nothing on a mount.

You didn't share any log file, so we can't tell what your challenge is.

Access times for cloud storage are always going to have some lag, as requests have to go to the cloud. It takes a minute because you probably have a lot of small files, and you want to prime your mount so you have the metadata cached.

Best to remove most of those settings, use the defaults, and share a debug log.
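As a starting point, something like this stripped-down mount (a sketch that keeps only your remote, cache dir, and log location, with everything else at defaults) plus the -vv debug log it writes would tell us a lot:

rclone mount --log-file="C:\logs\rclone.log" -vv --vfs-cache-mode full --cache-dir F:/cache/ gcrypted:music_library/ X: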

Hi Animosity,

As usual, thank you for still being there answering all these questions! :slight_smile:

I updated rclone to the latest version and removed the transfers=64 option. But I was expecting 64 parallel transfers to actually help improve the scan times when scanning a music library with my music manager (the files are being read to analyze the spectrum). Can you please tell me why increasing the maximum number of simultaneous transfers grinds Google Drive to a halt with small files?

I removed the checkers option as well, but once again, I was expecting it to speed things up. I certainly misunderstood what this option actually does.

I just restarted rclone with the -vv option. I will paste the logs as soon as I have them.

EDIT :

The logs clearly state that I'm hitting the Google API limits... this explains the extreme slowness.

2021/11/22 13:40:16 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: Google Cloud Platform, userRateLimitExceeded)
2021/11/22 13:40:16 DEBUG : pacer: Rate limited, increasing sleep to 1.747725656s

Strangely, when I check the API query limits in the console, I'm at approximately 2k queries per 100 seconds, far from the 20k limit. This is so weird.

--transfers is only for uploading and has nothing to do with scan times.

With the Drive API, you can only upload roughly 2-3 files per second, as that's just how it is. If you try to hammer it, you are basically 'flooding the engine with gas' and you'll make things much slower, as you'll get rate limited.

--checkers isn't used on a mount; it's used in other operations (copy/sync/etc). See the previous comment: if you make it huge on those as well, you'll get rate limited and slow things down even more. More isn't always better.
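If you want a hard guard rail rather than relying on rclone's pacer to back off, there is a --tpslimit flag that caps HTTP transactions per second. Something like this (a sketch; the value is only an example to experiment with):

rclone mount --tpslimit 10 gcrypted:music_library/ X: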

Let's get a couple concepts / use cases out of the way.

Say you have a mount and you mount it for the first time. The first walk of the file tree is going to be slow, as there isn't anything in cache. Most folks (me included) prime the mount right after mounting it by caching all the metadata with a post command.

On Linux, it looks like:

ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true
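For context, that line sits in the systemd unit that starts the mount, roughly like this (a sketch, not my exact unit; the remote name, mount point, and unit layout are placeholders, and the mount needs --rc for the refresh to connect to it):

[Service]
ExecStart=/usr/bin/rclone mount GD: /GD --rc --rc-addr 127.0.0.1:5572 --vfs-cache-mode full
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true

The _async=true makes it fire and forget, so the mount doesn't block waiting for the refresh to finish.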

So once that finishes, if I scan the mount for all the files, I can see it comes back in less than a second.

felix@gemini:/GD$ time find . | wc -l
25086

real	0m0.739s
user	0m0.042s
sys	0m0.091s

If I kill my dir-cache and run the same command without a prime, you can see pretty different results: it has to walk the file system, it does that sequentially, and it takes quite some time. I can't say exactly how this behaves on Windows, as I thought @VBB mentioned it's still slow in Plex even after caching (I don't recall offhand).

felix@gemini:/GD$ kill -HUP 664
felix@gemini:/GD$ time find . | wc -l
25026

real	13m21.540s
user	0m0.301s
sys	0m0.892s

Pretty big difference.

The difference is indeed impressive! Thanks for putting this in a real-life situation.

What I do not understand, though, is what the cache is for then. I allow 500 GB of cache, so all the files should be downloaded and located in the cache, right?

How can I "prime up" a mount on a Windows machine, and what does that actually do? My main goal seems to be exactly what you are doing: the initial scan of all the folders and files will take a while (and will most likely hit the API limit... as it will scan a LOT of files), but once that is done, I want to keep a full list of these files, at least to "fake" the OS into thinking these are local, reachable files.

So, could you help me out here and let me know what I have to change in my mount line in order to keep all the metadata once and for all?

Thanks again for your help !

We need to break it apart, as some of the terms are reused a bit here and there.

When you list a directory, you have metadata, which is the directory structure and the file information (size/modtime/name/etc). That is all governed by --dir-cache-time if nothing changes. It's all in memory and is empty when you start a mount back up.

The next part is accessing data in a file. That's handled by the vfs-cache-mode full you have, and it generally uses sparse files, which means that if you grab a tiny part of a file, that's all that lives in the cache-dir area. Repeated access to the same data comes from the cache and is fast. I don't use Plex for music, so I'm not exactly sure how it works, but I'd imagine it's pretty similar, as it has to get information from the file, which requires some read access.
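You can actually see the sparse behavior on disk: the apparent size is the whole file, while the allocated size is only the chunks that have been read so far. On Linux, something like this shows the difference (a sketch; the path is hypothetical):

du -h --apparent-size /cache/vfs/remote/album/track.flac   # reports the full file size
du -h /cache/vfs/remote/album/track.flac                   # reports only the blocks actually fetched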

Now we get to a part that's a bit confusing. You have API quotas, which you can see in your console. Those are insanely huge, and hitting them really doesn't matter much, as rclone backs off and handles it without issue. You don't want to hit them repeatedly, though, as that does slow things down, so there's a balance. Those are the defined, visible limits, and they count API hits.

You also have download and upload quotas per 24 hours. 750GB is the documented upload quota. If you hit it, you can't upload any more until the quota resets.

Everything else isn't shared by Google, so your guess is as good as mine. Some say a 10TB total download limit. Some say per-file limits. Some say other things; it's not documented, so folks just guess based on their experiences.

I don't use Windows. I don't use music on Plex. You'd want to start basic, get a log, share the log, and we can see what's going on.

This used to be a huge issue for me, with scan times of more than two hours, even after priming the mount. With the new Plex scanners/agents, this went down to less than five minutes. And I did not even have to re-organize my folder structure! :wink: Funny, because many of us were blaming Explorer...

@silkyclouds - Like @Animosity022 said, you want to use the vfs/refresh command to prime the mount, regardless of cache or no cache. Whatever you end up doing, I would discourage you from using Plex's sonic analysis, just like I wouldn't use intro detection, video/chapter thumbnails, etc. with cloud storage-based media.

EDIT: Note that I do not use a cache.


Ok, things are getting clearer!

Now, stupid question, but how should I run a post command under Windows?
I believe I could simply keep the mount with the current parameters and then use Animosity's post command as is. Right?

Would it be possible to simply pause the batch after the rclone mount, and run this extra rclone command from the same batch after, let's say, a 15-second wait?

On Linux, I fire it and forget it, as I don't want to wait for it to finish. I'd assume that works the same on Windows if you tag it onto the end of your batch file, or however you start things up.

You can run it anytime as well, as it doesn't have any prerequisites.

At boot time, I run a scheduled task that looks like this:
--- mount.remote.cmd is the script that runs the rclone mount
--- refresh.cmd is the script that does the rclone rc vfs/refresh command

start C:\data\rclone\scripts\mounts\mount.remote.cmd
timeout /t 60 /nobreak
call C:\data\rclone\scripts\rr\mounts\refresh.cmd
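if you'd rather register that as a scheduled task from the command line than click through Task Scheduler, something like this works (a sketch; boot.cmd is a hypothetical wrapper holding the three lines above, and you may need to adjust the run-as user and privileges):

schtasks /create /tn "rclone-mount" /tr "C:\data\rclone\scripts\boot.cmd" /sc onstart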


And I do everything manually :wink:

Here are the three batch files that I use. The first one is a read-only mount, which stays up until I get a chance to update it with a newer (beta) version of Rclone. So, if no one is watching at the time, I do this about once a day:

@echo off
title Rclone Mount READ ONLY
D:\Programs\Rclone\rclone mount --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --no-checksum --poll-interval 0 --rc --read-only --user-agent ******* --vfs-read-chunk-size 1M -v Google_Drive_Crypt: G:
pause

Note the --rc flag, as that is necessary for the prime command to find the mount. And here it is:

@echo off
title Rclone Prime
D:\Programs\Rclone\rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent
pause

I run the above about once a day, usually after all uploads are done and I'm about to run a Plex scan. Lastly, here's my read-write mount, which I use for moving things around after uploading:

@echo off
title Rclone Mount READ/WRITE
D:\Programs\Rclone\rclone mount --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --user-agent ******* -v Google_Drive_Crypt: F:
pause

EDIT: I use all three from within simple Windows batch files.
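One practical note: if the prime command ever can't find the mount, you can first check that the mount's rc server is actually listening. Both sides default to localhost:5572, so no --rc-addr is needed (a sketch):

D:\Programs\Rclone\rclone rc core/version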

Thank you for this!

I will try to use the same settings, but I will keep a 500 GB VFS cache, as I have the luxury of NVMe space to dedicate to this, so I can simply read directly from the NVMe drive when possible.

Your VFS chunk size of 1M is surprising. I set 16M, as the mount that actually has the issue only contains FLAC and MP3 files, and 16M seemed to be the sweet spot. But let's give 1M a chance for the sake of testing!

I start all my batch files as services I create, so I will simply update the batch file I'm currently using and add a pause, as @asdffdsa suggested!

One very last question: I saw @Animosity022 is using the web UI on the default port for some reason. Is this useful in any way? I don't need to serve my files through HTTP at all, right?

Thanks again! Your help is MUCH appreciated! :slight_smile:

I only use it temporarily, as I'm having Plex analyze media overnight: the "Upgrade media analysis during maintenance" option, not "Perform extensive media analysis during maintenance". You should be fine with the default 16M.

OK, so here we go. I updated my .bat file, and here is its content for now:

@echo off
title Rclone Music Mount READ/WRITE
rclone mount --log-file="C:\logs\rclone.log" -v --attr-timeout 5000h --dir-cache-time 5000h --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --no-checksum --poll-interval 0 --rc --vfs-cache-mode full --vfs-read-chunk-size 1M --vfs-cache-poll-interval 1h --vfs-read-chunk-size-limit off --vfs-cache-max-size 500G --vfs-cache-max-age 5000h --cache-dir F:/cache/ gcrypted:music_library/ X:
timeout /t 60 /nobreak
rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent
pause

As soon as I start it, I see it uses a lot of CPU, which seems to be a good thing.

I can see in the logs that the rc service was started:
2021/11/23 08:38:57 NOTICE: Serving remote control on http://localhost:5572/
The URL itself just reports "not found" in the browser, though.

And now a little extract of my logs (I'm only posting extracts, as it deletes 50k folders in the first few seconds, which grows the log size considerably!):

2021/11/23 08:46:37 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "mount" "--log-file=C:\logs\rclone.log" "-vv" "--attr-timeout" "5000h" "--dir-cache-time" "5000h" "--drive-pacer-burst" "200" "--drive-pacer-min-sleep" "10ms" "--no-checksum" "--poll-interval" "0" "--rc" "--vfs-cache-mode" "full" "--vfs-read-chunk-size" "1M" "--vfs-cache-poll-interval" "1h" "--vfs-read-chunk-size-limit" "off" "--vfs-cache-max-size" "500G" "--vfs-cache-max-age" "5000h" "--cache-dir" "F:/cache/" "gcrypted:music_library/" "X:"]
2021/11/23 08:46:37 NOTICE: Serving remote control on http://localhost:5572/
2021/11/23 08:46:37 DEBUG : Creating backend with remote "gcrypted:music_library/"
2021/11/23 08:46:37 DEBUG : Using config file from "C:\Users\meaning\.config\rclone\rclone.conf"
2021/11/23 08:46:37 DEBUG : Creating backend with remote "gdrive:gdrive-crypted/tdruus33sc6ecfmdm5mle2hq90"
2021/11/23 08:46:37 DEBUG : gdrive: detected overridden config - adding "{cHldw}" suffix to name
2021/11/23 08:46:38 DEBUG : fs cache: renaming cache item "gdrive:gdrive-crypted/tdruus33sc6ecfmdm5mle2hq90" to be canonical "gdrive{cHldw}:gdrive-crypted/tdruus33sc6ecfmdm5mle2hq90"
2021/11/23 08:46:38 DEBUG : fs cache: switching user supplied name "gdrive:gdrive-crypted/tdruus33sc6ecfmdm5mle2hq90" for canonical name "gdrive{cHldw}:gdrive-crypted/tdruus33sc6ecfmdm5mle2hq90"
2021/11/23 08:46:38 DEBUG : vfs cache: root is "F:\cache"
2021/11/23 08:46:38 DEBUG : vfs cache: data root is "\\?\F:\cache\vfs\gcrypted\music_library"
2021/11/23 08:46:38 DEBUG : vfs cache: metadata root is "\\?\F:\cache\vfsMeta\gcrypted\music_library"
2021/11/23 08:46:38 DEBUG : Creating backend with remote "F:/cache/vfs/gcrypted/music_library/"
2021/11/23 08:46:38 DEBUG : fs cache: renaming cache item "F:/cache/vfs/gcrypted/music_library/" to be canonical "//?/F:/cache/vfs/gcrypted/music_library/"
2021/11/23 08:46:38 DEBUG : Creating backend with remote "F:/cache/vfsMeta/gcrypted/music_library/"
2021/11/23 08:46:38 DEBUG : fs cache: renaming cache item "F:/cache/vfsMeta/gcrypted/music_library/" to be canonical "//?/F:/cache/vfsMeta/gcrypted/music_library/"
2021/11/23 08:47:00 ERROR : Local file system at //?/F:/cache/vfs/gcrypted/music_library/: Failed to list "10-2019/Alex Falk - OOF (WEB FLAC 24) ": directory not found
2021/11/23 08:47:01 ERROR : Local file system at //?/F:/cache/vfs/gcrypted/music_library/: Failed to list "10-2019/Rembert De Smet & Ferre Baelen - Le Mystérieux EP [FLAC] ": directory not found
2021/11/23 08:47:01 ERROR : Local file system at //?/F:/cache/vfs/gcrypted/music_library/: Failed to list "10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager ": directory not found
2021/11/23 08:47:01 INFO : 10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager : Removing directory
2021/11/23 08:47:01 ERROR : 10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager : Failed to rmdir: remove \?\F:\cache\vfs\gcrypted\music_library\10-2019\Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager␠: The system cannot find the file specified.
2021/11/23 08:47:01 ERROR : Local file system at //?/F:/cache/vfs/gcrypted/music_library/: vfs cache: failed to remove empty directories from cache path "": remove \?\F:\cache\vfs\gcrypted\music_library\10-2019\Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager␠: The system cannot find the file specified.
2021/11/23 08:47:01 ERROR : Local file system at //?/F:/cache/vfsMeta/gcrypted/music_library/: Failed to list "10-2019/Alex Falk - OOF (WEB FLAC 24) ": directory not found
2021/11/23 08:47:03 ERROR : Local file system at //?/F:/cache/vfsMeta/gcrypted/music_library/: Failed to list "10-2019/Rembert De Smet & Ferre Baelen - Le Mystérieux EP [FLAC] ": directory not found
2021/11/23 08:47:03 ERROR : Local file system at //?/F:/cache/vfsMeta/gcrypted/music_library/: Failed to list "10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager ": directory not found
2021/11/23 08:47:03 INFO : 10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager : Removing directory
2021/11/23 08:47:03 ERROR : 10-2019/Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager : Failed to rmdir: remove \?\F:\cache\vfsMeta\gcrypted\music_library\10-2019\Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager␠: The system cannot find the file specified.
2021/11/23 08:47:03 ERROR : Local file system at //?/F:/cache/vfs/gcrypted/music_library/: vfs cache: failed to remove empty directories from metadata cache path "": remove \?\F:\cache\vfsMeta\gcrypted\music_library\10-2019\Various Artists - Best of Klassik 2018 - Die grosse Gala der OPUS KLASSIK-Preistrager␠: The system cannot find the file specified.
2021/11/23 08:47:03 DEBUG : Network mode mounting is disabled
2021/11/23 08:47:03 DEBUG : Mounting on "X:" ("gcrypted music_library")
2021/11/23 08:47:03 DEBUG : Encrypted drive 'gcrypted:music_library/': Mounting with options: ["-o" "attr_timeout=1.8e+07" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "-o" "volname=gcrypted music_library"]
2021/11/23 08:47:03 DEBUG : Encrypted drive 'gcrypted:music_library/': Init:
2021/11/23 08:47:03 DEBUG : Encrypted drive 'gcrypted:music_library/': >Init:
2021/11/23 08:47:03 DEBUG : /: Statfs:

After the initial start, it then begins to "list" all my files and folders, I guess, which looks like this:

2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: Releasedir: fh=0xF
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: >Releasedir: errc=0
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: >Getattr: errc=0
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: >Getattr: errc=0
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: Opendir:
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: >OpenFile: fd=07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata/ (r), err=
2021/11/23 08:50:08 DEBUG : /07-2021/(APD 43) dISHARMONY - dark-Live-fest (2021) WEB/TrackerMetadata: >Opendir: errc=0, fh=0xF
2021/11/23 08:50:08 DEBUG : /09-2019/(2019 - WEB - 24bitFLAC) DJ Ladybarn - Phonk In Tha Garage/TrackerMetadata: Readdir: ofst=0, fh=0x1E
2021/11/23 08:50:08 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console
2021/11/23 08:50:08 DEBUG : pacer: Rate limited, increasing sleep to 1.053817085s

Here we can see that the API limit was hit a few seconds after I started rclone.

So, will the refresh now "cache" my files and keep me from hitting the API limits once everything is in the cache (for real?)? And should I be worried about that "not found" message when connecting to 127.0.0.1?

Also, if I understand this correctly, the initial scan of all the files and folders will now happen, hitting the API limit, but the folder and file structure (metadata) should then be stored somewhere, right? Sorry, I am still a little bit confused about how this actually works.

Some more log extracts; it seems like it is opening and listing all the subfolders and files?

2021/11/23 14:54:40 DEBUG : /09-2019: >Opendir: errc=0, fh=0x2A
2021/11/23 14:54:40 DEBUG : /04-2021: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/11/23 14:54:40 DEBUG : /04-2021: >Getattr: errc=0
2021/11/23 14:54:40 DEBUG : /04-2021: Getattr: fh=0xFFFFFFFFFFFFFFFF
2021/11/23 14:54:40 DEBUG : /04-2021: >Getattr: errc=0
2021/11/23 14:54:40 DEBUG : /04-2021: Opendir:
2021/11/23 14:54:40 DEBUG : /04-2021: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2021/11/23 14:54:40 DEBUG : /04-2021: >OpenFile: fd=04-2021/ (r), err=
2021/11/23 14:54:40 DEBUG : /04-2021: >Opendir: errc=0, fh=0x2C
2021/11/23 14:54:40 DEBUG : /09-2021/Roger Doyle - iGIRL - Act Two (2021) [FLAC]: Readdir: ofst=0, fh=0x5

Does this mean that after the full opening, scanning, and caching (the vfs/refresh thing), everything should be fluid when opening folders and listing files from now on?

Are you using your own client ID/secret? Those are just API quota limits, which aren't bad. I generally suggest you push the API as hard as you can, as it's no harm / no foul. If you over-push it, you get throttled, and that slows things down, so more is not always better.

Back to a previous point: I'm not a Windows guy, but generally, if you run that refresh, it makes file listing 'speedy'; on Linux it is instant.
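The Windows equivalent of my find test would be something like this in PowerShell (a sketch; run it before and after the refresh to compare the times):

Measure-Command { Get-ChildItem -Path X:\ -Recurse | Out-Null }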

Once the metadata is in memory, you have file reading. Once that's cached, it should be just about like a local disk.

The goal in Plex is usually just to wait out those scans so all your media get analyzed and from that point on, life is usually good.

Yes, I configured my own project, and the limit is 20k hits per 100 seconds.

And cool! So I'm letting Roon (not using Plex, but Roon as my music manager) scan everything again (it is actually going through all the files as we speak, generating a crazy big log file), and after everything has been scanned, I guess the next scan will be instant, as the files will be "cached".

Can I see somewhere where this file list is being cached?

Thanks again for your greaaaaat help :slight_smile:

You'd see the files in your cache dir (F:/cache in your case).

This is confusing me even more :smiley:

[screenshot of the F:\cache folder, showing the vfs and vfsMeta directories]

I already had this vfsMeta folder (alongside the vfs folder where the actual downloaded files are... 500 GB...), but all the folders that were in there were empty.

What more is vfs/refresh doing, then? That's the part I'm not getting.

I just checked the content of vfsMeta, and I do see the folders, all of them. But once again, whatever folder I open is empty. Just one example:

Here you can see that no file seems to be listed at all. Am I doing something wrong?
As explained, this vfsMeta folder already existed before I added the prime step.

You have two 'caches' if you will.

You have a directory / file structure (your tree). That's controlled by --dir-cache-time and is a memory-only thing. The refresh loads all of this into memory, so walking your directory tree is fast. It's lost on restart and expires based on changes and/or the value of --dir-cache-time.

It's documented here:

https://rclone.org/commands/rclone_mount/#vfs-directory-cache

Second, you have a file cache, which you've configured with vfs-cache-mode full. That makes the files you access get cached locally.

That's documented here:

https://rclone.org/commands/rclone_mount/#vfs-cache-mode-full
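To tie that back to your setup: the paths from your own debug log show the two on-disk halves of the file cache (the vfsMeta side holds small per-file bookkeeping records, not your music):

F:\cache\vfs\gcrypted\music_library\      <- sparse copies of the file data you've actually read
F:\cache\vfsMeta\gcrypted\music_library\  <- per-file cache metadata

The directory tree you prime with vfs/refresh is the first kind of cache, which lives only in memory, and that's why you don't see it anywhere under F:\cache.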