Rclone mount cache seems to be rebuilt despite being present?

What is the problem you are having with rclone?

Hello rclone community,

I am new to rclone and hoping you could help me with my particular challenge.

We have an S3 bucket in AWS that we would like to mount (and have read-only access) on an AWS AppStream image on Windows Server 2019.

Just to clarify, mounting the S3 bucket does work. The only issue is that the first time it accesses the S3 bucket (to launch our application, which is stored in the bucket), it takes between 8 and 10 seconds. Subsequent launches are quick. I suppose this is expected: the first time round it builds the cache (hence the 8-10s hit), whereas subsequent launches read from the cache and the application starts almost immediately.

Coming onto my challenge: what I would like to do is reduce the wait time for the cache to be built every time a new AppStream session starts. AWS AppStream has a concept of application persistence, where the user profile is stored on a VHD (and mounted into the AppStream session). Since an AppStream instance is always started from a golden image (in our case, a vanilla Windows 2019 image), if we stored the cache in e.g. c:\temp it would need rebuilding every time a new AppStream instance starts (because the golden image wouldn't have it). However, since we enable application persistence, we can store it under C:\Users\PhotonUser\AppData\Local\rclone (i.e. the mounted VHD), so it persists into the next AppStream instance (if the current instance is terminated).

From what I'm seeing, the cache does get retained when I terminate and start a new AppStream session. However, looking at the properties of an item (e.g. appclient.uar), I can see that the created date/time doesn't change, but the modified date/time changes every time a new AppStream session is launched. And the same behaviour - 8-10s to load the application on the first launch of every AppStream session, but almost instant for subsequent launches - is still observed despite the cache being stored on the mounted VHD (I terminated and started a new session within 3 minutes). I suspect the cache is being rebuilt on each AppStream session.

How do I go about speeding up the launch of my application at the first launch of an appstream instance (if there is already a cache present)? E.g. Is there a way to read from the cache without rebuilding it if it already exists? And how would it detect if changes have been made on the S3 bucket (which may not be that often)?

Hope to hear back

Many thanks,
James

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.0

  • os/version: Microsoft Windows Server 2019 Datacenter 1809 (64 bit)
  • os/kernel: 10.0.17763.3046 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.18.3
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

AWS S3 bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

c:\rclone\rclone.exe --config c:\rclone\rclone.conf mount sits_s3:mybucketomount S: --vfs-cache-mode full --network-mode --file-perms 0557 --read-only --no-modtime --buffer-size 0 --vfs-fast-fingerprint --cache-dir C:\Users\PhotonUser\AppData\Local\rclone --s3-profile appstream_machine_role

The rclone config contents with secrets removed.

[s3]
type = s3
provider = AWS
env_auth = true
region = ap-southeast-2
location_constraint = ap-southeast-2
server_side_encryption = aws:kms
sse_kms_key_id = <SOME_ARN>
storage_class = STANDARD

A log from the command with the -vv flag

I was about to include it, but posting errored saying that the body of the message cannot have more than 32000 characters (and I had 2832000+ characters with the logs included). Can someone please let me know how I can upload the logs without exceeding the character limit? Thank you

It's written in the template on how to include a log file:

You should use 3 backticks to begin and end your paste to make it readable. Or use a service such as https://pastebin.com or https://gist.github.com/

Hi Animosity022,

Thanks for the guidance. I wasn't familiar with the services, but have given https://gist.github.com/ a go now :slight_smile:

Logs can be found here now (hope this works), thanks

Kind regards,
James

I'm not sure what the app is doing, but if it's trying to read the whole file before starting, that would probably explain why it takes some time.

If you check for:

2022/08/01 07:51:53 DEBUG : vfs cache: looking for range={Pos:0 Size:4096} in [] - present false

That means the part it is looking for is not in memory and it has to download it.

2022/08/01 07:51:53 DEBUG : mydir/uniface/bin/uniface.exe(0xc000a08dc0): _readAt: size=5120, off=6656
2022/08/01 07:51:53 DEBUG : vfs cache: looking for range={Pos:6656 Size:5120} in [{Pos:0 Size:32768}] - present true

That means it came from the local disk cache and was there, so it reads it locally.
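If you want to quantify this, the hit/miss lines can be counted straight out of a debug log. A minimal Python sketch (`count_cache_lookups` is just an illustrative helper; the patterns come from the log excerpts above):

```python
import re

# Matches VFS cache lookup lines like the ones quoted above, e.g.
# "... vfs cache: looking for range={Pos:0 Size:4096} in [] - present false"
LOOKUP = re.compile(r"vfs cache: looking for range=\{[^}]*\} in .* - present (true|false)")

def count_cache_lookups(log_text):
    """Count cache hits ('present true') and misses ('present false')."""
    hits = misses = 0
    for line in log_text.splitlines():
        m = LOOKUP.search(line)
        if m is None:
            continue  # not a cache lookup line
        if m.group(1) == "true":
            hits += 1
        else:
            misses += 1
    return hits, misses
```

Running it over the log of a single app launch gives a quick feel for how much was served from the local cache versus downloaded.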

This is the part you want to check out:

rclone mount

The defaults are unlimited size, and after one hour an object is evicted from the cache.

--vfs-cache-max-age duration         Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix      Max total size of objects in the cache (default off)

I set both a size and a new max age: I want to keep everything as long as possible, so I use 9999h, and on my disk I have about 700GB set aside for the rclone local disk cache.

Looking through your log, it appears that restoring the cache isn't working properly, for example

2022/08/01 07:51:53 DEBUG : mydir/uniface/bin/uniface.exe(0xc000a08dc0): >openPending: err=<nil>
2022/08/01 07:51:53 DEBUG : vfs cache: looking for range={Pos:0 Size:4096} in [] - present false

Which kind of indicates the file isn't in the cache.

Also on startup we see no objects in the cache which is a bit more obvious.

2022/08/01 07:51:39 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

If you can save and restore --cache-dir C:\Users\PhotonUser\AppData\Local\rclone then the cache should persist between runs of rclone.

Thanks Animosity022 and Nick for helping me go through the logs and share your thoughts.

Animosity022 mentioned:

I'm not sure what the app is doing, but if it's trying to read the whole file before starting, that would probably explain why it takes some time.

If you check for:

2022/08/01 07:51:53 DEBUG : vfs cache: looking for range={Pos:0 Size:4096} in [] - present false

That means the part it is looking for is not in memory and it has to download it.

Just so that I understand: if the cache files are there but not loaded into memory, would the application still take the same time to launch as it would if the cache files weren't there at all? A new AppStream session always gives me a fresh Windows instance, meaning the files won't be in memory on a new session - but the cache files are definitely mounted on the AppStream image and accessible.

Nick also suggested that the file isn't in the cache - but I can definitely see the files/folders under C:\Users\PhotonUser\AppData\Local\rclone. I've double-checked this by logging into a new AppStream instance without executing/mounting rclone. I.e. the folders and files definitely exist in the cache, but as Nick suggested, they don't appear to be read properly for some reason.

To give more background: when an AppStream instance starts (and before the user is connected), a setup script is run to set up the user session. In my case, the setup script installs WinFsp and rclone, puts the rclone.conf in place and runs the rclone command to mount the S3 bucket. The VHDX that contains the cache files/folders is also mounted as part of the process. These processes run on every new AppStream instance (as they come from a fresh Windows 2019 image). What might be of interest here (and I'm not sure if this is contributing to the problem) is that the setup script is executed with elevated privileges (probably as an admin user), but when the user session starts, the user has no admin rights. So I'm wondering if the cache cannot be read because the user doesn't have permission to access it (since the rclone mount is done through the setup script, i.e. possibly as another user). I wonder what will happen if I give full permissions on the cache-dir as part of the setup script. Will give this a go and let you know - but feel free to add your thoughts/suggestions :slight_smile:

Many thanks,
James

I've just tried giving full permissions to all users on the cache-dir, and sadly that doesn't seem to have made any difference to the time it takes to load the app the first time.

New logs can be found here.

Any other thoughts/suggestions?

Many thanks,
James

Can you share the directory tree under the cache directory? It's important that it matches what rclone creates.

Hi Nick,

Sure, this is what the directory tree (of C:\Users\PhotonUser\AppData\Local\rclone) looks like

C:\USERS\PHOTONUSER\APPDATA\LOCAL\RCLONE
+---vfs
¦   +---s3{0uQwe}
¦       +---mybucketomount
¦           +---mydir
¦               +---adm
¦               ¦       text.asn
¦               ¦       uclilogicals.asn
¦               ¦       ucliapp.asn
¦               ¦       
¦               +---app
¦               ¦   ¦   appclient.uar
¦               ¦   ¦   
¦               ¦   +---3gl
¦               ¦   ¦   ¦   libhunspell.dll
¦               ¦   ¦   ¦   siafd32.dll
¦               ¦   ¦   ¦   sifileop.dll
¦               ¦   ¦   ¦   simd5.dll
¦               ¦   ¦   ¦   siqadd32.dll
¦               ¦   ¦   ¦   siqadd60.dll
¦               ¦   ¦   ¦   siradd32.dll
¦               ¦   ¦   ¦   
¦               ¦   ¦   +---mswin64
¦               ¦   ¦           libhunspell_x64.dll
¦               ¦   ¦           sifileop_x64.dll
¦               ¦   ¦           
¦               ¦   +---sup
¦               ¦   ¦   +---resources
¦               ¦   ¦       +---frm
¦               ¦   ¦       ¦       men_yfjs.frm
¦               ¦   ¦       ¦       
¦               ¦   ¦       +---prc
¦               ¦   ¦       ¦       build_text1@system_library.prc
¦               ¦   ¦       ¦       
¦               ¦   ¦       +---sig
¦               ¦   ¦               men_yfjs.sig
¦               ¦   ¦               
¦               ¦   +---ulm
¦               ¦           license10.xml
¦               ¦           
¦               +---uniface
¦                   +---adm
¦                   ¦       usys.asn
¦                   ¦       usys.ini
¦                   ¦       
¦                   +---bin
¦                   ¦       chrome_elf.dll
¦                   ¦       irtl.dll
¦                   ¦       libcef.dll
¦                   ¦       libdlm32.dll
¦                   ¦       uauthlogin.dll
¦                   ¦       ucall.dll
¦                   ¦       uenc.dll
¦                   ¦       ugui.dll
¦                   ¦       ulat.dll
¦                   ¦       ulib.dll
¦                   ¦       umsw.dll
¦                   ¦       uniface.exe
¦                   ¦       upop.dll
¦                   ¦       uprn.dll
¦                   ¦       urtl.dll
¦                   ¦       uwid.dll
¦                   ¦       yrtl.dll
¦                   ¦       
¦                   +---usys
¦                           usys.uar
¦                           
+---vfsMeta
    +---s3{0uQwe}
        +---mybucketomount
            +---mydir
                +---adm
                ¦       text.asn
                ¦       uclilogicals.asn
                ¦       ucliapp.asn
                ¦
                +---app
                ¦   ¦   appclient.uar
                ¦   ¦
                ¦   +---3gl
                ¦   ¦   ¦   libhunspell.dll
                ¦   ¦   ¦   siafd32.dll
                ¦   ¦   ¦   sifileop.dll
                ¦   ¦   ¦   simd5.dll
                ¦   ¦   ¦   siqadd32.dll
                ¦   ¦   ¦   siqadd60.dll
                ¦   ¦   ¦   siradd32.dll
                ¦   ¦   ¦
                ¦   ¦   +---mswin64
                ¦   ¦           libhunspell_x64.dll
                ¦   ¦           sifileop_x64.dll
                ¦   ¦
                ¦   +---sup
                ¦   ¦   +---resources
                ¦   ¦       +---frm
                ¦   ¦       ¦       men_yfjs.frm
                ¦   ¦       ¦
                ¦   ¦       +---prc
                ¦   ¦       ¦       build_text1@system_library.prc
                ¦   ¦       ¦
                ¦   ¦       +---sig
                ¦   ¦               men_yfjs.sig
                ¦   ¦
                ¦   +---ulm
                ¦           license10.xml
                ¦
                +---uniface
                    +---adm
                    ¦       usys.asn
                    ¦       usys.ini
                    ¦
                    +---bin
                    ¦       chrome_elf.dll
                    ¦       irtl.dll
                    ¦       libcef.dll
                    ¦       libdlm32.dll
                    ¦       uauthlogin.dll
                    ¦       ucall.dll
                    ¦       uenc.dll
                    ¦       ugui.dll
                    ¦       ulat.dll
                    ¦       ulib.dll
                    ¦       umsw.dll
                    ¦       uniface.exe
                    ¦       upop.dll
                    ¦       uprn.dll
                    ¦       urtl.dll
                    ¦       uwid.dll
                    ¦       yrtl.dll
                    ¦
                    +---usys
                            usys.uar

Thanks for your help and let me know if you need more information from me

Cheers,
James

Hello,

Just to add.. I've done a comparison on the cache-dir directory tree where

  • I only started the AppStream session and didn't mount the S3 bucket (i.e. the cache-dir retained from the previous session)
  • I launched the application after mounting the S3 bucket on a new AppStream session (i.e. the cache-dir seemingly being rebuilt, which leads to the 8-10s wait time)

And both directory trees match exactly

Many thanks,
James

That directory structure looks good.

Did you try making a session without the cache-dir and see what the rebuilt cache dir looks like? I'm not 100% sure from what you said!

The {....} means you are passing s3 parameters on the command line.

If the parameters to this flag change, then the hash in the {...} will change.

Hi Nick,

Thanks for persevering with me!

Correct - apologies for my bad attempt at explaining what I did. To be clearer, what I meant is that I did comparisons against

  • a "blank" session - meaning I don't do anything with rclone in that session (but this session has the cache-dir from the previous AppStream session), vs

  • a "normal" session - meaning I mount the S3 bucket using rclone in the session and launch the application (which, at the moment, looks like it rebuilds the cache)

And comparing the cache-dir across the two works out to be an exact match.

Can you elaborate more on what you mean by

The {....} means you are passing s3 parameters on the comand line.

For my tests, I haven't made any changes to the items in the S3 bucket, nor to the rclone command, nor to the s3-profile. And the value of 0uQwe in {....} has been the same for each session. If the cache were always being rebuilt because rclone somehow thought the config was different (e.g. because the value in {....} changed), that would make sense - except the value in {....} hasn't changed, so I'm still puzzled as to why the cache-dir is always rebuilt for each session (instead of just reading/using the previous one). To add to this, because I've been running the tests within minutes of each other, I don't think the cache rebuilding is happening due to the (default) value of vfs-cache-max-age.

Any other thoughts/suggestions?

Many thanks,
James

What to try next...

I think we can ignore that line of inquiry then...

The value of vfs-cache-max-age is checked against the time the files were downloaded, which is stored in metadata files (you can see this in vfsMeta - each file in there is a little JSON file).

So if the saved files were old, then they would be immediately expired. You should see this in a DEBUG log (-vv) if it is happening.
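To illustrate that check, here is a rough Python sketch that walks a vfsMeta tree and flags entries older than a given max age. The "ATime" field name is an assumption made for this example - open one of your own vfsMeta JSON files to see exactly what rclone stores:

```python
import json
import pathlib
from datetime import datetime, timedelta, timezone

def items_older_than(meta_root, max_age):
    """Return names of cached items whose recorded time exceeds max_age.

    Assumes each vfsMeta entry is a small JSON file with an "ATime"
    RFC 3339 timestamp - check a real file to confirm the field name.
    """
    now = datetime.now(timezone.utc)
    expired = []
    for path in pathlib.Path(meta_root).rglob("*"):
        if not path.is_file():
            continue
        meta = json.loads(path.read_text())
        # Normalise a trailing "Z" so datetime.fromisoformat accepts it
        stamp = meta["ATime"].replace("Z", "+00:00")
        atime = datetime.fromisoformat(stamp)
        if now - atime > max_age:
            expired.append(path.name)
    return expired
```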

I'd really like this to show non-zero numbers - then it shows the cache is being preserved. If things are being immediately expired, you'll see logs like this

2022/08/04 10:37:26 INFO  : potato.txt: vfs cache: removed cache file as Removing old cache file not in use
2022/08/04 10:37:26 DEBUG : potato.txt: vfs cache: removed metadata from cache as Removing old cache file not in use
2022/08/04 10:37:26 INFO  : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item potato.txt was removed, freed 15 bytes
2022/08/04 10:37:26 INFO  : vfs cache: cleaned: objects 0 (was 1) in use 0, to upload 0, uploading 0, total size 0 (was 15)

So if you could concentrate on looking for logs like that in your tests. You only need to start the mount to see that, you don't need to run your app.
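For reference, those "cleaned" lines can be decoded mechanically: the first number is the object count after the cleaner pass, and "(was N)" is the count before. A small Python sketch (illustrative helpers only, matched against the log lines quoted in this thread):

```python
import re

# Matches e.g. "vfs cache: cleaned: objects 0 (was 37) in use 0, ..."
CLEANED = re.compile(r"vfs cache: cleaned: objects (\d+) \(was (\d+)\)")

def parse_cleaned(line):
    """Return (after, before) object counts, or None if not a cleaned line."""
    m = CLEANED.search(line)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

def cache_survived(line):
    """True if the object count was unchanged by this cleaner pass."""
    counts = parse_cleaned(line)
    return counts is not None and counts[0] == counts[1]
```

So "objects 0 (was 37)" parses to (0, 37): the cache held 37 objects and the cleaner removed them all.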

Hi Nick,

Thanks again for your help, it has been really helpful and I'm always learning new things :slight_smile:

I've done more tests, and I've got a new log file (log#3) to share.

If you look at the log file, the first 256 lines (or thereabouts) are the logs generated by the rclone mount. And at line 128, it says

2022/08/05 07:18:42 INFO : vfs cache: cleaned: objects 0 (was 37) in use 0, to upload 0, uploading 0, total size 0 (was 7.030Mi)

Which hopefully, as you say, suggests that the files are being preserved (from previous session that was within the MaxAge threshold).

Now, if we continue to look at the log, at about line 25970, it says

2022/08/05 07:19:42 INFO : vfs cache: cleaned: objects 32 (was 32) in use 14, to upload 0, uploading 0, total size 93.551Mi (was 93.551Mi)

Which seems to suggest that the cache is being cleared down? (Sorry, I'm still learning how to interpret the information in the log file.) But why would it be clearing it down? And strangely, the object count differs between the first vfs cleaned message (i.e. was 37) and the second (i.e. was 32).

If it is of any help, I've repeated the test (which was used to produce log #3) and uploaded log #4. In log #4, lines 1-264 are generated after doing the rclone mount, and the remaining ones after I try to launch the application. In this iteration, for some reason, it tries to delete all of the cache when it mounts the S3 bucket

2022/08/05 07:36:38 INFO : vfs cache: cleaned: objects 37 (was 37) in use 0, to upload 0, uploading 0, total size 7.023Mi (was 7.023Mi)

And cleans it up again when the application is launched

2022/08/05 07:37:38 INFO : vfs cache: cleaned: objects 37 (was 37) in use 0, to upload 0, uploading 0, total size 7.023Mi (was 7.023Mi)

If you look at the time between the runs (for both log #3 and log #4), I assume they are all within the MaxAge threshold value (i.e. the cache shouldn't have been cleared out because it reached the MaxAge threshold)?

Many thanks,
James

Files are being preserved, but they are then immediately removed. The "objects 0" is the new number of objects and the "(was 37)" is the previous number. So it's gone from 37 to 0.

This is the problem! Increasing the max age of the cache should help.

That says the number of objects stayed the same at 32.

Hi Nick,

Hope you had a good weekend, and thanks again for the suggestion. Just to clarify: when you say max age, are you referring to the --vfs-cache-max-age flag?

In any case, I have tried setting the --dir-cache-time, --vfs-cache-max-age and --vfs-cache-max-size flag in my command. i.e.
c:\rclone\rclone.exe --config c:\rclone\rclone.conf mount sits_s3:mybucketomount S: --vfs-cache-mode full --network-mode --file-perms 0557 --read-only --no-modtime --buffer-size 0 --vfs-fast-fingerprint --cache-dir C:\Users\PhotonUser\AppData\Local\rclone --dir-cache-time 1h --vfs-cache-max-age 999h --vfs-cache-max-size 70G --s3-profile appstream_machine_role

And I am still seeing the same result, i.e. although the cache persists between sessions, it still appears to be rebuilt at every new session (but not within the same session) - despite my testing on new sessions within 10 minutes (i.e. within the vfs-cache-max-age and dir-cache-time thresholds).

Do we know what other conditions would trigger the cache to be rebuilt? E.g. does it use some hash to validate, and perform a rebuild if it doesn't match? For example, imagine a scenario where I zip the cache, transfer it to another machine, unzip it and run the same rclone command there (all within the max-age/dir-cache-time settings). All else being equal, would you expect the application to read from the cache, or the cache to be rebuilt?

Many thanks,
James

Yes

Run with -vv and post the debug log up until the moment it prints INFO : vfs cache: cleaned there should be some clues in there.

What you are trying should work, there is just some bit of finger trouble I think!

Hi Nick,

Thanks as always.

I've run the following command
T:\rclone\rclone.exe --config T:\rclone\rclone-mybucketomount.conf mount s3:mybucketomount s: --vfs-cache-mode full --network-mode --file-perms 0557 --read-only --no-modtime --buffer-size 0 --vfs-fast-fingerprint --cache-dir C:\Users\PhotonUser\AppData\Local\rclone --dir-cache-time 1h --vfs-cache-max-age 999h --vfs-cache-max-size 70G --log-file=C:\temp\rclone-mybucketomount.log -vv --s3-profile appstream_machine_role

And the output requested:

2022/08/10 00:32:38 DEBUG : rclone: Version "v1.59.0" starting with parameters ["T:\rclone\rclone.exe" "--config" "T:\rclone\rclone-mybucketomount.conf" "mount" "s3:mybucketomount" "s:" "--vfs-cache-mode" "full" "--network-mode" "--file-perms" "0557" "--read-only" "--no-modtime" "--buffer-size" "0" "--vfs-fast-fingerprint" "--cache-dir" "C:\Users\PhotonUser\AppData\Local\rclone" "--dir-cache-time" "1h" "--vfs-cache-max-age" "999h" "--vfs-cache-max-size" "70G" "--log-file=C:\temp\rclone-mybucketomount.log" "-vv" "--s3-profile" "appstream_machine_role"]
2022/08/10 00:32:38 DEBUG : Creating backend with remote "s3:mybucketomount"
2022/08/10 00:32:38 DEBUG : Using config file from "T:\rclone\rclone-mybucketomount.conf"
2022/08/10 00:32:38 DEBUG : s3: detected overridden config - adding "{0uQwe}" suffix to name
2022/08/10 00:32:38 DEBUG : fs cache: renaming cache item "s3:mybucketomount" to be canonical "s3{0uQwe}:mybucketomount"
2022/08/10 00:32:38 INFO : S3 bucket mybucketomount: poll-interval is not supported by this remote
2022/08/10 00:32:38 DEBUG : vfs cache: root is "C:\Users\PhotonUser\AppData\Local\rclone"
2022/08/10 00:32:38 DEBUG : vfs cache: data root is "\\?\C:\Users\PhotonUser\AppData\Local\rclone\vfs\s3{0uQwe}\mybucketomount"
2022/08/10 00:32:38 DEBUG : vfs cache: metadata root is "\\?\C:\Users\PhotonUser\AppData\Local\rclone\vfsMeta\s3{0uQwe}\mybucketomount"
2022/08/10 00:32:38 DEBUG : Creating backend with remote "C:/Users/PhotonUser/AppData/Local/rclone/vfs/s3{0uQwe}/mybucketomount"
2022/08/10 00:32:38 DEBUG : fs cache: renaming cache item "C:/Users/PhotonUser/AppData/Local/rclone/vfs/s3{0uQwe}/mybucketomount" to be canonical "//?/C:/Users/PhotonUser/AppData/Local/rclone/vfs/s3{0uQwe}/mybucketomount"
2022/08/10 00:32:38 DEBUG : Creating backend with remote "C:/Users/PhotonUser/AppData/Local/rclone/vfsMeta/s3{0uQwe}/mybucketomount"
2022/08/10 00:32:38 DEBUG : fs cache: renaming cache item "C:/Users/PhotonUser/AppData/Local/rclone/vfsMeta/s3{0uQwe}/mybucketomount" to be canonical "//?/C:/Users/PhotonUser/AppData/Local/rclone/vfsMeta/s3{0uQwe}/mybucketomount"
2022/08/10 00:32:38 DEBUG : Network mode mounting is enabled
2022/08/10 00:32:38 DEBUG : Mounting on "s:" ("\server\s3{0uQwe} mybucketomount-heap")
2022/08/10 00:32:38 DEBUG : S3 bucket mybucketomount: Mounting with options: ["-o" "attr_timeout=1" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "--VolumePrefix=\server\s3{0uQwe} mybucketomount-heap"]
2022/08/10 00:32:38 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2022/08/10 00:32:38 DEBUG : S3 bucket mybucketomount: Init:
2022/08/10 00:32:38 DEBUG : S3 bucket mybucketomount: >Init:
2022/08/10 00:32:38 DEBUG : /: Statfs:
2022/08/10 00:32:38 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Readlink:
2022/08/10 00:32:38 DEBUG : /: >Readlink: linkPath="", errc=-40
2022/08/10 00:32:38 DEBUG : /: Getxattr: name="non-existant-a11ec902d22f4ec49003af15282d3b00"
2022/08/10 00:32:38 DEBUG : /: >Getxattr: errc=-40, value=""
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Opendir:
2022/08/10 00:32:38 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:38 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:38 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Opendir:
2022/08/10 00:32:38 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:38 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:38 DEBUG : /: >Opendir: errc=0, fh=0x1
2022/08/10 00:32:38 DEBUG : /: Releasedir: fh=0x1
2022/08/10 00:32:38 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:38 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:38 DEBUG : /: Opendir:
2022/08/10 00:32:38 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:38 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:38 DEBUG : /: >Opendir: errc=0, fh=0x1
2022/08/10 00:32:38 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:38 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:38 DEBUG : /: Releasedir: fh=0x1
2022/08/10 00:32:38 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Opendir:
2022/08/10 00:32:39 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:39 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:39 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:39 DEBUG : /: Statfs:
2022/08/10 00:32:39 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2022/08/10 00:32:39 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:39 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Opendir:
2022/08/10 00:32:39 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:39 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:39 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:39 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:39 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:39 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:39 DEBUG : /: Opendir:
2022/08/10 00:32:39 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:39 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:39 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:39 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:39 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:42 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:42 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:43 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:43 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:44 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:44 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:45 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:45 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:46 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:46 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:51 DEBUG : /Desktop.ini: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /Desktop.ini: >Getattr: errc=-2
2022/08/10 00:32:54 DEBUG : /Desktop.ini: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /Desktop.ini: >Getattr: errc=-2
2022/08/10 00:32:54 DEBUG : /AutoRun.inf: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /AutoRun.inf: >Getattr: errc=-2
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Opendir:
2022/08/10 00:32:54 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:54 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:54 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:54 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:54 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Opendir:
2022/08/10 00:32:54 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:54 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:54 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:54 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:54 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Opendir:
2022/08/10 00:32:54 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:54 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:54 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:54 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:54 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Opendir:
2022/08/10 00:32:54 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:54 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:54 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:54 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:54 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2022/08/10 00:32:54 DEBUG : /: >Getattr: errc=0
2022/08/10 00:32:54 DEBUG : /: Opendir:
2022/08/10 00:32:54 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2022/08/10 00:32:54 DEBUG : /: >OpenFile: fd=/ (r), err=
2022/08/10 00:32:54 DEBUG : /: >Opendir: errc=0, fh=0x0
2022/08/10 00:32:54 DEBUG : /: Releasedir: fh=0x0
2022/08/10 00:32:54 DEBUG : /: >Releasedir: errc=0
2022/08/10 00:33:38 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

The output above is generated after I ran the rclone mount command, but not accessed the application. If you want to see the full log, they can be found here - this includes the logs captured when I try to launch the application (which reads from the S drive - where the S3 bucket is mounted to).

Many thanks,
James

And just to add, I've repeated the exact same test (within 10 minutes) - just so that I'm sure the cache stays within the --dir-cache-time 1h and --vfs-cache-max-age 999h boundaries. Comparing the logs, they were very similar, especially when it comes to the following message

INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

I'm still puzzled as to why it would be saying (was 0) upon mounting the S3 bucket despite the cache being there.

And if you look at the full log (as per the previous post), line 26826 (which would have been generated while the application was still being launched) says

2022/08/10 00:34:38 INFO : vfs cache: cleaned: objects 36 (was 36) in use 15, to upload 0, uploading 0, total size 94.971Mi (was 94.971Mi)

Which suggests that it found 36 objects, but the 36 objects were being cleared down?

Cheers,
James

Me too! If it says 0 then rclone didn't find the cache :frowning_face:

That log says it found 36 objects and cleaned none of them, since both current=36 and old=36