Rclone mount cache and fragmentation

The way the rclone mount cache is currently implemented suffers heavily from fragmentation. By writing out files in a sparse manner, it does a good job of not wasting space (or bandwidth) on unused portions of a file. However, it means the OS cannot lay the files out on disk in an optimal manner. This becomes a bigger problem when one is caching large files and also deleting things from the cache. At least on Linux, deleting heavily fragmented files can take a significant amount of time. My observation is that when this happens, the whole rclone mount freezes (blocked in unlink).

I honestly think a wiser course of action would be to store the individual cached blocks as independent files, with a suffix that defines which block each one is. If one is using 16MB cache blocks, then for instance file-0 would be the first 16MB, file-1 would be the second, and so on (one could throw in an extra magic string to try to avoid conflicts, or perhaps even make it definable on the command line). One would also need to store metadata in the cache dir making clear what the block size is (while it could perhaps be inferred, it couldn't be if all one has is the last block), and one does not want to reuse the cache if the user changes the block size on the command line, as the offsets would no longer be correct.

This provides two advantages:

  1. Fragmentation will still exist (between the different blocks), but files should no longer be fragmented within a block, making deletions block for significantly less time.

  2. The ability to evict unused blocks without evicting the whole file.

On the flip side, it will possibly cause significantly more opens/closes, as one can't just keep a single file open: for every read() the filesystem will have to determine the offset, open (or create) the proper cached block file, and read (or write) to it as appropriate.
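To make that extra bookkeeping concrete, here is a minimal sketch of what the per-block read path could look like. The 16MB block size comes from the example above, but the file naming, function names, and error handling are just illustrative assumptions, not anything rclone actually implements:

package cache

import (
    "fmt"
    "os"
    "path/filepath"
)

// blockSize is the illustrative 16MB cache block size discussed above.
const blockSize = 16 << 20

// blockPath maps an absolute file offset to the per-block cache file
// (file-0, file-1, ...) and the offset within that block.
func blockPath(cacheDir, name string, off int64) (path string, blockOff int64) {
    idx := off / blockSize
    return filepath.Join(cacheDir, fmt.Sprintf("%s-%d", name, idx)), off % blockSize
}

// readAt shows the extra per-read work: locate the block file for the
// requested offset, open it, read from the in-block offset, and close it.
// A real implementation would also handle reads that span a block boundary
// and fall back to the remote when the block file does not exist yet.
func readAt(cacheDir, name string, p []byte, off int64) (int, error) {
    path, blockOff := blockPath(cacheDir, name, off)
    f, err := os.Open(path)
    if err != nil {
        return 0, err // block not cached yet
    }
    defer f.Close()
    return f.ReadAt(p, blockOff)
}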

It might also make the write cache harder to implement, as it's no longer a single file that has to be synced to the remote.

I'm wondering if most people use it for read caching or write caching or both?

Read caching, as the primary use case for me was streaming.

I've never noticed an issue reading, as these files are very transient and don't last long enough to warrant a defrag.

The design choice of going with sparse files over chunks had much discussion and landed on sparse files, so I'm not sure that is going to change.

It's not about needing a defrag, and it does impact streaming. If I have a 50GB file that is heavily fragmented and has to be deleted before another file can be cached, and deleting that 50GB file blocks for 60-120s (yes, I've seen this), that impacts streaming. I could store my cache on an SSD and things would improve, but I'd also wear out my SSD.

I've seen/witnessed zero impact, as I use an SSD for my cache and all I do is stream.

If you have a slower/spinning disk, sure.

I use an SSD, and the lifetime of these things is far longer than you'd imagine. I've used the same SSD in my Linux box for around four years as my root drive, which hosts Plex and constantly writes transcodes, small files, etc.

I picked up a 1TB back in July 2020 as my cache disk for Plex and haven't looked back, as it's a nice and easy sub-$100 investment.

So in my use case with an SSD, there is no impact on streaming, and in 10 years I'll replace it :slight_smile:


That drive you listed (250GB) has a rated write lifespan of 75TB, i.e. you can rewrite the whole drive 300 times before it's out of warranty. If you write 100GB a day to it, it will last 2 years. If you're only writing 20GB a day, yes, it will last 10 years, but then spinning discs would probably be a sufficient cache as well: if I were only writing 20GB to the cache, a 1-2TB spinning-disc cache drive wouldn't be deleting things often either.
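Just to spell out where those lifespan numbers come from, a quick sketch (the 75TB endurance rating and the daily write volumes are the ones from the paragraph above):

package main

import "fmt"

func main() {
    const enduranceTB = 75.0 // rated write lifespan of the 250GB drive
    for _, gbPerDay := range []float64{100, 20} {
        days := enduranceTB * 1000 / gbPerDay
        fmt.Printf("%3.0f GB/day -> %.0f days (~%.1f years)\n", gbPerDay, days, days/365)
    }
}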

With that said, I'll be honest: I don't really get your attitude in many posts; it's like the Apple reaction of "you're holding it wrong".

Here are some stats on my 1TB drive:

root@gemini:~# smartctl /dev/sda --all | grep "Sector Size"
Sector Size:      512 bytes logical/physical
root@gemini:~#  smartctl /dev/sda --all | grep Total_LBAs_Written
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       152272722450

So that's 70TB if my rough math is correct, which gives me ~4.2 years if my use case continues.
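For anyone following along, a quick way to check that figure, assuming 512-byte LBAs as reported by the sector-size line above:

package main

import "fmt"

func main() {
    const lbasWritten = 152272722450 // Total_LBAs_Written from smartctl
    const sectorSize = 512           // bytes per LBA
    bytes := float64(lbasWritten) * sectorSize
    fmt.Printf("%.1f TiB written\n", bytes/(1<<40))
    // prints roughly 70.9 TiB, so the rough math of "70TB" holds
}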

I'm legitimately just having a discussion and asking questions / presenting a different viewpoint. If there is a specific word I'm using that implies attitude, or any way I am personally attacking or demeaning anything you are saying, please be specific and I can:

  • remove it
  • apologize

By asking questions, we all learn, so I'm presenting my perception and viewpoint to get more information and help myself learn as well.

In this case, you've made me research my 1TB drive and calculate its lifespan, so I have a general idea of the wear I'm putting on it.


My media server (Jellyfin), running on Windows Server 2019 Hyper-V edition, has a ReFS filesystem set up as RAID5, with checksum verification on reads/writes, much like ZFS on Linux.
Three slow 5200rpm mechanical drives, no SSD.

It can easily stream multiple 4K streams, never a problem with the vfs cache full.
The mount is read-only, so I never write to it.

@spotter - FYI, check this out on read-only files for cache-mode full, as I made an incorrect statement earlier:

OK, so I'm going to experiment with a tmpfs as well. I can dedicate a few GB of RAM to it.
