Rclone Cache much slower than no cache download?

@Animosity022 mentioned that he uses a 32GB cache since he has that much RAM, so I presume that if the cache sits on an HDD, it's also cached in RAM and would likely be read from there first? Do you think moving .cache/rclone to an SSD would increase performance, @seuffert?

My local chunk drive is an SSD, but for giggles I turned the overall size down to 8GB, since I don't care as much what I keep on disk, and I am testing setting it to /dev/shm.

I added:

--cache-chunk-path /dev/shm \

and reduced my chunk total size to 10G, since I only have 16G in /dev/shm.
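For context, a full invocation might look something like this. This is only a sketch: the remote name `gdrive:` and mount point are placeholders, and the chunk-size/worker values are illustrative, not recommendations.

```shell
# Hypothetical example; remote name and mount point are placeholders.
rclone mount gdrive: /mnt/media \
  --cache-chunk-path /dev/shm \
  --cache-chunk-total-size 10G \
  --cache-chunk-size 10M \
  --cache-workers 4
```

With --cache-chunk-path on tmpfs, the on-disk chunk store itself lives in RAM, which is why the total-size cap needs to stay under the tmpfs size.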

I'm using an SSD for chunks anyway, so I noticed only a negligible difference in start-up time.

With it set to /dev/shm, though, wouldn't that mean it uses double the configured cache size in RAM? I wonder if there might be a difference between having it on an SSD vs. an HDD. Do you use 10GB of cache now instead of the previous 32GB?

I was more curious if it was faster, but it’s pretty small.

Yes, you’d use memory in /dev/shm as I have 8GB configured:

tmpfs 16G 8.0G 7.7G 52% /dev/shm
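If you want to check how much headroom tmpfs has before pointing the chunk path at it, df shows it directly (the output above is that command's output on my box):

```shell
# Show tmpfs capacity and current usage for /dev/shm (Linux).
df -h /dev/shm
```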

It might be beneficial if you have a slower non-SSD drive to just keep stuff there, but my goal is not a long-term local cache. In my use case, people just watch different stuff, so I wouldn't get much benefit from keeping it local, and I have no monthly internet cap and a nice gigabit pipe.

I have 377GB of RAM in this machine, so setting a higher cache size wouldn't be a problem. I'm not sure how rclone writes the cache: maybe to disk first and then to RAM, or maybe vice versa. @ncw, do you know if the cache is written to RAM first and read exclusively from there?

Start from the docs:

By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.

This transient data is evicted as soon as it is read and the number of chunks stored doesn’t exceed the number of workers. However, depending on other settings like cache-chunk-size and cache-workers this footprint can increase if there are parallel streams too (multiple files being read at the same time).

If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.
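A rough back-of-the-envelope for the transient RAM footprint the docs describe: worst case is roughly chunk size times workers times concurrent streams. The numbers below are assumptions for illustration, not rclone defaults.

```shell
# Rough worst-case transient RAM use: chunk size * workers * parallel streams.
# All three values here are assumed for the example.
chunk_mb=10
workers=4
streams=3
echo "$(( chunk_mb * workers * streams ))MB"
```

So even a modest chunk size can add up quickly once several people stream at once.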

I must not have seen that in the docs. My local cache drive is sometimes under full I/O load, so if caching depends on being able to write from RAM to disk to continue, moving to a faster drive might help.