Hello, I am aiming to mount Google Drive with the cache backend for Plex and Nextcloud.
I think tmp_wait_time works very well with the cache feature — the delayed upload doesn't feel slow at all. But sometimes uploads fail when I drag multiple files into Nextcloud. I'm wondering which part of my setup I should fix. Or, if you have recommended cache upload settings, please share them.
I think you quoted the wrong thing?
I assume you meant
But your point is still valid here, and I agree that value is too low for typical use-cases. Unless your bandwidth is painfully slow, maybe...
I'm not even sure what 1M indicates there. It's a duration, so assuming it works at all, it must mean a month? I've never tried using that suffix for durations before, but that seems plausible enough.
That is perfectly fine. I assume Ani just quoted the wrong line here.
No, it is associated with the download. What this effectively means is that the cache will be downloading (and caching) the file in 1MB chunks. So as Ani says, if you download 1GB, the cache will have to ask Google for 1MB about 1000 times. That is very inefficient, both in terms of bandwidth utilization and API quota.
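To put numbers on that, here's the rough arithmetic (just illustrative shell math, not anything rclone prints):

```shell
# Illustrative only: ranged GET requests needed to stream a 1GB file
# at different cache chunk sizes (ceiling division).
FILE_MB=1024

for CHUNK_MB in 1 10 32; do
  REQUESTS=$(( (FILE_MB + CHUNK_MB - 1) / CHUNK_MB ))
  echo "${CHUNK_MB}M chunks -> ${REQUESTS} requests for a 1GB file"
done
```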
I would say the ideal chunk-size is about as much data as you can download in maybe 1-3 seconds. For example, on my 162Mbit connection (around 18MB/sec in practice) I might use something like 20, 30, or 40MB.
A higher chunk size is more efficient (better bandwidth utilization, fewer API calls).
A lower chunk size makes caching more granular, and it also reduces the minimum time before you can start reading a file (because the first chunk must download fully before it can be delivered).
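My 1-3 second rule of thumb as shell math (the 18MB/s figure is just my own line speed — plug in yours):

```shell
# Rule of thumb (my suggestion, not an official formula):
# chunk size ~= sustained download speed in MB/s x 1 to 3 seconds.
SPEED_MB_PER_SEC=18   # ~162Mbit in practice; substitute your own speed

LOW=$(( SPEED_MB_PER_SEC * 1 ))
HIGH=$(( SPEED_MB_PER_SEC * 3 ))
echo "reasonable chunk-size range: ${LOW}M to ${HIGH}M"
```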
If you have absolutely no idea what I'm talking about, just set 10M. That is a good middle-ground for most people.
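For reference, on a cache-backend mount that setting would look something like this (the remote name `gcache:` and the mount point are placeholders for your own):

```shell
# Hypothetical mount using the middle-ground chunk size discussed above.
# "gcache:" is a placeholder cache remote wrapping your Google Drive remote.
rclone mount gcache: /mnt/media \
  --cache-chunk-size 10M \
  --allow-other --daemon
```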
Since you mentioned upload though...
This is a similar setting that relates to upload. Larger is more efficient (effectively faster), but the downside is that each active transfer can use that much memory. Since you are using the default of 4 transfers, rclone may use up to 512M x 4 = 2GB just for upload buffers. That may be a bit excessive... but it isn't "wrong" as long as you have that much memory to spare.
As a general guide:
- 64M - IMO the best compromise between speed and memory
- 128M - close to best performance
- 256M - the highest value that gave any significant improvement in my testing, but not much faster than 128M
- anything above 256M - overkill; probably no worthwhile or noticeable difference. Don't you have anything better to use your RAM for?
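The memory cost of each tier is just chunk size times concurrent transfers (assuming the default of 4 transfers):

```shell
# Worst-case upload buffer memory = chunk size x simultaneous transfers.
TRANSFERS=4   # rclone's default --transfers value

for CHUNK_MB in 64 128 256 512; do
  echo "${CHUNK_MB}M x ${TRANSFERS} transfers = $(( CHUNK_MB * TRANSFERS ))M peak"
done
```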
How much this matters does vary somewhat with your max bandwidth though. Someone on a gigabit connection would probably get more out of 256M or 512M than I do...
One last point - I'd just leave this (the info-age flag) at its default when using the cache backend, because the cache backend is already caching that data for you. There's no point in trying to cache the same data twice in two places. If the VFS layer needs to update directory data, it will just ask the cache backend for the answer. Any number less than 1M here would be meaningless (not that I'm suggesting you make it higher... rather the opposite).
It probably won't blow anything up if you leave it like this, but you are just creating another potential point of failure. Let the cache backend handle it - as long as you use it, that is.
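Putting the whole thread together, a starting point might look like this. The remote name, mount point, tmp-upload path, and wait time are all placeholders, and the values are the middle-ground suggestions from above, not gospel:

```shell
# Sketch of a cache-backend mount combining this thread's suggestions.
# --cache-chunk-size 10M   : middle-ground download granularity
# --cache-tmp-wait-time 1h : a delayed-upload window like the OP's
#                            (pick whatever delay suits you)
# --drive-chunk-size 64M   : upload speed/memory compromise
# --dir-cache-time is deliberately left at its default, since the cache
# backend already caches directory info.
rclone mount gcache: /mnt/media \
  --cache-chunk-size 10M \
  --cache-tmp-upload-path /tmp/rclone-upload \
  --cache-tmp-wait-time 1h \
  --drive-chunk-size 64M \
  --allow-other --daemon
```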