I'm using rclone to farm Chia on Google Drive and on Dropbox. On Google Drive I have zero issues because response times are quick, but on Dropbox the response times are not quick enough, and I need to know whether changing my rclone mount flags would help at all. I'm using this setup right now:
rclone.exe mount dropbox: F: --cache-dir "C:\rclone-vfs-cache" --multi-thread-streams 1024 --multi-thread-cutoff 128M --network-mode --vfs-cache-mode full --vfs-cache-max-size 100G --vfs-cache-max-age 240000h --vfs-read-chunk-size-limit off --buffer-size 0K --vfs-read-chunk-size 64K --vfs-read-wait 0ms -v
I need the cache not to expire and I have enough disk space, which is why I set --vfs-cache-max-size 100G and --vfs-cache-max-age 240000h.
--vfs-read-chunk-size 64K is set because Chia reads plots in chunks of that size. I'm also farming on a remote harvester located near the Dropbox servers to reduce latency.
I need to speed this up a little; 30% faster would be enough. Is there any flag I can modify, add, or remove to get better response times (i.e. read the Chia plots faster)?
It would be very helpful, as I have 200 plots on my Dropbox and I have sold my drives. Thanks.
There are various IPs, but the latency is mainly around 0-10 ms, which I think is fine... I do think the rclone mount config could be improved; in my experience, changing flags has a huge effect on response times. That's why I'm asking here...
A plot is approximately 100 GB, and the Chia app works with one plot at a time?
Will the Chia app have to read the entire plot, every byte of it?
For a given plot, once the app has requested a chunk from rclone, will the app need to access that chunk again?
Can I assume the answer is no, given that you use --vfs-cache-max-age 240000h?
When my Emby media server streams from the cloud, I:
use --read-only
do not use any VFS file cache flags, in effect --vfs-cache-mode off
use --buffer-size
That way rclone's overhead from dealing with the local file system is reduced, and it works great on resource-limited devices such as a Raspberry Pi Zero. (A sketch of that kind of mount follows below.)
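For reference, a minimal sketch of that style of mount, reusing the dropbox: remote and the F: drive letter from the command above; the 64M buffer is only an illustrative value, not a tested recommendation:

    rclone.exe mount dropbox: F: --read-only --buffer-size 64M -v

With no VFS cache, every read goes straight to Dropbox, so this trades repeat-read caching for lower local-disk overhead; whether that wins for Chia lookups would need measuring.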
2022-02-16T08:38:51.103 chia.plotting.check_plots : WARNING Looking up qualities took: 12348 ms. This should be below 5 seconds to minimize risk of losing rewards.
2022-02-16T08:39:33.655 chia.plotting.check_plots : WARNING Finding proof took: 42548 ms. This should be below 15 seconds to minimize risk of losing rewards.
I think the VFS cache is essential: there are some parts of the plot file (pointers) that need to be read on every challenge (every 9 seconds), so the VFS cache helps a lot there... But anyway, thank you very much for your answer; I hope we can close this topic with a solution!
How many reads need to be done per challenge? The number of reads that have to go to the Dropbox servers will be the limiting factor. There is a latency of, say, 1 s to read a block of data from Dropbox; compared to a hard disk at 10 ms, that is a lot of latency.
I think you are going to have to be very scientific and tweak the parameters one at a time to see if you get an improvement or not.
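One way to do that comparison, assuming I'm remembering the chia CLI correctly (the 128K chunk size and the challenge count below are only illustrative, not recommendations):

    rem remount with exactly one flag changed, e.g. a different read chunk size
    rclone.exe mount dropbox: F: --cache-dir "C:\rclone-vfs-cache" --vfs-cache-mode full --vfs-read-chunk-size 128K -v
    rem re-run the lookups and compare the "Looking up qualities took" / "Finding proof took" times it reports
    chia plots check -n 30

Clearing C:\rclone-vfs-cache between runs keeps previously cached chunks from skewing the comparison.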
It's 8 reads for the filter check, and 64 reads if the filter is passed (probability 1/512). Each seek is 64K, which is why I set --vfs-read-chunk-size 64K; I don't know whether that's a good value.
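As a rough sanity check with those numbers (taking very roughly 0.5-1 s per request that actually reaches Dropbox, as suggested above): 8 reads is on the order of 4-8 s, and 64 reads is on the order of 30-60 s, which is the same ballpark as the 12,348 ms and 42,548 ms warnings in the log earlier in the thread. That points at per-request latency as the thing to attack, either by serving the repeatedly-read parts from the local cache or by lowering the round-trip time per request, rather than raw throughput.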