Hi, just wondering if this functionality exists today via some setting that's not obvious to me?
Thanks
As a slight addendum: the way I read this, if I have an rclone cache remote pointing at a remote rclone serve, this would enable multiple concurrent transfers via --cache-workers?
Thanks
Cache workers does use multiple processes to grab files, but I don't find it any faster than just using:
--vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G
It'll grab bigger chunks, and even bigger ones as it keeps reading the same file, so I can pull my maximum line speed on a mount:
felix@gemini:/data$ rsync --progress /gmedia/Radarr_Movies/Unsane\ \(2018\)/Unsane\ \(2018\).mkv .
Unsane (2018).mkv
1,696,366,592 6% 90.30MB/s 0:04:22
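The reason those two flags help is that rclone starts each file read with small ranged requests and doubles the chunk size after every chunk until it hits --vfs-read-chunk-size-limit, so a long sequential read ends up streaming in large 2G ranges. A minimal Python sketch of that growth pattern (a model of the documented behaviour, not rclone's actual code):

```python
def chunk_sizes(total_bytes, initial=64 * 1024**2, limit=2 * 1024**3):
    """Model rclone's chunked reading: each successive range request
    doubles in size until it reaches --vfs-read-chunk-size-limit."""
    sizes = []
    offset, size = 0, initial
    while offset < total_bytes:
        take = min(size, total_bytes - offset)  # last chunk may be short
        sizes.append(take)
        offset += take
        size = min(size * 2, limit)  # double, capped at the limit
    return sizes

# Chunk sizes (in MiB) for a hypothetical ~25 GiB file:
print([s // 1024**2 for s in chunk_sizes(25 * 1024**3)])
```

So with 64M/2G, only the first few requests are small; after five chunks every request is the full 2G, which is why sequential throughput ramps up quickly.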
Wow, doing it via a mount to an rclone serve certainly gave the desired performance bump, thanks!
Addendum: it started to slow down to roughly the same performance as before. Any other parameters I should set on rclone serve?