It's been a while since I've retested things, and I was curious if other folks had different results.
I was doing some non-cached testing with various read chunk sizes and am curious what results other folks got:
rclone mount dcrypt-tv: /home/felix/test -vvv --vfs-read-chunk-size 16M
rclone mount dcrypt-tv: /home/felix/test -vvv --vfs-read-chunk-size 256M
rclone mount dcrypt-tv: /home/felix/test -vvv --vfs-read-chunk-size 128M
rclone mount dcrypt-tv: /home/felix/test -vvv --vfs-read-chunk-size 1M
128M/256M consistently gave me 2-3 second mediainfo times, while 1M/16M almost doubled that to 5-6 seconds per file. The file itself does matter, so as long as you test with the same file over and over you're fine, though your results might not exactly match mine.
Cache mode gives me faster results as expected if the file is already there.
1M - real 0m8.783s
8M - real 0m4.580s
16M - real 0m3.640s
32M - real 0m2.984s
64M - real 0m3.529s
128M - real 0m3.804s
256M - real 0m3.183s
32M generally seemed like a good sweet spot for me with Dropbox, so I was trying to see some other results.
I was just using time mediainfo on the same test file, unmounting after each test, and retesting a few times to get a general average.
I'm just running a mount and then cd'ing into a directory and timing the mediainfo on a media item.
So I basically have one terminal for a mount command and another I paste in something like:
cd /home/felix/test/TV
cd "TESTSHOW"
time mediainfo 'TESTFILE.mkv'
cd
fusermount -uz /home/felix/test
and just repeat, changing the mount command to each set of settings. I try to run through it a few times on each setting to get a baseline (not perfect, but a good general idea of the performance).
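If you want to automate that loop, here's a minimal sketch of the same procedure as a shell script; the remote, paths, and test file are the ones from my example above, and the sleep is just a crude wait for the mount to come up:

#!/bin/bash
# Rough benchmark loop: mount with each chunk size, time mediainfo on the
# same test file, then unmount before the next run.
for size in 1M 16M 32M 64M 128M 256M; do
    rclone mount dcrypt-tv: /home/felix/test --vfs-read-chunk-size "$size" --daemon
    sleep 2  # crude wait for the mount to be ready
    echo "chunk size: $size"
    time mediainfo '/home/felix/test/TV/TESTSHOW/TESTFILE.mkv'
    fusermount -uz /home/felix/test
done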
I wouldn't think so; I'm just trying to get an idea from other folks doing a similar test. I'd pick a normal file in your library, test with that, and see how the results match up with different tweaks.
Plex generally does a mediainfo/ffprobe when it analyzes a file, and I was just seeing if I'd make any other tuning changes on my mounts to help that process.
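If you want to approximate the Plex side of that more directly, you can time an ffprobe of the same file; the flags here are just generic ones, as I don't know exactly what Plex passes:

# Generic probe, not necessarily the exact flags Plex uses.
time ffprobe -v error -show_format -show_streams '/home/felix/test/TV/TESTSHOW/TESTFILE.mkv' > /dev/null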
1M means much less 'wasted' download, but it's much slower. The default of 128M still seems good, but you get a little waste at the end since you tend to read ahead more before the file closes out. It's more problematic on the Google side, as they seem to have some algorithm based on the number of opens, how much data is read per file, and some aggregation of all of that; without any formal documentation it's just awful to figure out, and the results people post are all over the board.
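One thing worth remembering when testing: the value you set is only the starting chunk size. rclone doubles each subsequent chunk as it reads, and --vfs-read-chunk-size-limit caps that growth, so you can start small for fast first reads and still bound the waste on long sequential reads, e.g.:

rclone mount dcrypt-tv: /home/felix/test -vvv \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 256M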
When I redid my library, I butchered some stuff and had to redo a lot; without a download quota on Dropbox, things were easy to fix, re-analyze, and get right at the end. I can't imagine how many quota days I would have blown through on Google, but that's a different problem altogether.
Did some quick testing for 16M, 256M, and 1M, in that order, with two runs each.
I made sure to kill the mount each time:
rclone mount wasabi01:zork /home/user01/rclone/mountpoints/zork --read-only --allow-other -v --vfs-read-chunk-size=xxx
Here is the summary of the two runs, ordered by chunk size:
--vfs-read-chunk-size=1M
real 0m1.891s
real 0m2.161s
--vfs-read-chunk-size=16M
real 0m1.900s
real 0m1.665s
--vfs-read-chunk-size=256M
real 0m2.656s
real 0m3.842s
There's a little noise in there, as it also captures the cd into the directory and a poll, but overall that's an example of how much data it pulls for a file. That's a 4.6GB file I'm checking, and while I could only snag a quick screenshot, it downloaded a healthy bit of data.
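If you want an actual number instead of eyeballing a screenshot, one option is to start the mount with the remote control API enabled and ask it for transfer stats afterwards; this is just a sketch, assuming running with --rc is acceptable on your mount:

# Start the mount with the remote control API enabled.
rclone mount wasabi01:zork /home/user01/rclone/mountpoints/zork \
  --read-only --allow-other -v --rc --vfs-read-chunk-size=16M
# In another terminal, after the mediainfo run, check bytes transferred.
rclone rc core/stats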
In my experience (I use MediaInfo embedded into Windows Explorer all day long), it takes significantly longer if a file has a lot of subtitles. This is usually the case with shows on Netflix and Amazon.
Yep, we're probably getting a little off topic; I was curious to see different chunk sizes and their influence on a 'normal' media file that resides in one's library.
S3 is a ton faster, as I expected, although the smaller calls seem better there. It's a way more expensive service compared to Google/Dropbox, so I'd imagine it should be faster.