In my testing, I can get a 60GB 4K movie (or any size movie, for that matter) to start in roughly 5-6 seconds.
The change I made to my config when I'm using the cache is that I use memory for everything and purposely don't write anything to disk:
ExecStart=/usr/bin/rclone mount gmedia: /GD \
--dir-cache-time 72h \
--cache-chunk-path /dev/shm \
--cache-chunk-size 10M \
--cache-info-age 72h \
--cache-workers 6 \
--buffer-size 0M \
--umask 002 \
I have 32GB of memory on my box, so my /dev/shm is 16GB, and I cap the cache at 10GB max. If you point --cache-chunk-path at a disk instead, start times may be slower depending on how fast or slow that disk is.
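If you want to check how big your /dev/shm tmpfs actually is before pointing the cache at it, a quick sketch (the 12G resize value is just an example, not from my setup):

```shell
# tmpfs defaults to half of physical RAM; confirm what you actually have:
df -h /dev/shm

# It can be resized without a reboot if the default is too small (example size):
# sudo mount -o remount,size=12G /dev/shm
```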
For #2, the uploads only use 1 worker, so they are pretty slow with the Plex integration. You can either wait for them to finish, remove the Plex integration and use a higher default --cache-workers, or use something like mergerfs to keep new files locally and upload them later with your own rclone move command.
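If you go the mergerfs route, the later upload can be as simple as a scheduled rclone move. A sketch, where /local/media is an assumed staging path and the flag values are examples to tune, not my settings:

```shell
#!/bin/sh
# Move files older than 6 hours from the local staging dir to the remote,
# so files Sonarr/Radarr are still working on aren't uploaded mid-write.
# /local/media is an assumed path; adjust remote and paths to your setup.
/usr/bin/rclone move /local/media gcrypt: \
  --min-age 6h \
  --transfers 4 \
  --log-file /var/log/rclone-move.log
```

Run it from cron (e.g. nightly) so uploads happen off-peak instead of fighting Plex for bandwidth.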
If you are going for the quickest start times and don't mind the mergerfs/unionfs + rclone move scenario, I found that the --vfs-read-chunk-size mount starts any movie in ~1-2 seconds for me.
So for that, I remove the cache altogether and just mount the encrypted filesystem:
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
--dir-cache-time 96h \
--vfs-cache-max-age 48h \
--vfs-read-chunk-size 10M \
--vfs-read-chunk-size-limit 100M \
--buffer-size 1G \
--umask 002 \
The downside with that command is that each open file can buffer up to 1G, so you could run out of memory; depending on your system, you may want to tune --buffer-size down. I've got a pretty good grasp of how my Plex/Sonarr/Radarr setup behaves, so that number doesn't bother me.
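To ballpark the worst case yourself, multiply --buffer-size by the number of files you expect open at once; the 8 concurrent streams/scans here is an assumed number, not something from my setup:

```shell
# Worst-case read-buffer memory for the vfs mount above.
buffer_mb=1024   # --buffer-size 1G
open_files=8     # assumed concurrent Plex streams + library scans
echo "$(( buffer_mb * open_files )) MB worst case"
```

With those numbers that's 8GB, which is why a 1G buffer is only comfortable on a box with plenty of RAM headroom.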