Maybe I should mention the cloud service I use: Google Drive. Previously, rclone performance was not practical for watching movies; it was stuttering and rebuffering all the time, and I couldn't even watch a movie properly for one minute. It was that bad. Then when I tried adding --max-read-ahead 200M, watching movies became practical.
I'm wondering, are there any other commands that might be useful? Not that I have problems, though.
When you are having problems with buffering, try increasing --buffer-size (default 16M). From my experience --max-read-ahead does not solve those problems, as the 128K default already matches most kernels' maximum value.
My encrypted Google Drive mount uses the following arguments: rclone mount -v --buffer-size 32M --read-only --allow-other --gid 33 --umask 0027 --dir-cache-time 300h --poll-interval 5m gdrive-crypt: /mnt/gdrive
--read-only --allow-other --gid 33 --umask 0027 are just for FUSE access control. --buffer-size 32M doubles the receive buffer to reduce buffering issues. With --dir-cache-time 300h --poll-interval 5m, instead of using the new cache remote I'm just "caching" metadata for a long time.
After a while, the cache mount wasn't performing very well.
So I used this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M
and I still experienced rebuffering (for an 8 GB movie on a network with 50 Mbps download speed).
So now I use this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M --buffer-size 32M --dir-cache-time 300h --poll-interval 5m
And you are not having any bans? You are using Gdrive, right? How big is your library?
To be honest this sounds good; I will give it a try and see how it works.
@Syaiful_Nizam_Yahya
On what kind of device is rclone running?
I once tried to run it on a Raspberry Pi and had similar buffering issues because the CPU was not fast enough to handle TLS and crypt.
@neik
My Gdrive library is currently pretty large (>100TB). Normally I'm not getting any bans from Google.
Recently I ran into bans when trying to index (read a few blocks from the beginning of) a large number of files.
I'm currently working with ncw on https://github.com/ncw/rclone/issues/1825 to fix these kinds of bans.
Ok, then I'm pretty far away from the size of your library.
Given that I do not index any files (besides the new stuff I upload), I guess I should be fine ban-wise.
I use Kodi, not Plex. AFAIK, Kodi does not have ban issues when scraping movies. As for my library, it's about 20TB.
I still experience rebuffering when using this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M --buffer-size 32M --dir-cache-time 300h --poll-interval 5m
My unRAID server does the decryption, not my media center. I guess this rules out a lack of CPU resources.
I'm having the same issues with buffering. Should I be using a 64M buffer or a 1G buffer? I have RAM to spare, but it's unclear how I can get 4K remuxes to stream better.
Looking at your mount, do you do everything on one box?
I'm using VFS, and it's annoying that when you have a folder of 2000 movies, adding a new movie to that folder completely ejects the VFS directory cache. This empties the VFS cache and requires a "rebuild", so to speak, which slows everything down (Radarr/Sonarr/etc.).
I have multiple boxes, and I'm trying to find a way to keep the directory listing:
Always fresh
Speedy
To mimic a "live" file system as closely as possible.
My mount command is only used for reading. Uploading is done using a move script to a separate upload folder.
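For anyone curious what such a move script can look like, here is a minimal sketch of a scheduled upload job. The staging path, remote name, and log file are my placeholders, not the actual values from this setup:

```shell
#!/bin/sh
# Hypothetical cron job: move finished files from a local staging
# folder to the remote, leaving the read-only mount untouched.
# --min-age 15m skips files modified in the last 15 minutes, so
# files still being written are not uploaded half-finished.
rclone move /mnt/user/uploads gdrive-crypt: \
  --min-age 15m \
  --log-file /var/log/rclone-move.log \
  --log-level INFO
```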
Sorting is done on a separate machine to ensure everything is mapped to the correct IMDb ID. Since I can't use Radarr or Sonarr for automatic downloads, I wrote my own scripts to sort all movies using my preferred scheme. After my sorting script finishes, I use rclone rc vfs/refresh recursive=true dir=movies to refresh the cache with --fast-list support. This is substantially faster for large folder trees.
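One detail worth spelling out: the rc commands only work if the mount itself was started with the remote control API enabled. A sketch, with remote name and mount point as placeholders:

```shell
# The mount must expose the remote control API for rc calls to reach it.
rclone mount gdrive-crypt: /mnt/gdrive --rc &

# After sorting finishes, prime the directory cache recursively;
# --fast-list reduces the number of API calls for large trees.
rclone rc vfs/refresh recursive=true dir=movies --fast-list
```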
You also made me wonder whether it is really necessary to purge the whole tree cache when only one item changes. It should be enough to invalidate only the changed entry and force a refresh of its parent dir, while keeping the cache of all other entries in that parent alive. This would reduce the required API calls substantially.
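As a partial workaround today, the remote control can already drop a single directory from the cache instead of the whole tree. A hedged sketch, assuming the mount runs with --rc and using a made-up directory name:

```shell
# Forget only the directory that changed; sibling entries stay cached.
rclone rc vfs/forget dir=movies/Some.New.Movie.2018

# Reading the directory again repopulates just that cache entry.
ls /mnt/gdrive/movies/Some.New.Movie.2018 > /dev/null
```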
I think @B4dM4n is on to something. There definitely should be a way to evict only the relevant data from cache, not the entire tree.
@Animosity022 I'm not entirely sure that breaking things up would really help for movies. With TV shows it's possible (have an "ended" folder with all the shows that are done).
For movies, the folder is constantly being updated, and every time a new movie is added the entire movie folder is evicted. Sure, I could drop things into A-D, E-M, etc. subfolders, but that's complicated.
It seems that even with the VFS or cache backend, rclone still hasn't been able to reproduce the speedy, local-like read operations that something like Plexdrive solved over a year ago.
Your description sounds overcomplicated. I'd just make a folder or two and drop 1000 items in each. It doesn't really matter what you call them, as Plex handles it fine.
My mount command seems to work well for me, but I'm open to any suggestions for improvements!
My setup is rclone mounted on an unRAID server with a 200/200 connection, with Sonarr, Radarr, etc. running in Dockers, plus a unionfs mount combining the Gdrive files with local files that haven't been uploaded yet by a separate scheduled rclone move job.
Still a work in progress, but here's what it looks like to date:
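The unionfs overlay part of the setup can be sketched roughly like this with unionfs-fuse; the three paths are assumptions for illustration, not the real unRAID paths:

```shell
# Merge a writable local branch with the read-only rclone mount.
# New files land on the local branch (cow = copy-on-write) until
# the scheduled rclone move job uploads them to the remote.
unionfs-fuse -o cow,allow_other \
  /mnt/user/local=RW:/mnt/user/gdrive=RO \
  /mnt/user/media
```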
/usr/bin/rclone mount \
  --log-file ${LOGS}/rclone.log \
  --log-level INFO \
  --umask 000 \
  --allow-non-empty \
  --allow-other \
  --fast-list \
  --transfers 24 \
  --dir-cache-time 72h \
  --drive-chunk-size=32M \
  --fuse-flag direct_io \
  --cache-chunk-total-size 8G \
  --cache-chunk-size 8M \
  --cache-chunk-no-memory \
  --cache-workers=24 \
  --buffer-size=32M \
  --vfs-cache-mode minimal \
  --vfs-read-chunk-size 32M \
  --vfs-cache-max-age 1h \
  --vfs-read-chunk-size-limit off \
  --cache-tmp-upload-path=${UPLOADS} \
  --config ${RCLONEHOME}/rclone.conf \
  gcrypt: ${MOUNTTO}