Share your rclone mount command here

I want to know what mount commands other people use that are trouble-free.

I use
/usr/sbin/rclone mount ijm-cache: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M

where the remote is an rclone cache remote. I added --max-read-ahead to improve buffering (almost no buffering during playback unless the bitrate saturates the connection).

What command do you use, and why?

Quote from @ncw:

From my investigations --max-read-ahead is pretty useless - the kernel caps it at 128k which is the default value...

Maybe I should mention the cloud service that I use: Google Drive. Previously, rclone performance was not practical for watching movies; it was stuttering and rebuffering all the time. I couldn't even watch a movie properly for one minute. It's that bad. Then, after adding --max-read-ahead 200M, watching movies became practical.

I'm wondering: are there any other flags that might be useful? Not that I have problems now.

When you are having problems with buffering, try increasing --buffer-size (default 16M). In my experience, --max-read-ahead does not solve those problems, as its 128K default already matches most kernels' maximum value.

My encrypted Google Drive mount is using the following arguments:
rclone mount -v --buffer-size 32M --read-only --allow-other --gid 33 --umask 0027 --dir-cache-time 300h --poll-interval 5m gdrive-crypt: /mnt/gdrive

--read-only --allow-other --gid 33 --umask 0027 are just for FUSE access control.
--buffer-size 32M doubles the receive buffer to reduce buffering issues.
--dir-cache-time 300h --poll-interval 5m: instead of using the new cache remote, I'm just 'caching' metadata for a long time.
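
For reference, here's roughly what those access-control flags work out to in practice (a sketch; gid 33 is www-data on Debian-based systems, and rclone masks its default 0666/0777 permissions with the umask):

# with --gid 33 --umask 0027, entries in the mount appear as:
#   files:       -rw-r----- <mount user> www-data  (0666 & ~0027 = 0640)
#   directories: drwxr-x--- <mount user> www-data  (0777 & ~0027 = 0750)
ls -l /mnt/gdrive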


After a while, the cache mount wasn't performing very well.

So I used this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M

and I still experienced rebuffering (for an 8 GB movie on a 50 Mbps connection).

So I used this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M --buffer-size 32M --dir-cache-time 300h --poll-interval 5m

And I still experience unbearable rebuffering.

Any other tips? I'm using Kodi.

And you are not having any bans? You are using Gdrive, right? How big is your library?
To be honest this sounds good; I will give it a try and see how it works.

Thanks for sharing this!

@Syaiful_Nizam_Yahya
On what kind of device is rclone running?
I once tried to run it on a Raspberry Pi and had similar buffering issues because the CPU was not fast enough to handle TLS and crypt.
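
If anyone wants to check whether they're CPU-bound in the same way, watching rclone's CPU usage during playback is a quick test (illustrative; -o picks the oldest matching process):

# watch rclone's CPU usage while a stream is playing
top -p "$(pgrep -o -f 'rclone mount')"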

@neik
My Gdrive library is currently pretty large (>100TB). Normally I'm not getting any bans from Google.
Recently I ran into bans when trying to index (read a few blocks from the beginning of) a large number of files.
I'm currently working with ncw on https://github.com/ncw/rclone/issues/1825 to fix these kinds of bans.

OK, then I'm pretty far from the size of your library.
Given that I do not index any files (besides the new stuff I upload), I guess I should be fine ban-wise.

Once again, thanks for your contribution!

I use Kodi, not Plex. AFAIK, Kodi does not have ban issues when scraping movies. As for my library, it's about 20 TB.

I still experience rebuffering when I use this:
/usr/sbin/rclone mount ijm: /mnt/user/Cloud/ijm/ijmrclone -v --max-read-ahead 200M --buffer-size 32M --dir-cache-time 300h --poll-interval 5m

My Unraid server does the decryption, not my media center. I guess this rules out a lack of CPU resources.

Any tips?

This is the final command I use now. I use Gdrive, and the mount and decryption are all handled by my Khadas VIM. So far, no problems.

sleep 10s &&

screen -dmS rclonemount /storage/downloads/Tool/Rclone/rclone-v1.42-linux-arm64/rclone mount ijm: /storage/downloads/mount/ijm/ -vvv --max-read-ahead 256M --buffer-size 64M --dir-cache-time 24h --poll-interval 5m --allow-other --vfs-cache-mode writes

sleep 10s &&

echo myPassword | ENCFS6_CONFIG='/storage/downloads/Tool/Rclone/.encfs6.xml' encfs \
  -S -o rw -o allow_other -o uid=99 -o gid=100 \
  /storage/downloads/mount/ijm/encfs \
  /storage/downloads/mount/encfs
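
If you adapt this, a quick sanity check after the two sleeps can confirm that both layers came up (illustrative commands, using the same paths as above):

# confirm both the rclone and encfs layers are mounted
mount | grep -E 'rclone|encfs'
# list a few decrypted entries to verify the encfs layer is readable
ls /storage/downloads/mount/encfs | head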

I'm having the same issues with buffering. Should I be using a 64M buffer or a 1G buffer? I have RAM to spare, but it's unclear how I can get 4K remuxes to stream better.

rclone mount Remote: G: --config "C:\[...]\rclone.conf" --allow-non-empty --allow-other --read-only --dir-cache-time 48h --buffer-size 512M --vfs-read-chunk-size 512M --vfs-read-chunk-size-limit 2G

I have tried --vfs-read-chunk-size at 32M and 512M, and both cause buffering about 5 minutes into a 4K remux (70 Mbps+ bitrate).

@B4dM4n

Looking at your mount, do you do everything on one box?

I'm using VFS, and it's annoying when you have a folder of 2,000 movies and it completely ejects the VFS cache when a new movie is added to that folder. This causes the VFS cache to empty and requires a "rebuild", so to speak. This slows everything down (Radarr/Sonarr/etc.).

I have multiple boxes and I'm trying to find a way to keep the directory listing:

  1. Always fresh
  2. Speedy

To mimic as much as possible a "live" file system.

Break up your folders.

If you keep a large folder and constantly dump into it, it's got to rebuild the whole thing.

Keep a directory of 'done' stuff that isn't going to expire except on the timeout.
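
Something along these lines (illustrative names, not anyone's actual layout):

movies-done/   # finished items; their dir cache survives until --dir-cache-time expires
movies-new/    # active dump folder; only this tree gets re-listed when something is added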

My mount command is only used for reading. Uploading is done using a move script to a separate upload folder.

Sorting is done on a separate machine to ensure everything is mapped to the correct IMDb ID. Since I can't use Radarr or Sonarr for automatic downloads, I wrote my own scripts to sort all movies using my preferred scheme. After my sorting script finishes, I use rclone rc vfs/refresh recursive=true dir=movies to refresh the cache with --fast-list support. This is substantially faster for large folder trees.
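
For anyone copying this: vfs/refresh is only reachable if the mount was started with rclone's remote control server enabled. A minimal sketch, reusing the earlier mount example with the other flags trimmed:

# start the mount with the remote control API enabled (listens on localhost:5572 by default)
rclone mount gdrive-crypt: /mnt/gdrive --rc --dir-cache-time 300h --poll-interval 5m
# after the sorting script finishes, warm the directory cache for the movies tree
rclone rc vfs/refresh recursive=true dir=movies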

You also made me wonder whether it really needs to purge the whole tree cache when only one item changes. It should be enough to invalidate only the changed entry's subtree and force a refresh of the parent directory, while keeping the cache of all other entries in the parent alive. This would reduce the required API calls substantially.

@Animosity022 @B4dM4n

I think @B4dM4n is on to something. There definitely should be a way to evict only the relevant data from cache, not the entire tree.

@Animosity022 I'm not entirely sure that breaking things up would really help for movies. With TV shows it's possible (have an "ended" folder with all the shows that are done).

For movies, the folder is constantly being updated, and every time a new movie is added the entire movie folder is evicted. Sure, I could drop things into A-D, E-M, etc. subfolders, but that's complicated.

It seems that even with the VFS or cache backend, rclone still hasn't been able to reproduce the speedy, local-like read operations that something like Plexdrive solved over a year ago.

Your description sounds overcomplicated. I'd just make a folder or two and drop 1,000 items in each. It doesn't really matter what you call them, as Plex handles it fine.

My mount command seems to work well for me, but I'm open to any suggestions for improvements!

My setup is rclone mounted on an Unraid server with a 200/200 connection, with Sonarr, Radarr, etc. running in Docker containers, and a unionfs mount combining the gdrive files with local files that haven't yet been uploaded by a separate scheduled rclone move job.

rclone mount --allow-other --dir-cache-time 72h --buffer-size 1G --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9500k --tpslimit 6
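
The scheduled part of that rclone move job might look something like this in cron (an assumption for illustration; the wrapper script path and schedule are made up, and flock prevents overlapping runs):

# m h dom mon dow  command
0 * * * * flock -n /tmp/rclone_move.lock /usr/local/bin/rclone_move.sh >> /var/log/rclone_move.log 2>&1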

Still a work in progress, but here's what it looks like to date:
/usr/bin/rclone mount \
  --log-file ${LOGS}/rclone.log \
  --log-level INFO \
  --umask 000 \
  --allow-non-empty \
  --allow-other \
  --fast-list \
  --transfers 24 \
  --dir-cache-time 72h \
  --drive-chunk-size=32M \
  --fuse-flag direct_io \
  --cache-chunk-total-size 8G \
  --cache-chunk-size 8M \
  --cache-chunk-no-memory \
  --cache-workers=24 \
  --buffer-size=32M \
  --vfs-cache-mode minimal \
  --vfs-read-chunk-size 32M \
  --vfs-cache-max-age 1h \
  --vfs-read-chunk-size-limit off \
  --cache-tmp-upload-path=${UPLOADS} \
  --config ${RCLONEHOME}/rclone.conf \
  gcrypt: ${MOUNTTO}
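
Not part of the script above, but worth pairing with it: a cleanup step that detaches the FUSE mount before a restart (a sketch, reusing the same ${MOUNTTO} variable):

# lazily unmount the rclone FUSE mount, e.g. before restarting the mount script
fusermount -uz ${MOUNTTO}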