Oh, I forgot you guys needed the cache for upload ability. Took me a minute to think about it.
I was using direct writes to the mount without the cache. No issues there.
Thanks for that. Playing around with these options helped me a lot.
Now, I did some tests seeding torrents, both with vfs-read-chunk-size and with a cache. In both cases I managed to resolve the huge downstream traffic problem I mentioned earlier, but had no further success (torrent upload speed was still very low).
I may try again once I have some more free time and can find some ideal torrents to test on (possibly some Linux distros or something).
The main takeaway from my tests is that it’s not just an issue of optimizing rclone settings. Identifying a torrent client that minimizes disk access is also a big part of solving this problem.
I use Transmission, and during my tests I observed that it does a lot of "unneeded" disk access (I also tested with Transmission's prefetch disabled), especially when starting/stopping torrents.
Maybe other clients will behave better in this use case.
I'm trying to use it for a Plex library, but when I search for new files, it starts loading all the files again. Is there any solution?
I have tested the new vfs-read-chunk feature this afternoon.
I come from a GD > CACHE > CRYPT/MOUNT setup and moved to a GD > CRYPT/MOUNT setup.
What I have used:
--umask 0000 \
--default-permissions \
--allow-non-empty \
--allow-other \
--buffer-size 1G \
--dir-cache-time 12h \
--vfs-read-chunk-size 32M \
--vfs-read-chunk-size-limit 2G
What can I say: it works better than I thought. Starting a stream is much faster, and 4K movies with high bitrates (>80 MB/s) play very well. My local Kodi buffer fills a bit faster than before, too. For the moment I'm very happy with the non-"cache remote" setup.
Thanks for this. Will do a quick test mount when I get home (or tomorrow). I still feel it's a limitation of the onboard NIC in the Sony XBR rather than the mount.
Can you share your mount and what you are doing?
Did you want the buffer size that big based on the earlier posts?
Thank you beredim for sharing your experience.
Indeed. For this part, my experience is that rtorrent 0.9.6 consumes the most IO compared to other clients. I also found the following issue list for rtorrent:
However, rtorrent is also the client that can seed the most torrents at the same time… Other clients, such as Deluge, have trouble seeding thousands of torrents stably.
To overcome this problem, I revised the rtorrent.rc of the rtorrent instances I use to seed from GD based on https://github.com/rakshasa/rtorrent/issues/443, as follows:
# Piece preload mode (0 = off, 1 = madvise, 2 = direct paging)
pieces.preload.type.set = 2
# Optional preload thresholds, left disabled here
#pieces.preload.min_size.set = 1
#pieces.preload.min_rate.set = 1
It is said to require a bit less IO, but I have yet to see an obvious upload speed improvement (using GD).
A really simple command mount:
rclone mount GD: Y: --buffer-size 64M --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 64M
The strange behaviour is that one folder doesn't load the files again, while others do.
If you don't add a higher --dir-cache-time, it only keeps the cache for 5 minutes:
--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
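A minimal change, then, is to add that flag to the simple mount command you posted (the 72h value here is just an illustration; pick whatever staleness you can tolerate):

```shell
rclone mount GD: Y: --buffer-size 64M --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 64M --dir-cache-time 72h
```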
I’ve been using:
/usr/bin/rclone mount gcrypt: /gmedia --allow-other --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 16M --syslog --umask 002 --rc --log-level INFO
But it's very strange; there are other folders that don't load all the files again.
I'm not sure exactly what you mean by it not loading folders. Are you running it as the plex user if you aren't using --allow-other? Can you share an example?
When I scan a library with ~300 files, it shows up immediately.
But when I scan another library with 1500 files, it starts loading all the files again to check everything, and it takes some hours.
Both folders are untouched since the last scan.
Sounds like you are having permission issues. What are you running rclone as to do the mount?
The two folders are in the same parent folder, so I don't know what is happening.
What user are you running rclone as and what are the permissions on the folder?
I'm running it on Windows; the user is admin and I run it as a service.
The strange thing is:
/folderA: 400 files (libraryA)
/folderB: 1500 files (libraryB)
FolderA works fine; folderB is where it scans all the files again and again on each "update" of the library.
It might be worth looking at the plex log files to see if they offer any clues. I would have a look at the “Plex Media Scanner.log” as well as the main “Plex Media Server.log”
I just checked the log, and it starts this way on every scan of the big folder…
May 28, 2018 22:04:37.903  DEBUG - Path matched, we’re reusing media item 1
May 28, 2018 22:04:37.905  DEBUG - * Scanning 101 Dalmatians
May 28, 2018 22:04:37.905  DEBUG - Looking for path match for [M:\Movies\101 Dalmatians.mkv]
May 28, 2018 22:04:37.906  DEBUG - Path matched, we’re reusing media item 2
May 28, 2018 22:04:37.908  DEBUG - * Scanning 102 Dalmatians (2000)
May 28, 2018 22:04:37.908  DEBUG - Looking for path match for [M:\Movies\102 dalmatians (2000).mkv]
The problem is that for each one, it loads the file from Drive and checks something (I don't know why).
Why is it downloading data if it can see that the file has the same name and an unmodified date since the last update?
For this reason, the library scan takes one hour.
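One way to see exactly which files Plex is pulling from the remote during a scan (a sketch; the remote name and log path are placeholders for your own setup) is to run the mount with debug logging and watch the log while a scan runs:

```shell
rclone mount gcrypt: M: --dir-cache-time 672h --log-level DEBUG --log-file C:\rclone\rclone-debug.log
```

Every open/read that shows up in the log during a scan is Plex actually fetching data from the remote, which helps separate rclone's behaviour from the scanner's.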