New Feature: vfs-read-chunk-size

Oh I forgot you guys needed cache for the upload ability :smiley: Took me a minute to think about it.

I was writing directly to the mount without the cache. No issues here.

Thanks for that. Playing around with these options helped me a lot.

Now, I did some tests seeding torrents, both with vfs-read-chunk-size and with a cache. In both cases I managed to resolve the huge downstream traffic problem I mentioned earlier, but had no further success (torrent upload speed was very low).

I may try again once I have some more free time and find some ideal torrents to test on (possibly some Linux distros or something).

The main takeaway from my tests is that it's not just a matter of optimizing rclone settings. Identifying a torrent client that minimizes disk access is also a big part of solving this problem.

I use Transmission, and observed during my tests that it does a lot of "unneeded" disk access, especially when starting/stopping torrents (I also tested with Transmission's prefetch disabled).

Maybe other clients will behave better in this use case.

I'm trying to use it for my Plex library, but when I scan for new files it starts loading all the files again. Is there any solution?

Thanks

@greenxeyezz

I have tested the new vfs-read-chunk feature this afternoon.
I come from a GD > CACHE > CRYPT/MOUNT setup and moved to a GD > CRYPT/MOUNT setup.

What I have used:

    --umask 0000 \
    --default-permissions \
    --allow-non-empty \
    --allow-other \
    --buffer-size 1G \
    --dir-cache-time 12h \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit 2G \

What can I say: it works better than I thought. Starting a stream is much faster, and 4K movies with high bitrates (>80 MB/s) play very well. My local Kodi buffer fills a bit faster than before too. For the moment I'm very happy with the non "cache remote" setup.
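For anyone who wants to reproduce this, here is a sketch of how those flags assemble into a full mount command; the remote name (gdcrypt:) and mount point (/mnt/media) are placeholders, not the actual names from my setup:

    rclone mount gdcrypt: /mnt/media \
        --umask 0000 \
        --default-permissions \
        --allow-non-empty \
        --allow-other \
        --buffer-size 1G \
        --dir-cache-time 12h \
        --vfs-read-chunk-size 32M \
        --vfs-read-chunk-size-limit 2G \
        --daemon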


Thanks for this. Will do a quick test mount when I get home (or tomorrow). I still feel it's more a limitation of the onboard NIC of the Sony XBR than of the mount.

Can you share your mount and what you are doing?

Did you want the buffer size that big based on the earlier posts?

Thank you beredim for sharing your experience.

Indeed. For this part, my experience is that rtorrent 0.9.6 does the most IO compared to other clients. I also found the following issues reported for rtorrent:


However, rtorrent is also the client that can seed the most torrents at the same time… Other clients, such as Deluge, have trouble seeding thousands of torrents stably.

To overcome this problem, I revised the rtorrent.rc of the rtorrent instances I use to seed from GD based on https://github.com/rakshasa/rtorrent/issues/443, as follows:
pieces.preload.type.set = 2
#pieces.preload.min_size.set = 1
#pieces.preload.min_rate.set = 1

It is said to require a bit less IO, but I was unable to see an obvious upload speed improvement (using GD).


A really simple mount command:

rclone mount GD: Y: --buffer-size 64M --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 64M

The strange behaviour is that one folder doesn't load the files again, but others do :frowning:

If you don't add a higher --dir-cache-time, it only keeps the cache for 5 minutes:

--dir-cache-time duration   Time to cache directory entries for. (default 5m0s)
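Adding it to the simple mount above would look something like this (the 72h value is just an example; pick whatever suits how often your library actually changes):

rclone mount GD: Y: --buffer-size 64M --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 64M --dir-cache-time 72h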

I've been using:

/usr/bin/rclone mount gcrypt: /gmedia --allow-other --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 16M --syslog --umask 002 --rc --log-level INFO
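Since that mount starts the remote control interface with --rc, newer rclone versions can also prime the directory cache right after mounting instead of waiting for the first walk. A sketch, assuming a recent rclone with the vfs/refresh rc command available:

rclone rc vfs/refresh recursive=true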


But it's very strange; there are other folders that don't load all the files again.

I'm not sure exactly what you mean when you say it doesn't load folders. Are you running it as the Plex user, if you aren't using --allow-other? Can you share an example?

When I scan a library with ~300 files, it shows up immediately.


But when I scan another library with 1500 files, it starts loading all the files again to check everything, and that takes some hours.

Both folders are untouched since the last scan.

Sounds like you are having permission issues. What user are you running rclone as to do the mount?

The two folders are in the same parent folder, so I don't know what is happening.

What user are you running rclone as and what are the permissions on the folder?

I'm running it on Windows; the user is admin and I run it as a service.

The strange thing is:

Parent folder:
/folderA: 400 files (libraryA)
/folderB: 1500 files (libraryB)

FolderA works fine; folderB is where it scans all the files again and again on each "update" of the library.

It might be worth looking at the Plex log files to see if they offer any clues. I would have a look at "Plex Media Scanner.log" as well as the main "Plex Media Server.log".
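On a default Windows install the logs normally live under %LOCALAPPDATA%\Plex Media Server\Logs. A quick way to follow the scanner log from PowerShell while a scan is running (assuming that default path) is:

Get-Content "$env:LOCALAPPDATA\Plex Media Server\Logs\Plex Media Scanner.log" -Wait -Tail 50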

I just checked the log, and it starts this way in every scan of the big folder…

May 28, 2018 22:04:37.903 [7964] DEBUG - Path matched, we're reusing media item 1
May 28, 2018 22:04:37.905 [7964] DEBUG - * Scanning 101 Dalmatians
May 28, 2018 22:04:37.905 [7964] DEBUG - Looking for path match for [M:\Movies\101 Dalmatians.mkv]
May 28, 2018 22:04:37.906 [7964] DEBUG - Path matched, we're reusing media item 2
May 28, 2018 22:04:37.908 [7964] DEBUG - * Scanning 102 Dalmatians (2000)
May 28, 2018 22:04:37.908 [7964] DEBUG - Looking for path match for [M:\Movies\102 dalmatians (2000).mkv]

The problem is that for each one, it loads the file from Drive and checks something (I don't know why).

Why is it downloading data if it sees that the file has the same name and an unmodified date since the last update?

For this reason, the scan of the library took one hour. :frowning: