Vfs-read-ahead not working?

What is the problem you are having with rclone?

I am not sure I understand how vfs-read-chunk-size and vfs-read-ahead work. I am using:

rclone cmount TEST: /data/test --allow-other --dir-cache-time 1000h --syslog --syslog-facility LOCAL5 --log-level INFO --poll-interval 15s --umask 002 --user-agent="TEST" --rc --cache-dir=/mnt/vfscache --vfs-cache-mode full --vfs-cache-max-size 820G --vfs-cache-max-age 336h --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M --vfs-read-ahead 100M --buffer-size 30M &

Everything works fine. However, on a fresh /mnt/vfscache I started reading a file sequentially (a test video). Watching usage with du -sh /mnt/vfscache, it looked as if only 2M chunks were being fetched and cached. I was expecting at least a 100M chunk to be requested and cached (given --vfs-read-ahead 100M). Where am I going wrong?
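For reference, my reading of the docs is that --vfs-read-chunk-size sets the size of the first range request for a file, the chunk size then doubles for each subsequent chunk until it reaches --vfs-read-chunk-size-limit, and --vfs-read-ahead is extra read ahead on top of --buffer-size when using --vfs-cache-mode full. So with the flags above I would expect range requests roughly like this (just shell arithmetic to illustrate, not taken from any log):

chunk=$((100*1024*1024))    # --vfs-read-chunk-size 100M
limit=$((500*1024*1024))    # --vfs-read-chunk-size-limit 500M
while [ "$chunk" -lt "$limit" ]; do
    echo "chunk request: $((chunk/1048576))M"
    chunk=$((chunk*2))
done
echo "further chunk requests: $((limit/1048576))M (capped by the limit)"

i.e. 100M, 200M, 400M and then 500M for the rest of the file. That is why the 2M growth I see in the cache confuses me.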

What is your rclone version (output from rclone version)

rclone v1.54.0-DEV

  • os/arch: freebsd/amd64
  • go version: go1.15.6

Which OS you are using and how many bits (eg Windows 7, 64 bit)

FreeBSD 12.2

Which cloud storage system are you using? (eg Google Drive)

GDRIVE

Can you include the debug log file? That would have the details in it.

test.log (643.3 KB)

Here we go.

From the logs, the first read I can see happens here:

Feb 24 08:20:40 jpknas rclone[33992]: /REDACT3/REDACT1/S2: >Access: errc=0
Feb 24 08:20:40 jpknas rclone[33992]: REDACT3/REDACT1/S2/REDACT2.m4v: ChunkedReader.Read at 251916288 length 1048576 chunkOffset 241434624 chunkSize 104857600

You can see the chunkSize is 104857600 bytes, i.e. 100M.
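If you want to double-check that across the whole log, something along these lines should list every ChunkedReader request and show how the chunk size develops (assuming the attached file is named test.log; the last field of those lines is the chunkSize in bytes):

grep 'ChunkedReader.Read' test.log | awk '{print $NF}' | uniq -c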

The rest of your log seems odd, as it has a bunch of range requests rather than a normal sequential read. A few examples:

Feb 24 08:20:41 jpknas rclone[33992]: vfs cache: looking for range={Pos:3473408 Size:131072} in [{Pos:0 Size:169865216} {Pos:176422912 Size:48230400} {Pos:241434624 Size:14675968} {Pos:328073216 Size:9433088} {Pos:341835776 Size:1740943}] - present true

Feb 24 08:20:41 jpknas rclone[33992]: vfs cache: looking for range={Pos:3735552 Size:131072} in [{Pos:0 Size:169865216} {Pos:176422912 Size:49278976} {Pos:241434624 Size:14675968} {Pos:328073216 Size:9433088} {Pos:341835776 Size:1740943}] - present true
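As far as I can tell, the list in square brackets is the set of byte ranges the cache already holds for that file, so you can roughly total them up and compare against what du reports, e.g. (log file name assumed again; the first Size: on each line is the range being asked for, so it gets skipped):

grep 'looking for range' test.log | tail -n 1 \
    | grep -o 'Size:[0-9]*' | tail -n +2 | cut -d: -f2 \
    | awk '{s+=$1} END {printf "%.1f MiB downloaded\n", s/1048576}'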

So it's not reading the file sequentially but hopping around. A sequential read looks something like:

Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1310720
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1441792
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1310720
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1572864
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1703936
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1441792
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1572864
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1703936
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1835008
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=1966080
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1835008
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=1966080
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=2097152
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=2228224
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=2097152
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=2228224
Feb 24 07:53:05 gemini rclone[1891]: 100M.file: ChunkedReader.Read at 3141632 length 1048576 chunkOffset 0 chunkSize 134217728
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=2359296
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: Read: len=131072, offset=2490368
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=2490368
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): _readAt: size=131072, off=2359296
Feb 24 07:53:05 gemini rclone[1891]: 100M.file(0xc0052ed9c0): >_readAt: n=131072, err=<nil>
Feb 24 07:53:05 gemini rclone[1891]: &{100M.file (rw)}: >Read: read=131072, err=<nil>
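If you want to take the player out of the equation, you could force a genuinely sequential read and watch the cache grow at the same time, e.g. (path taken from your log; bs=1m is FreeBSD dd syntax):

dd if=/data/test/REDACT3/REDACT1/S2/REDACT2.m4v of=/dev/null bs=1m

and in another shell:

while sleep 5; do du -sh /mnt/vfscache; done

With --vfs-read-ahead 100M and a truly sequential reader I would expect du to stay well ahead of the read position rather than creep up in small steps.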
