Passing fuse flags

I was trying to test with a few fuse flags to see what they did and wasn’t having much luck in passing them.

I tried:

-o auto_cache \

and got:

2018/07/09 12:05:13 ERROR : -o/--option not supported with this FUSE backend
2018/07/09 12:05:13 ERROR : --fuse-flag not supported with this FUSE backend

Is that something I can’t pass?

Yes, the default FUSE backend doesn't support options under Linux.

However, if you compile from source with go build -tags cmount, rclone gains another command called rclone cmount, which behaves identically to mount except that it uses the FUSE library directly. That is how mount is implemented on Windows.

rclone cmount will accept the fuse options!

You can also pick up a pre-built binary here: https://1069-17803236-gh.circle-artifacts.com/0/tmp/rclone.dist/rclone-linux-amd64 - circleci builds these on every commit.

That works.

I wanted to see if auto_cache helped with keeping things in memory like it does for mergerfs, and it makes the repeat calls insanely fast.

Going to do some testing and see what else works better or not.

One thing I noticed in my testing was that with the VFS, the chunkedReader read sizes grow to much larger values faster:

2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33237803008 length 8192 chunkOffset 33237798912 chunkSize 67108864
2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33237811200 length 16384 chunkOffset 33237798912 chunkSize 67108864
2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33237827584 length 32768 chunkOffset 33237798912 chunkSize 67108864
2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33237860352 length 65536 chunkOffset 33237798912 chunkSize 67108864
2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33237925888 length 131072 chunkOffset 33237798912 chunkSize 67108864
2018/07/09 14:12:08 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 33238056960 length 262144 chunkOffset 33237798912 chunkSize 67108864

Yet, if I use the cache, I seem to get stuck at 13M or so.

2018/07/09 14:14:03 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 30277632 length 131072 chunkOffset 0 chunkSize -1
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Read: n=131072
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Getattr: fh=0x0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Getattr: errc=0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Read: ofst=30408704, fh=0x0
2018/07/09 14:14:03 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 30408704 length 131072 chunkOffset 0 chunkSize -1
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Read: n=131072
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Getattr: fh=0x0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Getattr: errc=0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Read: ofst=30539776, fh=0x0
2018/07/09 14:14:03 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 30539776 length 131072 chunkOffset 0 chunkSize -1
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Read: n=131072
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Getattr: fh=0x0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Getattr: errc=0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Read: ofst=30670848, fh=0x0
2018/07/09 14:14:03 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 30670848 length 131072 chunkOffset 0 chunkSize -1
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Read: n=131072
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Getattr: fh=0x0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Getattr: errc=0
2018/07/09 14:14:03 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Read: ofst=33237798912, fh=0x0

That's quite a number of calls, since it isn't scaling up the read size.

Ah, that is because cache fetches the individual chunks it wants at the sizes it wants I think…

That is a leftover from before @B4dM4n’s magical increasing fetches.

It probably would be fairly easy to convert - the code is here: https://github.com/ncw/rclone/blob/master/backend/cache/handle.go#L364

@remus.bunduc & @B4dM4n thoughts?

What are these messages indicating?

2018/07/09 13:10:15 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 0
2018/07/09 13:10:16 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 1
2018/07/09 13:10:16 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 2

The chunkedReaders really don't seem to add any time, as the reads all happen within the same second, albeit with a lot of calls.

I'm trying to understand the difference between the VFS and the cache read of the file, as the same operation takes 7-8 seconds for a mediainfo with the cache, while the VFS read of the same file takes 2.5 seconds.

I lose like 2 seconds here:

2018/07/09 13:10:15 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.openRange at 0 length -1
2018/07/09 13:10:15 DEBUG : /: Statfs:
2018/07/09 13:10:15 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:1125899906842624 Bfree:1125888146525893 Bavail:1125899906842624 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2018/07/09 13:10:15 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 0
2018/07/09 13:10:16 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 1
2018/07/09 13:10:16 DEBUG : /: Statfs:
2018/07/09 13:10:16 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:1125899906842624 Bfree:1125888146525893 Bavail:1125899906842624 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2018/07/09 13:10:16 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 0: chunk retry storage: 2
2018/07/09 13:10:17 DEBUG : worker-0 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: downloaded chunk 0
2018/07/09 13:10:17 DEBUG : Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: ChunkedReader.Read at 0 length 131072 chunkOffset 0 chunkSize -1

and seemingly another 1 second here:

}, errc=0
2018/07/09 13:10:20 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: moving offset set from 0 to 33245876768
2018/07/09 13:10:20 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 33218887680: chunk retry storage: 0
2018/07/09 13:10:21 DEBUG : s1n6gn87oo537vvls13nahl6co/k2p2ks0ttvbig6dg5gr6tj0t5c/8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0: 33218887680: chunk retry storage: 1
2018/07/09 13:10:21 DEBUG : /: Statfs:
2018/07/09 13:10:21 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:1125899906842624 Bfree:1125888146525893 Bavail:1125899906842624 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2018/07/09 13:10:21 DEBUG : worker-3 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: partial downloaded chunk 30.938G
2018/07/09 13:10:21 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: >Read: n=131072
2018/07/09 13:10:21 DEBUG : /Radarr_Movies/Dracula (1992)/Dracula (1992).mkv: Read: ofst=33237929984, fh=0x0

Those are the only jumps in time I can see when I compare the DEBUG logs from VFS to cache.

It seems like a larger chunk-size combined with more workers makes the problem worse, as all the workers spawn a request to get their data and it waits for the chunks to be downloaded.

I was testing with a single worker; getting roughly 3 chunks and a partial one at a 10M chunk size took almost 4 full seconds for at most 40MB of data.

```
2018/07/09 22:52:10 DEBUG : worker-0 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: downloaded chunk 0
2018/07/09 22:52:11 DEBUG : worker-0 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: downloaded chunk 20971520
2018/07/09 22:52:13 DEBUG : worker-0 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: downloaded chunk 33218887680
2018/07/09 22:52:13 DEBUG : worker-0 <8v0ogd3hi1g3oeq9b86nfd1m7jedpj9r506v1ouoo352kvfa4oc0>: partial downloaded chunk 30.957G
```

That's a snippet from my log. In handle.go, it seems there is a 500 ms wait in the retry path, which was causing a lot of my delay:

```
fs.Debugf(r, "%v: chunk retry storage: %v", chunkStart, i)
time.Sleep(time.Millisecond * 500)
```

I tweaked that down to 100 ms and shaved a few seconds off my time, along with making the chunk size 10M rather than something larger.
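The retry pattern being discussed can be sketched roughly as follows (this is an assumed structure for illustration, not rclone's exact code; the function name waitForChunk is my own). The key point is that latency grows linearly with the retry count times the sleep interval, which is why lowering the 500 ms sleep trims seconds off directly:

```go
package main

import (
	"fmt"
	"time"
)

// waitForChunk polls a chunk store, sleeping a fixed interval between
// attempts. Each miss adds one full sleep interval of latency, so with
// the stock 500ms sleep, two retries already cost a second.
func waitForChunk(get func() ([]byte, bool), retries int, sleep time.Duration) ([]byte, error) {
	for i := 0; i < retries; i++ {
		if data, ok := get(); ok {
			return data, nil
		}
		fmt.Printf("chunk retry storage: %d\n", i)
		time.Sleep(sleep)
	}
	return nil, fmt.Errorf("chunk not ready after %d retries", retries)
}

func main() {
	attempts := 0
	// Simulated store: the chunk becomes available on the third attempt.
	get := func() ([]byte, bool) {
		attempts++
		return []byte("chunk"), attempts >= 3
	}
	data, err := waitForChunk(get, 10, 100*time.Millisecond)
	fmt.Println(string(data), err)
}
```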

I can’t see anything wrong in the logs.
But they are a good summary of many features used in the VFS layer :wink:

What you are observing in this log is the “slow start” behavior of --buffer-size.

In-memory buffering is implemented in AsyncReader. It currently uses 1M blocks of memory.
A block can only be filled or consumed one at a time. To avoid waiting for a full 1M block to be filled when a file is opened, AsyncReader writes only 4k into the first block and allows it to be read as fast as possible.
AsyncReader doubles this value for each subsequent block until the max of 1M is reached.

When the “slow start” phase is done, you should only see ChunkedReader.Read logs of length 1048576.
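The "slow start" schedule described above can be sketched like this (slowStartSizes is an illustrative name of my own; the 4k start and 1M cap are the values stated above). Note how the sequence passes through 131072, which matches the growing length values in the first log:

```go
package main

import "fmt"

// slowStartSizes returns the write sizes an AsyncReader-style "slow
// start" would use: begin small and double each block until the full
// block size is reached.
func slowStartSizes(start, max int) []int {
	var sizes []int
	for s := start; s < max; s *= 2 {
		sizes = append(sizes, s)
	}
	return append(sizes, max)
}

func main() {
	// 4k up to 1M: 4096, 8192, 16384, ..., 524288, 1048576
	fmt.Println(slowStartSizes(4*1024, 1024*1024))
}
```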

The ChunkedReader.Read logging is a bit too verbose and maybe confusing. I will explain the individual fields:

ChunkedReader.Read at 30670848 length 131072 chunkOffset 0 chunkSize -1
  • at 30670848 is the absolute byte offset in the file, from the beginning. This is the first byte read by this call (0-based).
  • length 131072 is the length in bytes of the buffer given to the Read function. Read fills the buffer completely unless an error occurs or EOF is reached. When --buffer-size 0 is set, this number comes from the kernel and probably depends on the --max-read-ahead value, which is 128k by default (128×1024 = 131072). Otherwise the buffer comes from an AsyncReader, which implements --buffer-size and can be up to 1M in size.
  • chunkOffset 0 is the absolute byte offset in the file of the currently processed chunk. This number is redundant and could be removed; it is the same value as the at value in the last ChunkedReader.openRange at %d length %d message.
  • chunkSize -1 is the length in bytes of the currently processed chunk. A value of -1 means chunked reading is disabled (--vfs-read-chunk-size 0). This is the number that shows chunk growth, and it will increase up to --vfs-read-chunk-size-limit.
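The chunk growth mentioned in the chunkSize field can be sketched as follows (assuming a simple doubling scheme capped at the limit; nextChunkSize is an illustrative helper of my own, not rclone's actual implementation):

```go
package main

import "fmt"

// nextChunkSize doubles the chunk size for each sequential chunk,
// capping at limit; a negative limit means "no limit".
func nextChunkSize(current, limit int64) int64 {
	next := current * 2
	if limit >= 0 && next > limit {
		return limit
	}
	return next
}

func main() {
	// Starting from a 32M chunk with a 2G limit, the chunk size
	// doubles each step until it pins at the limit.
	size := int64(32 * 1024 * 1024)
	limit := int64(2 * 1024 * 1024 * 1024)
	for i := 0; i < 8; i++ {
		fmt.Println(size)
		size = nextChunkSize(size, limit)
	}
}
```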

The number of read calls in the second log is normal and originates from the kernel's FUSE handling.
Based on this log, I guess --vfs-read-chunk-size 0 and --buffer-size 0 were used. With this combination you get the most direct path from the kernel to the cache backend.

Using a small --buffer-size of 2M (1M could have waiting issues, because there would be only one block for reading and writing, but I didn't verify this) may help with the chunk retry storage messages, but I don't know what they really mean.


Thanks, that was very helpful.

What I was noticing with the workers is that they all seem to try to grab a chunk at the same time, so if the worker count is high, it slows down the overall process as they are all competing for the chunks.

I was able to set --vfs-read-chunk-size, and that did show up properly in the debug output.

I tried playing around with a lot of the values, but wasn't able to make any material change in the mediainfo time for a file, as it still takes anywhere from 5-10 seconds.

I just don’t understand why it takes a second or two to pull down a 32MB chunk.