Files written in chunks of 131072 bytes

What is the problem you are having with rclone?

It's not really a problem; I'm curious about why files are read and written in 128 KiB chunks in an rclone mount. Note that I'm not having a performance problem, I just wonder if there is any impact at all.
It seems this "block size" sits between the program accessing files in the mount and FUSE itself (?). I wonder if it is possible to change it (maybe by changing how rclone talks to FUSE) and whether changing it would affect performance.
Well, I'm just curious about what this "magic number" is lol

What is your rclone version (output from rclone version)

rclone v1.57.0-DEV

  • os/version: debian 11.0 (64 bit)
  • os/kernel: 5.10.60-v8+ (aarch64)
  • os/type: linux
  • os/arch: arm64
  • go/version: go1.15.9
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

GDrive, but I guess it happens on every remote

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount 'drive:' /home/dietpi/drive/ --use-mmap --checkers 4 --transfers 2 --tpslimit 10  --vfs-cache-mode writes --dir-cache-time=3h --poll-interval 15m --drive-chunk-size 128M --vfs-write-back 1h --vfs-read-chunk-size 8M --vfs-read-chunk-size-limit 1G --buffer-size 32M --vfs-cache-max-age 6h --allow-other  -vv --stats 1h --bwlimit 1M:off

A log from the command with the -vv flag

Just moving a large file into the mount with mv largefile drive/

DEBUG : &{largefile (rw)}: >Write: written=131072, err=<nil>
DEBUG : &{largefile (rw)}: Write: len=131072, offset=7312244736
DEBUG : largefile(0x4002018e80): _writeAt: size=131072, off=7312244736
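
If you want to confirm the chunking is done by the kernel rather than by the program doing the writing, here's a minimal sketch (the test path is just an example, point it anywhere inside your mount): a single large write from the application still shows up in the -vv log as 131072-byte Write calls.

```go
// Sketch: one large write(2) into the mount still reaches rclone as
// 128 KiB FUSE WRITE requests, because the kernel splits it before it
// ever gets to rclone.
package main

import (
	"log"
	"os"
)

func main() {
	// Example path inside the mount; adjust to your own setup.
	f, err := os.Create("/home/dietpi/drive/chunk-test")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	buf := make([]byte, 1<<20)              // a single 1 MiB buffer
	if _, err := f.Write(buf); err != nil { // normally one write(2) syscall
		log.Fatal(err)
	}
	// The rclone -vv log shows eight separate "Write: len=131072"
	// requests for this one call, matching the chunks in the log above.
}
```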

I believe that to make changes like that you need to compile a custom kernel with FUSE changes in the source.

I doubt you'd see much change unless you are pushing massive IO and have the IOPS to support it.


@darthShadow - I feel like this is something you looked at before, if I remember correctly, but I could be wrong - can you validate my info?

128 KiB is the largest size the kernel will use for FUSE transfers by default.
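
For the curious, here's the arithmetic behind that default (assuming the usual 4 KiB page size): the kernel caps each FUSE request at max_pages pages, 32 by default, and since Linux 4.20 a filesystem can negotiate up to 256 pages via the FUSE_MAX_PAGES init flag, which is what raising the chunk size would rely on.

```go
package main

import "fmt"

func main() {
	const pageSize = 4096      // typical page size on amd64/arm64
	const defaultMaxPages = 32 // kernel default per FUSE request
	const maxMaxPages = 256    // ceiling when FUSE_MAX_PAGES is
	                           // negotiated (Linux >= 4.20)

	fmt.Println(pageSize * defaultMaxPages) // 131072 - the 128 KiB in the log
	fmt.Println(pageSize * maxMaxPages)     // 1048576 - 1 MiB if raised
}
```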

@darthShadow had a go at fixing this, but we haven't merged it yet because of various compatibility problems, if I remember rightly.


Yep, both of our mount libraries have issues that prevent raising max-pages (and thus increasing the read & write chunk size): cgofuse uses libfuse 2, which doesn't support max-pages, while development on bazil/fuse looks to be abandoned.

In any case it doesn't matter much for cloud mounts, where latency is by far the limiting factor rather than the local chunk size.
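
A quick back-of-envelope illustrates that (the per-request cost and link speed below are assumptions, not measurements): even for a 1 GiB file, the local FUSE round-trip overhead at 128 KiB chunks comes to well under a second, while the network transfer takes minutes.

```go
package main

import "fmt"

func main() {
	const fileSize = 1 << 30  // 1 GiB file
	const chunk = 128 * 1024  // default FUSE request size
	const perRequest = 10e-6  // assumed ~10 us per local FUSE round trip
	const uplink = 10e6 / 8.0 // assumed 10 Mbit/s upload, in bytes/s

	requests := fileSize / chunk // 8192 requests
	fmt.Printf("local FUSE overhead: ~%.2f s\n", float64(requests)*perRequest)
	fmt.Printf("network transfer:    ~%.0f s\n", float64(fileSize)/uplink)
	// ~0.08 s of local chunking overhead vs ~859 s on the wire: the
	// chunk size is nowhere near the bottleneck for a cloud remote.
}
```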

perfect! thanks everybody :smiley:

