Achieving s3fs performance with rclone mount

I’m having difficulty achieving performance comparable to s3fs with rclone mount. I have a large ISO file on Google Cloud Storage that I loop-mount from a directory mounted with rclone mount. s3fs performance with this approach has been stellar, but it only supports Linux and macOS, and I need the full cross-platform support that rclone offers.

I’m wondering if anyone might know particular settings I can try to better match s3fs performance. I’ve been playing around with --vfs-read-chunk-size and --buffer-size to no avail.

I should note I do no writing to GCS. I only read chunks of the ISO, as needed, and the ISO is mounted read-only.

What version and mount command are you using? Can you include a debug log (-vv)?

v1.47.0. The -vv log is quite long because I read files out of the ISO after I mount it. Is it still of interest?

I can provide a section of the rclone and s3fs debug logs if that’s helpful.

Using:
sudo rclone --config=$HOME/rclone.conf mount gcs:my-images/Linux /mnt/images

Then:
sudo mount -o loop,ro,noatime,noiversion,_netdev /mnt/images/myimage.iso /mnt/mounted-iso

What performance are you seeing with that? The debug log might shed some light if something doesn’t look right.

rclone mount appears to perform about 4 times worse than s3fs. I have an application in the ISO that I execute.

Here’s the head of the log from mounting off of rclone:

2019/04/19 17:16:31 DEBUG : rclone: Version "v1.47.0" starting with parameters ["rclone" "--config=/home/travis/rclone.conf" "mount" "-vv" "--read-only" "gcs:my-images/Linux" "/mnt/images"]
2019/04/19 17:16:31 DEBUG : Using RCLONE_CONFIG_PASS password.
2019/04/19 17:16:31 DEBUG : Using config file from "/home/travis/rclone.conf"
2019/04/19 17:16:32 DEBUG : Storage bucket my-images path Linux/: Mounting on "/mnt/images"
2019/04/19 17:16:32 INFO  : Storage bucket my-images path Linux/: poll-interval is not supported by this remote
2019/04/19 17:16:32 DEBUG : Adding path "vfs/forget" to remote control registry
2019/04/19 17:16:32 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/04/19 17:16:32 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/04/19 17:16:32 DEBUG : : Root: 
2019/04/19 17:16:32 DEBUG : : >Root: node=/, err=<nil>
2019/04/19 17:16:50 DEBUG : /: Attr: 
2019/04/19 17:16:50 DEBUG : /: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2019/04/19 17:16:50 DEBUG : /: Lookup: name="myimage.iso"
2019/04/19 17:16:50 DEBUG : /: >Lookup: node=myimage.iso, err=<nil>
2019/04/19 17:16:50 DEBUG : myimage.iso: Attr: 
2019/04/19 17:16:50 DEBUG : myimage.iso: >Attr: a=valid=1s ino=0 size=14872707072 mode=-rw-r--r--, err=<nil>
2019/04/19 17:16:50 DEBUG : myimage.iso: Open: flags=OpenReadOnly
2019/04/19 17:16:50 DEBUG : myimage.iso: Open: flags=O_RDONLY
2019/04/19 17:16:50 DEBUG : myimage.iso: >Open: fd=myimage.iso (r), err=<nil>
2019/04/19 17:16:50 DEBUG : myimage.iso: >Open: fh=&{myimage.iso (r)}, err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Flush: 
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Flush: err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872608768
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 0 length 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ReadFileHandle.seek from 0 to 14872608768 (fs.RangeSeeker)
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 0 to 14872608768 length -1
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872608768 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 14872608768 length 134217728
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872698880
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 14872612864 length 8192 chunkOffset 14872608768 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ReadFileHandle.seek from 14872612864 to 14872698880 (fs.RangeSeeker)
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872621056 to 14872698880 length -1
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872698880 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 14872698880 length 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 14872702976 length 8192 chunkOffset 14872698880 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=16384, offset=0
2019/04/19 17:16:50 DEBUG : myimage.iso: ReadFileHandle.seek from 14872702976 to 0 (fs.RangeSeeker)
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 0 length -1
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 0 length 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Read: read=16384, err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=32768, offset=16384
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 61440 length 65536 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Read: read=32768, err=<nil>
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872702976
2019/04/19 17:16:50 DEBUG : myimage.iso: ReadFileHandle.seek from 49152 to 14872702976 (fs.RangeSeeker)
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 126976 to 14872702976 length -1
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872702976 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 14872702976 length 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872707072 length 8192 chunkOffset 14872702976 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872571904
2019/04/19 17:16:51 DEBUG : myimage.iso: ReadFileHandle.seek from 14872707072 to 14872571904 (fs.RangeSeeker)
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 14872571904 length -1
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.openRange at 14872571904 length 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872576000 length 8192 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872674304
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872584192 length 16384 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872600576 length 32768 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872633344 length 65536 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872698880 length 131072 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ReadFileHandle.seek from 14872576000 to 14872674304 (fs.RangeSeeker)
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 14872674304 length -1
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.openRange at 14872674304 length 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872678400 length 8192 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872686592 length 16384 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872702976 length 32768 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872576000
2019/04/19 17:16:51 DEBUG : myimage.iso: ReadFileHandle.seek from 14872678400 to 14872576000 (fs.RangeSeeker)
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 14872576000 length -1
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872576000 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.openRange at 14872576000 length 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872580096 length 8192 chunkOffset 14872576000 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872588288 length 16384 chunkOffset 14872576000 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872604672 length 32768 chunkOffset 14872576000 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872502272
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872637440 length 65536 chunkOffset 14872576000 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ReadFileHandle.seek from 14872580096 to 14872502272 (fs.RangeSeeker)
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872702976 to 14872502272 length -1
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872502272 chunkSize 134217728

Here’s the head of the log from mounting off of s3fs:

FUSE library version: 2.9.4
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.26
flags=0x001ffffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000019
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 56, pid: 16654
getattr /
   unique: 2, success, outsize: 120
unique: 3, opcode: LOOKUP (1), nodeid: 1, insize: 46, pid: 16654
LOOKUP /Linux
getattr /Linux
   NODEID: 2
   unique: 3, success, outsize: 144
unique: 4, opcode: LOOKUP (1), nodeid: 2, insize: 57, pid: 16654
LOOKUP /Linux/myimage.iso
getattr /Linux/myimage.iso
   NODEID: 3
   unique: 4, success, outsize: 144
unique: 5, opcode: OPEN (14), nodeid: 3, insize: 48, pid: 16654
open flags: 0x8000 /Linux/myimage.iso
   open[6] flags: 0x8000 /Linux/myimage.iso
   unique: 5, success, outsize: 32
unique: 6, opcode: FLUSH (25), nodeid: 3, insize: 64, pid: 16654
unique: 7, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872608768 flags: 0x8000
flush[6]
   read[6] 4096 bytes from 14872608768
   unique: 7, success, outsize: 4112
unique: 8, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872698880 flags: 0x8000
   unique: 6, success, outsize: 16
   read[6] 4096 bytes from 14872698880
   unique: 8, success, outsize: 4112
unique: 9, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 16384 bytes from 0 flags: 0x8000
   read[6] 16384 bytes from 0
   unique: 9, success, outsize: 16400
unique: 10, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 32768 bytes from 16384 flags: 0x8000
   read[6] 32768 bytes from 16384
   unique: 10, success, outsize: 32784
unique: 11, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872702976 flags: 0x8000
   read[6] 4096 bytes from 14872702976
   unique: 11, success, outsize: 4112
unique: 12, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872571904 flags: 0x8000
   read[6] 4096 bytes from 14872571904
   unique: 12, success, outsize: 4112
unique: 13, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872674304 flags: 0x8000
   read[6] 4096 bytes from 14872674304
   unique: 13, success, outsize: 4112
unique: 14, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872576000 flags: 0x8000
   read[6] 4096 bytes from 14872576000
   unique: 14, success, outsize: 4112
unique: 15, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872502272 flags: 0x8000
   read[6] 4096 bytes from 14872502272
   unique: 15, success, outsize: 4112
unique: 16, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872403968 flags: 0x8000
   read[6] 4096 bytes from 14872403968
   unique: 16, success, outsize: 4112
unique: 17, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872358912 flags: 0x8000
   read[6] 4096 bytes from 14872358912
   unique: 17, success, outsize: 4112
unique: 18, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872330240 flags: 0x8000
   read[6] 4096 bytes from 14872330240
   unique: 18, success, outsize: 4112
unique: 19, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872240128 flags: 0x8000
   read[6] 4096 bytes from 14872240128
   unique: 19, success, outsize: 4112
unique: 20, opcode: READ (15), nodeid: 3, insize: 80, pid: 16656
read[6] 4096 bytes from 14872207360 flags: 0x8000

That read pattern is jumping around quite a bit; my own use case is mostly sequential reads for streaming media.

When you say it’s 4 times slower, what does that mean and how are you testing it?

You could try reducing:

--vfs-read-chunk-size 16M

or something smaller, since the reads are jumping around so much; that may help.

When I run an executable contained within the ISO, it accesses a number of the files in the ISO (like shared libraries and such) which explains why it’s jumping around.

When I say 4 times slower, I mean executing the same command contained within the ISO takes 4 times longer with rclone mount than it does with s3fs.

If it’s 4 times longer and it takes 1 ms to complete, we are talking 4ms.

If it takes 15 seconds and it now takes a minute, that’s a big difference.

It’s helpful to give a measure to help understand the scope.

Sorry, 1 minute vs 4 minutes. Strangely various different values of --vfs-read-chunk-size (10M, 16M, 512M) don’t seem to make much of a difference.

Are you using any of the cache with the s3fs?

I don’t think I’m using any of the s3fs caching features. The command I’m using:

sudo s3fs my-images /mnt/images -o passwd_file=/etc/gcs-auth.txt,url=https://storage.googleapis.com,sigv2,nomultipart,allow_other

I’m pretty sure if you don’t specify use_cache it disables local file caching.

Why does rclone take 3 reads to read 16384 bytes when the chunk size is 134217728?

2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: Read: len=16384, offset=0
2019/04/19 17:16:50 DEBUG : myimage.iso: ReadFileHandle.seek from 14872702976 to 0 (fs.RangeSeeker)
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 0 length -1
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.openRange at 0 length 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : myimage.iso: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 134217728
2019/04/19 17:16:50 DEBUG : &{myimage.iso (r)}: >Read: read=16384, err=<nil>

It seems like s3fs uses only 1 read to perform this:

read[6] 16384 bytes from 0 flags: 0x8000
   read[6] 16384 bytes from 0
   unique: 9, success, outsize: 16400

It could be that they’re not logging each read but that’s one thing that sticks out to me in the logs.
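The log above suggests the VFS layer grows its internal read size geometrically (4 KiB, 8 KiB, 16 KiB, ...), so a single 16 KiB FUSE request shows up as several smaller backend reads. Here is a toy model of that pattern; it's a reconstruction of what the log shows, not rclone's actual code:

```python
def internal_reads(request_len, start=4096):
    """Return the internal read lengths issued until request_len is covered,
    doubling each time (the pattern visible in the ChunkedReader log lines)."""
    reads = []
    served = 0
    length = start
    while served < request_len:
        reads.append(length)
        served += length
        length *= 2
    return reads

print(internal_reads(16384))  # the 16 KiB request from the log -> [4096, 8192, 16384]
```

That gives the three reads in question. The fourth line in the log (length 32768) looks like read-ahead past the end of the request, which this model omits.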

This is curious too:

2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: Read: len=4096, offset=14872674304
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872584192 length 16384 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872600576 length 32768 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872633344 length 65536 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872698880 length 131072 chunkOffset 14872571904 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ReadFileHandle.seek from 14872576000 to 14872674304 (fs.RangeSeeker)
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.RangeSeek from 14872707072 to 14872674304 length -1
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at -1 length 4096 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.openRange at 14872674304 length 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872678400 length 8192 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : myimage.iso: ChunkedReader.Read at 14872686592 length 16384 chunkOffset 14872674304 chunkSize 134217728
2019/04/19 17:16:51 DEBUG : &{myimage.iso (r)}: >Read: read=4096, err=<nil>

rclone appears to make a lot of calls to read these 4096 bytes, whereas s3fs appears to do it in one step:

read[6] 4096 bytes from 14872674304 flags: 0x8000
   read[6] 4096 bytes from 14872674304
   unique: 13, success, outsize: 4112
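One thing the log makes fairly clear: every ReadFileHandle.seek is followed by a ChunkedReader.openRange, i.e. each seek appears to open a fresh ranged HTTP request, regardless of --vfs-read-chunk-size. A rough cost model of that pattern (a sketch based on the log, not rclone's implementation):

```python
def count_range_requests(reads):
    """reads: list of (offset, length) FUSE requests. Counts how many ranged
    HTTP requests the toy model opens: one per seek, i.e. whenever a request
    doesn't continue exactly where the previous one left off."""
    requests = 0
    pos = None
    for off, length in reads:
        if off != pos:          # seek: the open HTTP range is abandoned
            requests += 1
        pos = off + length
    return requests

# (offset, length) pairs taken from the head of the rclone log above:
log_reads = [
    (14872608768, 4096),
    (14872698880, 4096),
    (0, 16384),
    (16384, 32768),            # sequential: reuses the open range
    (14872702976, 4096),
    (14872571904, 4096),
    (14872674304, 4096),
]
print(count_range_requests(log_reads))  # -> 6 new range requests
print(count_range_requests([(i * 4096, 4096) for i in range(100)]))  # -> 1
```

If each new range request costs a round trip to GCS, a seek-heavy workload like running an executable out of the ISO would pay that latency over and over, which would also explain why changing the chunk size makes little difference.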

4096 is a FUSE thing, as it’s hardcoded to 4096.

The chunk size is how rclone requests a ‘chunk’ of data from a provider.

Looking through the s3fs code, it seems 4096 is there too.

Can you share the full debug log as well as that might help?

Complete rclone log:
https://drive.google.com/open?id=1DvLlQ3fxnnOm-BAsQN8SUn6yUj-0DiKk

Complete s3fs log:
https://drive.google.com/open?id=1uREzZMx6IBQ_pM0rjyl1CNrxHL-Pe3wH

In both cases:

  1. GCS was mounted
  2. The ISO was mounted off the GCS mount
  3. A program contained within the ISO was executed
  4. The ISO was unmounted
  5. GCS was unmounted

The rclone mount command used:

sudo rclone --config=$HOME/rclone.conf mount -vv --read-only gcs:my-images/Linux /mnt/images &> rclone.log

The s3fs command used:

sudo s3fs my-images /mnt/images -o passwd_file=/etc/gcs-auth.txt,url=https://storage.googleapis.com,sigv2,nomultipart,allow_other -d -d &> s3fs.log

Time required to execute the program on the ISO:

  • rclone = 8m46.076s (not sure why this has grown from ~4m)
  • s3fs = 1m2.061s
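For scale, the two wall-clock times above work out to roughly an 8.5x slowdown (just arithmetic on the figures as posted):

```python
# Ratio of the two run times reported above.
rclone_s = 8 * 60 + 46.076   # 8m46.076s
s3fs_s = 1 * 60 + 2.061      # 1m2.061s
print(f"rclone is {rclone_s / s3fs_s:.1f}x slower")  # -> rclone is 8.5x slower
```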

I found a more s3fs-specific log in syslog:
https://drive.google.com/open?id=1Jgl5EJKgDrxkpueHNFuCUNJtGbIAstb_

The one I posted before is only the FUSE log associated with s3fs.

I also turned on FUSE logging for rclone and collected that as well:
https://drive.google.com/open?id=1f0tUZyQGodth5mVjrCcCXQ1ZH9uMron0

I did a bit more seek testing and got better performance using a smaller chunk size.

Try your test using:

rclone mount gcrypt: /Test -vv --vfs-read-chunk-size 16M

No luck. A 16M chunk size seems to have no noticeable effect on performance, which is weird. It actually took a little longer, but that could be due to variation in network latency.

I ran:

sudo rclone --config=$HOME/rclone.conf mount -vv --vfs-read-chunk-size 16M --read-only gcs:my-images/Linux /mnt/images &> rclone-16M.log

Complete log:
https://drive.google.com/open?id=1hVUiW9mSKqi5Jbc0ZY4ANUBVjS3Glw_y

Looking at a small section near the beginning of the rclone and s3fs logs, going from the read at offset 14872608768 to the read at offset 7552200704 takes rclone quite a bit longer:

offset         rclone     s3fs
14872608768    16:48:17   22:40:17
7552200704     16:48:57   22:40:42
duration       40 sec     25 sec

I don’t use S3 so I can’t really test anything on my side, but the seek performance does seem slower compared to your other app.

@ncw - any thoughts on other things to look at?