Panic in hasher when mounting with vfs-cache and unsynced data in the cache

Problem

Panic in hasher when mounting with vfs-cache and unsynced data

Version

rclone v1.57.0-DEV

  • os/version: gentoo 2.8 (64 bit)
  • os/kernel: 5.14.14-gentoo-x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.7
  • go/linking: dynamic
  • go/tags: none

Cloud storage

Blomp (using swift backend)

The command

rclone -vv  --transfers=8   --cache-dir=/storage/hot/rclone-cache/ --vfs-cache-mode full  --config ~/.config/rclone/blomp.conf mount blomp-arch-hash:/final-backup/ remote

The rclone config

[blomp-remote]
type = swift
user = CENSORED
key = CENSORED
auth = CENSORED
tenant = storage
auth_version = 2
leave_parts_on_error = true
chunk_size = 1P

[blomp-alias]
type = alias
remote = blomp-remote:CENSORED

[blomp-arch]
type = compress
remote = blomp-alias:arch
level = 9
ram_cache_limit = 130Mi

[blomp-arch-hash]
type = hasher
remote = blomp-arch:
hashes = md5,sha1,sha256

The log

2022/03/11 23:20:49 DEBUG : rclone: Version "v1.57.0-DEV" starting with parameters ["rclone" "-vv" "--transfers=8" "--cache-dir=/storage/hot/rclone-cache/" "--vfs-cache-mode" "full" "--config" "/root/.config/rclone/blomp.conf" "mount" "blomp-arch-hash:/final-backup/" "remote"]
2022/03/11 23:20:49 DEBUG : Creating backend with remote "blomp-arch-hash:/final-backup/"
2022/03/11 23:20:49 DEBUG : Using config file from "/root/.config/rclone/blomp.conf"
2022/03/11 23:20:49 INFO  : Hasher is EXPERIMENTAL!
2022/03/11 23:20:49 DEBUG : Creating backend with remote "blomp-arch:/final-backup"
2022/03/11 23:20:49 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup.json"
2022/03/11 23:20:51 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup"
2022/03/11 23:20:52 DEBUG : hasher::blomp-arch-hash:/final-backup/: Groups by usage: cached [md5, sha1, sha256], passed [md5], auto [md5, sha1, sha256], slow [], supported [md5, sha1, sha256]
2022/03/11 23:20:52 DEBUG : blomp-arch~hasher.bolt: Opened for reading in 56.866µs
2022/03/11 23:20:52 INFO  : hasher::blomp-arch-hash:/final-backup/: poll-interval is not supported by this remote
2022/03/11 23:20:52 DEBUG : vfs cache: root is "/storage/hot/rclone-cache"
2022/03/11 23:20:52 DEBUG : vfs cache: data root is "/storage/hot/rclone-cache/vfs/blomp-arch-hash/final-backup"
2022/03/11 23:20:52 DEBUG : vfs cache: metadata root is "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash/final-backup"
2022/03/11 23:20:52 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfs/blomp-arch-hash//final-backup/"
2022/03/11 23:20:52 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash//final-backup/"

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x169c2c0]

goroutine 1 [running]:
github.com/rclone/rclone/backend/hasher.(*Object).Fs(0x1ee17aa)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/backend/hasher/hasher.go:435
github.com/rclone/rclone/fs.Fingerprint({0x277a808, 0xc0000520e8}, {0x7ff41b699958, 0x0}, 0x0)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/fs/fingerprint.go:24 +0x7d
github.com/rclone/rclone/vfs/vfscache.(*Item)._checkObject(0xc000ad0620, {0x27aec48, 0x0})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/item.go:787 +0xe9
github.com/rclone/rclone/vfs/vfscache.(*Item).open(0xc000ad0620, {0x27aec48, 0x0})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/item.go:519 +0x130
github.com/rclone/rclone/vfs/vfscache.(*Item).Open(0xc000ad0620, {0x27aec48, 0x0})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/item.go:490 +0xab
github.com/rclone/rclone/vfs/vfscache.(*Item).reload(0xc000ad0620, {0x277a7d0, 0xc000aaa6c0})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/item.go:742 +0xdc
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload.func1({0xc000adab80, 0x3a}, {0x2788bd0, 0xc00073a5b0}, {0xc000af305b, 0x17})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/cache.go:506 +0x9c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk.func1({0xc000af3020, 0x52}, {0x2788bd0, 0xc00073a5b0}, {0x0, 0x0}) 
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/cache.go:489 +0xac
path/filepath.walk({0xc000af3020, 0x52}, {0x2788bd0, 0xc00073a5b0}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:418 +0x125
path/filepath.walk({0xc00096b090, 0x4f}, {0x2788bd0, 0xc000945860}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00096a640, 0x4c}, {0x2788bd0, 0xc000944f70}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00096a550, 0x49}, {0x2788bd0, 0xc000944ea0}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00096a460, 0x46}, {0x2788bd0, 0xc000944dd0}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00096a370, 0x43}, {0x2788bd0, 0xc000944d00}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000adaf00, 0x40}, {0x2788bd0, 0xc000944c30}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000adae40, 0x3d}, {0x2788bd0, 0xc000944b60}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000adab80, 0x3a}, {0x2788bd0, 0xc000944a90}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.Walk({0xc000adab80, 0x3a}, 0xc000c07928)
        /usr/lib/go/src/path/filepath/path.go:505 +0x6c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk(0x30, {0xc000adab80, 0xc000c079e8}, 0xc9a674)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/cache.go:474 +0x5c
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload(0xc0004326c0, {0x277a7d0, 0xc000aaa6c0})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/cache.go:500 +0x10f
github.com/rclone/rclone/vfs/vfscache.New({0x277a7d0, 0xc000aaa6c0}, {0x27aebd8, 0xc000760d80}, 0xc000300978, 0xc000ae6720)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfscache/cache.go:131 +0x56d
github.com/rclone/rclone/vfs.(*VFS).SetCacheMode(0xc000300960, 0x3)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfs.go:269 +0x134
github.com/rclone/rclone/vfs.New({0x27aebd8, 0xc000760d80}, 0xc0000d3280)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/vfs/vfs.go:234 +0x5d9
github.com/rclone/rclone/cmd/mountlib.(*MountPoint).Mount(0xc0000d31e0)
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/cmd/mountlib/mount.go:247 +0xc7
github.com/rclone/rclone/cmd/mountlib.NewMountCommand.func1(0xc00058f180, {0xc0008cbea0, 0xa, 0x10})
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/cmd/mountlib/mount.go:176 +0x278
github.com/spf13/cobra.(*Command).execute(0xc00058f180, {0xc0008dae00, 0xa, 0x10})
        /var/tmp/portage/net-misc/rclone-1.57.0/homedir/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8 
github.com/spf13/cobra.(*Command).ExecuteC(0x34c5740)
        /var/tmp/portage/net-misc/rclone-1.57.0/homedir/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc 
github.com/spf13/cobra.(*Command).Execute(...)
        /var/tmp/portage/net-misc/rclone-1.57.0/homedir/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:902
github.com/rclone/rclone/cmd.Main()
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/cmd/cmd.go:555 +0x76
main.main()
        /var/tmp/portage/net-misc/rclone-1.57.0/work/rclone-1.57.0/rclone.go:14 +0x17

hello and welcome to the forum,

i see that you are using an unofficial, custom-compiled development version.

before we call this a bug, it is best to test using the official client:
https://rclone.org/downloads/#script-download-and-install

also, i could be wrong, but about chunk_size = 1P: as per the docs, the largest valid size is 5 GiB.

and that is a very complex setup:

rclone mount -> hasher(experimental) -> compress(experimental) -> alias -> swift

The binary is built by Gentoo Portage and is the official Gentoo package.
Gentoo Linux is a source-based distribution, so everything is compiled locally.

Anyway, this is the output of the official release:

# ~/rclone-official -vv  --transfers=8   --cache-dir=/storage/hot/rclone-cache/ --vfs-cache-mode full  --config ~/.config/rclone/blomp.conf mount blomp-arch-hash:/final-backup/ remote
2022/03/11 23:58:52 DEBUG : rclone: Version "v1.57.0" starting with parameters ["/root/rclone-official" "-vv" "--transfers=8" "--cache-dir=/storage/hot/rclone-cache/" "--vfs-cache-mode" "full" "--config" "/root/.config/rclone/blomp.conf" "mount" "blomp-arch-hash:/final-backup/" "remote"]
2022/03/11 23:58:52 DEBUG : Creating backend with remote "blomp-arch-hash:/final-backup/"
2022/03/11 23:58:52 DEBUG : Using config file from "/root/.config/rclone/blomp.conf"
2022/03/11 23:58:52 INFO  : Hasher is EXPERIMENTAL!
2022/03/11 23:58:52 DEBUG : Creating backend with remote "blomp-arch:/final-backup"
2022/03/11 23:58:52 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup.json"
2022/03/11 23:58:58 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup"
2022/03/11 23:59:00 DEBUG : hasher::blomp-arch-hash:/final-backup/: Groups by usage: cached [md5, sha1, sha256], passed [md5], auto [md5, sha1, sha256], slow [], supported [md5, sha1, sha256]
2022/03/11 23:59:00 DEBUG : blomp-arch~hasher.bolt: Opened for reading in 53.282µs
2022/03/11 23:59:00 INFO  : hasher::blomp-arch-hash:/final-backup/: poll-interval is not supported by this remote
2022/03/11 23:59:00 DEBUG : vfs cache: root is "/storage/hot/rclone-cache"
2022/03/11 23:59:00 DEBUG : vfs cache: data root is "/storage/hot/rclone-cache/vfs/blomp-arch-hash/final-backup"
2022/03/11 23:59:00 DEBUG : vfs cache: metadata root is "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash/final-backup"
2022/03/11 23:59:00 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfs/blomp-arch-hash//final-backup/"
2022/03/11 23:59:00 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash//final-backup/"
2022/03/11 23:59:00 DEBUG : blomp-arch~hasher.bolt: released
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xdbb780]

goroutine 1 [running]:
github.com/rclone/rclone/backend/hasher.(*Object).Fs(0x15f26aa)
        github.com/rclone/rclone/backend/hasher/hasher.go:435
github.com/rclone/rclone/fs.Fingerprint({0x1e6bfe8, 0xc000128008}, {0x7f93e082f690, 0x0}, 0x0)
        github.com/rclone/rclone/fs/fingerprint.go:24 +0x7d
github.com/rclone/rclone/vfs/vfscache.(*Item)._checkObject(0xc00071c460, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:787 +0xe9
github.com/rclone/rclone/vfs/vfscache.(*Item).open(0xc00071c460, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:519 +0x130
github.com/rclone/rclone/vfs/vfscache.(*Item).Open(0xc00071c460, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:490 +0xab
github.com/rclone/rclone/vfs/vfscache.(*Item).reload(0xc00071c460, {0x1e6bfb0, 0xc000802180})
        github.com/rclone/rclone/vfs/vfscache/item.go:742 +0xdc
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload.func1({0xc00014b900, 0x3a}, {0x1e79bb0, 0xc0004a7110}, {0xc0009b4edb, 0x17})
        github.com/rclone/rclone/vfs/vfscache/cache.go:506 +0x9c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk.func1({0xc0009b4ea0, 0x52}, {0x1e79bb0, 0xc0004a7110}, {0x0, 0x0})
        github.com/rclone/rclone/vfs/vfscache/cache.go:489 +0xac
path/filepath.walk({0xc0009b4ea0, 0x52}, {0x1e79bb0, 0xc0004a7110}, 0xc000b07928)
        path/filepath/path.go:418 +0x125
path/filepath.walk({0xc000718f00, 0x4f}, {0x1e79bb0, 0xc0004a6750}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0007184b0, 0x4c}, {0x1e79bb0, 0xc000336b60}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0007183c0, 0x49}, {0x1e79bb0, 0xc0003121a0}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0007182d0, 0x46}, {0x1e79bb0, 0xc0002e1ee0}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0007181e0, 0x43}, {0x1e79bb0, 0xc0002e1a00}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00014bf00, 0x40}, {0x1e79bb0, 0xc0003851e0}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00014bd40, 0x3d}, {0x1e79bb0, 0xc000384b60}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00014b900, 0x3a}, {0x1e79bb0, 0xc000384a90}, 0xc000b07928)
        path/filepath/path.go:442 +0x28f
path/filepath.Walk({0xc00014b900, 0x3a}, 0xc000b07928)
        path/filepath/path.go:505 +0x6c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk(0x30, {0xc00014b900, 0xc000b079e8}, 0x40ce34)
        github.com/rclone/rclone/vfs/vfscache/cache.go:474 +0x5c
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload(0xc0005ee540, {0x1e6bfb0, 0xc000802180})
        github.com/rclone/rclone/vfs/vfscache/cache.go:500 +0x10f
github.com/rclone/rclone/vfs/vfscache.New({0x1e6bfb0, 0xc000802180}, {0x1e9f858, 0xc0005de090}, 0xc0002f6888, 0xc000208730)
        github.com/rclone/rclone/vfs/vfscache/cache.go:131 +0x56d
github.com/rclone/rclone/vfs.(*VFS).SetCacheMode(0xc0002f6870, 0x3)
        github.com/rclone/rclone/vfs/vfs.go:269 +0x134
github.com/rclone/rclone/vfs.New({0x1e9f858, 0xc0005de090}, 0xc0005da200)
        github.com/rclone/rclone/vfs/vfs.go:234 +0x5d9
github.com/rclone/rclone/cmd/mountlib.(*MountPoint).Mount(0xc0005da160)
        github.com/rclone/rclone/cmd/mountlib/mount.go:247 +0xc7
github.com/rclone/rclone/cmd/mountlib.NewMountCommand.func1(0xc0003d1400, {0xc00048fcb0, 0x9, 0xe})
        github.com/rclone/rclone/cmd/mountlib/mount.go:176 +0x278
github.com/spf13/cobra.(*Command).execute(0xc0003d1400, {0xc00042e1c0, 0x9, 0xe})
        github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0x2bdede0)
        github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.2.1/command.go:902
github.com/rclone/rclone/cmd.Main()
        github.com/rclone/rclone/cmd/cmd.go:555 +0x76
main.main()
        github.com/rclone/rclone/rclone.go:14 +0x17

The data is split into 64 MiB files, so it doesn't really matter.
The chunk size is what Blomp suggested; apparently the backend doesn't really do chunking and relies on rclone for that.

The data is a 4 TiB file system image split into 64 MiB files. compress is there because the image is half empty; hasher is there because the Blomp documentation suggests they don't do hashing properly.

It looks more or less the same with the official release, as seen in my comment above.

ok, now that you are using the official client,

--- need to change chunk_size to a valid value.
--- post the updated config file.
--- post a new debug log.

then we will have something to work with.

I suppose removing chunk_size from the config is enough.

# cat ~/.config/rclone/blomp-new.conf
[blomp-remote]
type = swift
user = CENSORED
key = CENSORED
auth = CENSORED
tenant = storage
auth_version = 2
leave_parts_on_error = true

[blomp-alias]
type = alias
remote = blomp-remote:CENSORED

[blomp-arch]
type = compress
remote = blomp-alias:arch
level = 9
ram_cache_limit = 130Mi

[blomp-arch-hash]
type = hasher
remote = blomp-arch:
hashes = md5,sha1,sha256

# ~/rclone-official -vv  --transfers=8   --cache-dir=/storage/hot/rclone-cache/ --vfs-cache-mode full  --config ~/.config/rclone/blomp-new.conf mount blomp-arch-hash:/final-backup/ remote
2022/03/12 00:39:07 DEBUG : rclone: Version "v1.57.0" starting with parameters ["/root/rclone-official" "-vv" "--transfers=8" "--cache-dir=/storage/hot/rclone-cache/" "--vfs-cache-mode" "full" "--config" "/root/.config/rclone/blomp-new.conf" "mount" "blomp-arch-hash:/final-backup/" "remote"]
2022/03/12 00:39:07 DEBUG : Creating backend with remote "blomp-arch-hash:/final-backup/"
2022/03/12 00:39:07 DEBUG : Using config file from "/root/.config/rclone/blomp-new.conf"
2022/03/12 00:39:07 INFO  : Hasher is EXPERIMENTAL!
2022/03/12 00:39:07 DEBUG : Creating backend with remote "blomp-arch:/final-backup"
2022/03/12 00:39:07 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup.json"
2022/03/12 00:39:08 DEBUG : Creating backend with remote "blomp-remote:CENSORED/arch/final-backup"
2022/03/12 00:39:10 DEBUG : hasher::blomp-arch-hash:/final-backup/: Groups by usage: cached [md5, sha1, sha256], passed [md5], auto [md5, sha1, sha256], slow [], supported [md5, sha1, sha256]
2022/03/12 00:39:10 DEBUG : blomp-arch~hasher.bolt: Opened for reading in 47.009µs
2022/03/12 00:39:10 INFO  : hasher::blomp-arch-hash:/final-backup/: poll-interval is not supported by this remote
2022/03/12 00:39:10 DEBUG : vfs cache: root is "/storage/hot/rclone-cache"
2022/03/12 00:39:10 DEBUG : vfs cache: data root is "/storage/hot/rclone-cache/vfs/blomp-arch-hash/final-backup"
2022/03/12 00:39:10 DEBUG : vfs cache: metadata root is "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash/final-backup"
2022/03/12 00:39:10 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfs/blomp-arch-hash//final-backup/"
2022/03/12 00:39:10 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash//final-backup/"
2022/03/12 00:39:10 DEBUG : blomp-arch~hasher.bolt: released
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0xdbb780]

goroutine 1 [running]:
github.com/rclone/rclone/backend/hasher.(*Object).Fs(0x15f26aa)
        github.com/rclone/rclone/backend/hasher/hasher.go:435
github.com/rclone/rclone/fs.Fingerprint({0x1e6bfe8, 0xc0000500b8}, {0x7f2c7ecde258, 0x0}, 0x0)
        github.com/rclone/rclone/fs/fingerprint.go:24 +0x7d
github.com/rclone/rclone/vfs/vfscache.(*Item)._checkObject(0xc000a30540, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:787 +0xe9
github.com/rclone/rclone/vfs/vfscache.(*Item).open(0xc000a30540, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:519 +0x130
github.com/rclone/rclone/vfs/vfscache.(*Item).Open(0xc000a30540, {0x1e9f8c8, 0x0})
        github.com/rclone/rclone/vfs/vfscache/item.go:490 +0xab
github.com/rclone/rclone/vfs/vfscache.(*Item).reload(0xc000a30540, {0x1e6bfb0, 0xc000930780})
        github.com/rclone/rclone/vfs/vfscache/item.go:742 +0xdc
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload.func1({0xc00024cd00, 0x3a}, {0x1e79bb0, 0xc000243790}, {0xc0003e9a1b, 0x17})
        github.com/rclone/rclone/vfs/vfscache/cache.go:506 +0x9c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk.func1({0xc0003e99e0, 0x52}, {0x1e79bb0, 0xc000243790}, {0x0, 0x0})
        github.com/rclone/rclone/vfs/vfscache/cache.go:489 +0xac
path/filepath.walk({0xc0003e99e0, 0x52}, {0x1e79bb0, 0xc000243790}, 0xc000d07928)
        path/filepath/path.go:418 +0x125
path/filepath.walk({0xc000223540, 0x4f}, {0x1e79bb0, 0xc0002428f0}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000222780, 0x4c}, {0x1e79bb0, 0xc0001151e0}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000222690, 0x49}, {0x1e79bb0, 0xc000115040}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000222550, 0x46}, {0x1e79bb0, 0xc000934820}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000222460, 0x43}, {0x1e79bb0, 0xc0008afc70}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00024d080, 0x40}, {0x1e79bb0, 0xc00073fd40}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00024cfc0, 0x3d}, {0x1e79bb0, 0xc000739040}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00024cd00, 0x3a}, {0x1e79bb0, 0xc000723c70}, 0xc000d07928)
        path/filepath/path.go:442 +0x28f
path/filepath.Walk({0xc00024cd00, 0x3a}, 0xc000d07928)
        path/filepath/path.go:505 +0x6c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk(0x30, {0xc00024cd00, 0xc000d079e8}, 0x40ce34)
        github.com/rclone/rclone/vfs/vfscache/cache.go:474 +0x5c
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload(0xc000756840, {0x1e6bfb0, 0xc000930780})
        github.com/rclone/rclone/vfs/vfscache/cache.go:500 +0x10f
github.com/rclone/rclone/vfs/vfscache.New({0x1e6bfb0, 0xc000930780}, {0x1e9f858, 0xc000a342d0}, 0xc0002f2978, 0xc0001b0d30)
        github.com/rclone/rclone/vfs/vfscache/cache.go:131 +0x56d
github.com/rclone/rclone/vfs.(*VFS).SetCacheMode(0xc0002f2960, 0x3)
        github.com/rclone/rclone/vfs/vfs.go:269 +0x134
github.com/rclone/rclone/vfs.New({0x1e9f858, 0xc000a342d0}, 0xc000a26780)
        github.com/rclone/rclone/vfs/vfs.go:234 +0x5d9
github.com/rclone/rclone/cmd/mountlib.(*MountPoint).Mount(0xc000a266e0)
        github.com/rclone/rclone/cmd/mountlib/mount.go:247 +0xc7
github.com/rclone/rclone/cmd/mountlib.NewMountCommand.func1(0xc0002f4000, {0xc0008be120, 0x9, 0xe})
        github.com/rclone/rclone/cmd/mountlib/mount.go:176 +0x278
github.com/spf13/cobra.(*Command).execute(0xc0002f4000, {0xc0003660e0, 0x9, 0xe})
        github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0x2bdede0)
        github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
        github.com/spf13/cobra@v1.2.1/command.go:902
github.com/rclone/rclone/cmd.Main()
        github.com/rclone/rclone/cmd/cmd.go:555 +0x76
main.main()
        github.com/rclone/rclone/rclone.go:14 +0x17

ok, that all looks good. thanks

with that, one of the forum's experts will take a look, hopefully sooner rather than later.

in the meantime, if it were me, i would simplify the setup to narrow down the bug:
i would remove the experimental compress remote.
perhaps change rclone mount to rclone ls/copy.

Unfortunately, neither of those is something I'm willing to do.
First, I have 340 GiB in the vfs cache and I don't want to risk damaging it.
Second, I have already uploaded ~300 GiB, and removing compression would require me to reupload all of it.

Also, removing compression removes the need for rclone, since the software provided by the backend is perfectly capable of uploading files without compression.

And here is the fix for my case.
Looking at the code, there are probably more cases where the result of wrapObject is not the right thing to return directly.

diff --git a/backend/hasher/hasher.go b/backend/hasher/hasher.go
--- a/backend/hasher/hasher.go	(revision 169990e270b2977c39bd6ecd8a2921cf30a6d2b7)
+++ b/backend/hasher/hasher.go	(date 1647067536954)
@@ -410,6 +410,9 @@
 // NewObject finds the Object at remote.
 func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
 	o, err := f.Fs.NewObject(ctx, remote)
+	if err != nil {
+		return nil, err
+	}
 	return f.wrapObject(o, err), err
 }
 

Hmm, that is one of the classic Go gotchas: a nil concrete pointer stored in an interface value makes the interface itself non-nil, so the caller's nil check passes and the later method call dereferences nil.
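
A minimal, self-contained sketch of that gotcha (the type names here are illustrative, not rclone's actual ones):

package main

import "fmt"

// Fser stands in for the fs.Object interface that the VFS cache checks.
type Fser interface{ Fs() string }

// Object stands in for hasher.Object.
type Object struct{ fsName string }

// Fs dereferences the receiver, just like hasher.(*Object).Fs in the
// stack traces above.
func (o *Object) Fs() string { return o.fsName }

// newObject mimics the buggy NewObject: it wraps the inner object
// unconditionally, so on the error path the caller receives a non-nil
// interface value holding a nil *Object.
func newObject(found bool) Fser {
	var o *Object // stays nil unless the lookup succeeded
	if found {
		o = &Object{fsName: "hasher"}
	}
	return o // a typed nil: the interface itself is NOT nil
}

func main() {
	obj := newObject(false)
	fmt.Println(obj == nil) // false; the usual nil check does not catch it
	obj.Fs()                // panic: invalid memory address or nil pointer dereference
}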

I've had a go at fixing this properly by changing the interface of wrapObject, which will fix all the cases.
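
For illustration, the reworked helper in backend/hasher/hasher.go could look something like this; a sketch under the assumption that wrapObject takes over the error handling, not necessarily the exact code on the branch:

// wrapObject now accepts and returns the error, so every caller
// handles the failure path uniformly (sketch; the real hasher.Object
// construction may differ).
func (f *Fs) wrapObject(o fs.Object, err error) (fs.Object, error) {
	if err != nil {
		return nil, err
	}
	if o == nil {
		return nil, fs.ErrorObjectNotFound
	}
	return &Object{Object: o, f: f}, nil
}

// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
	return f.wrapObject(f.Fs.NewObject(ctx, remote))
}

With wrapObject owning the error check, call sites simply forward its result and can no longer leak a typed-nil object.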

Can you give it a go please?

v1.58.0-beta.6029.83ea146ea.fix-hasher-crash on branch fix-hasher-crash (uploaded in 15-30 mins)

I can't download the build because the server closes the connection at 0 bytes.

However, I've built it myself, and it still crashes:

# ~/rclone-hasher-fix-beta -vv  --transfers=8   --cache-dir=/storage/hot/rclone-cache/ --vfs-cache-mode full  --config ~/.config/rclone/blomp.conf mount blomp-arch-hash:/not-so-finalbackup/ remote
2022/03/12 17:49:24 DEBUG : rclone: Version "v1.58.0-DEV" starting with parameters ["/root/rclone-hasher-fix-beta" "-vv" "--transfers=8" "--cache-dir=/storage/hot/rclone-cache/" "--vfs-cache-mode" "full" "--config" "/root/.config/rclone/blomp.conf" "mount" "blomp-arch-hash:/not-so-finalbackup/" "remote"]
2022/03/12 17:49:24 DEBUG : Creating backend with remote "blomp-arch-hash:/not-so-finalbackup/"
2022/03/12 17:49:24 DEBUG : Using config file from "/root/.config/rclone/blomp.conf"
2022/03/12 17:49:24 INFO  : Hasher is EXPERIMENTAL!
2022/03/12 17:49:24 DEBUG : Creating backend with remote "blomp-arch:/not-so-finalbackup"
2022/03/12 17:49:24 DEBUG : Creating backend with remote "blomp-remote:[REDACTED]/arch/not-so-finalbackup.json"
2022/03/12 17:49:25 DEBUG : Creating backend with remote "blomp-remote:[REDACTED]/arch/not-so-finalbackup"
2022/03/12 17:49:26 DEBUG : hasher::blomp-arch-hash:/not-so-finalbackup/: Groups by usage: cached [md5, sha1, sha256], passed [md5], auto [md5, sha1, sha256], slow [], supported [md5, sha1, sha256]
2022/03/12 17:49:26 DEBUG : blomp-arch~hasher.bolt: Opened for reading in 57.265µs
2022/03/12 17:49:26 INFO  : hasher::blomp-arch-hash:/not-so-finalbackup/: poll-interval is not supported by this remote
2022/03/12 17:49:26 DEBUG : vfs cache: root is "/storage/hot/rclone-cache"
2022/03/12 17:49:26 DEBUG : vfs cache: data root is "/storage/hot/rclone-cache/vfs/blomp-arch-hash/not-so-finalbackup"
2022/03/12 17:49:26 DEBUG : vfs cache: metadata root is "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash/not-so-finalbackup"
2022/03/12 17:49:26 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfs/blomp-arch-hash//not-so-finalbackup/"
2022/03/12 17:49:26 DEBUG : Creating backend with remote "/storage/hot/rclone-cache/vfsMeta/blomp-arch-hash//not-so-finalbackup/"
2022/03/12 17:49:26 DEBUG : blomp-arch~hasher.bolt: released
2022/03/12 17:49:26 DEBUG : 00/00/00/00/00/00/00/31: pass md5
2022/03/12 17:49:26 DEBUG : 00/00/00/00/00/00/00/31: vfs cache: checking remote fingerprint "67108864,2022-03-09 07:13:15.285139768 +0000 UTC,f5251d796baa0c242e0a4c596da0ddc6" against cached fingerprint ""
2022/03/12 17:49:26 DEBUG : 00/00/00/00/00/00/00/31: vfs cache: truncate to size=67108864
2022/03/12 17:49:26 DEBUG : vfs cache: looking for range={Pos:0 Size:67108864} in [{Pos:0 Size:67108864}] - present true
2022/03/12 17:49:26 DEBUG : 00/00/00/00/00/00/00/31: vfs cache: setting modification time to 2022-03-09 07:13:15.285139768 +0000 UTC
2022/03/12 17:49:26 INFO  : 00/00/00/00/00/00/00/31: vfs cache: queuing for upload in 5s
2022/03/12 17:49:27 DEBUG : 00/00/00/00/00/00/00: Added virtual directory entry vAddFile: "31"
2022/03/12 17:49:28 DEBUG : 00/00/00/00/00/00/00/39: pass md5
2022/03/12 17:49:28 DEBUG : 00/00/00/00/00/00/00/39: vfs cache: checking remote fingerprint "67108864,2022-03-09 07:13:15.285139768 +0000 UTC,3bf4b0274b052aec3dec9e37c0a64011" against cached fingerprint ""
2022/03/12 17:49:28 DEBUG : 00/00/00/00/00/00/00/39: vfs cache: truncate to size=67108864
2022/03/12 17:49:28 DEBUG : vfs cache: looking for range={Pos:0 Size:67108864} in [{Pos:0 Size:67108864}] - present true
2022/03/12 17:49:28 DEBUG : 00/00/00/00/00/00/00/39: vfs cache: setting modification time to 2022-03-09 07:13:15.285139768 +0000 UTC
2022/03/12 17:49:28 INFO  : 00/00/00/00/00/00/00/39: vfs cache: queuing for upload in 5s
2022/03/12 17:49:28 DEBUG : 00/00/00/00/00/00/00: Added virtual directory entry vAddFile: "39"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x16afa00]

goroutine 1 [running]:
github.com/rclone/rclone/backend/hasher.(*Object).Fs(0xc000bed9e0)
        /home/[REDACTED]/[REDACTED]/rclone/backend/hasher/hasher.go:442
github.com/rclone/rclone/fs.Fingerprint({0x2836aa8, 0xc000122008}, {0x7fb44c498998, 0x0}, 0x0)
        /home/[REDACTED]/[REDACTED]/rclone/fs/fingerprint.go:24 +0x7d
github.com/rclone/rclone/vfs/vfscache.(*Item)._checkObject(0xc000be3180, {0x286c8e8, 0x0})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/item.go:787 +0xf6
github.com/rclone/rclone/vfs/vfscache.(*Item).open(0xc000be3180, {0x286c8e8, 0x0})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/item.go:519 +0x1b8
github.com/rclone/rclone/vfs/vfscache.(*Item).Open(0xc000be3180, {0x286c8e8, 0x0})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/item.go:490 +0xab
github.com/rclone/rclone/vfs/vfscache.(*Item).reload(0xc000be3180, {0x2836a70, 0xc000a0e700})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/item.go:742 +0xdc
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload.func1({0xc000c67140, 0x40}, {0x2844b20, 0xc000e02680}, {0xc0009b7421, 0x17})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/cache.go:529 +0x9c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk.func1({0xc0009b73e0, 0x58}, {0x2844b20, 0xc000e02680}, {0x0, 0x0})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/cache.go:512 +0xec
path/filepath.walk({0xc0009b73e0, 0x58}, {0x2844b20, 0xc000e02680}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:418 +0x125
path/filepath.walk({0xc00039fec0, 0x55}, {0x2844b20, 0xc000d80410}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc00039fda0, 0x52}, {0x2844b20, 0xc000d80340}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0009ef0e0, 0x4f}, {0x2844b20, 0xc000d80270}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0009eeff0, 0x4c}, {0x2844b20, 0xc000d801a0}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0009eef00, 0x49}, {0x2844b20, 0xc000d800d0}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0009eee10, 0x46}, {0x2844b20, 0xc000d80000}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc0009eed20, 0x43}, {0x2844b20, 0xc000b09ee0}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.walk({0xc000c67140, 0x40}, {0x2844b20, 0xc000b09e10}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:442 +0x28f
path/filepath.Walk({0xc000c67140, 0x40}, 0xc0009af910)
        /usr/lib/go/src/path/filepath/path.go:505 +0x6c
github.com/rclone/rclone/vfs/vfscache.(*Cache).walk(0x30, {0xc000c67140, 0xc0009af9d0}, 0xca4674)
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/cache.go:497 +0x5c
github.com/rclone/rclone/vfs/vfscache.(*Cache).reload(0xc0009d6600, {0x2836a70, 0xc000a0e700})
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/cache.go:523 +0x10f
github.com/rclone/rclone/vfs/vfscache.New({0x2836a70, 0xc000a0e700}, {0x286c878, 0xc00080a360}, 0xc0004ae978, 0xc0009bea30)
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfscache/cache.go:132 +0x56d
github.com/rclone/rclone/vfs.(*VFS).SetCacheMode(0xc0004ae960, 0x3)
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfs.go:296 +0x134
github.com/rclone/rclone/vfs.New({0x286c878, 0xc00080a360}, 0xc0009de230)
        /home/[REDACTED]/[REDACTED]/rclone/vfs/vfs.go:235 +0x5d9
github.com/rclone/rclone/cmd/mountlib.(*MountPoint).Mount(0xc0009de180)
        /home/[REDACTED]/[REDACTED]/rclone/cmd/mountlib/mount.go:250 +0x11d
github.com/rclone/rclone/cmd/mountlib.NewMountCommand.func1(0xc000266000, {0xc0006ccfc0, 0x9, 0xe})
        /home/[REDACTED]/[REDACTED]/rclone/cmd/mountlib/mount.go:178 +0x278
github.com/spf13/cobra.(*Command).execute(0xc000266000, {0xc000440460, 0x9, 0xe})
        /home/[REDACTED]/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0x35c9880)
        /home/[REDACTED]/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
        /home/[REDACTED]/go/pkg/mod/github.com/spf13/cobra@v1.2.1/command.go:902
github.com/rclone/rclone/cmd.Main()
        /home/[REDACTED]/[REDACTED]/rclone/cmd/cmd.go:553 +0x76
main.main()
        /home/[REDACTED]/[REDACTED]/rclone/rclone.go:14 +0x17

:frowning:

I tried to reproduce your problem but I haven't been able to yet.

Can you give step-by-step instructions to reproduce, please?

nothing special:
mount an empty target.
start copying several hundred GiB.
stop the copy at around ~300 GiB; at this point we usually have around 200 GiB in the vfs cache.
unmount it.

try to mount it again; it crashes.

Thanks for that.

I managed to replicate the problem by killing the mount while it was uploading a 100 MB file. It then failed on restart, as you suggested.

I can see what was wrong with my original fix now; please give this a go:

v1.58.0-beta.6029.97a9f5af5.fix-hasher-crash on branch fix-hasher-crash (uploaded in 15-30 mins)

Thank you

Sorry for the delay,

I can confirm that it works without problems now.


Thanks for testing.

I've merged this to master now, which means it will be in the latest beta in 15-30 minutes and released in v1.58.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.