Error when trying to resume upload with rclone mount if there is a file with a long name

What is the problem you are having with rclone?

I am getting an error when trying to resume an upload with rclone mount if there is a file with a long name on B2.
Steps to reproduce:

  1. Create a B2 bucket
  2. Create an rclone config for the bucket (I use the S3 interface, but the bug is also reproducible with the B2 interface)
  3. Mount it using the following command:
winpty rclone mount -vv --log-file=rclone-log-1.log b2s3:test-rclone-bucket --bwlimit=1500k --vfs-cache-mode writes --network-mode T:
  4. Copy in a directory containing the following file (file size is ~64MB):
$ find test
test
test/5a9924d117d9d7923cb5ab289ffe2399b42d829e7f1926cd34abaa17c5205bec
  5. Wait a few seconds, then cancel rclone with Ctrl+C
  6. Start it again:
winpty rclone mount -vv --log-file=rclone-log-2.log b2s3:test-rclone-bucket --bwlimit=1500k --vfs-cache-mode writes --network-mode T:
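The test directory in step 4 can be generated with a short shell sketch; the 64-hex-character filename (a SHA-256 of the content) and the exact 64MiB size are assumptions matching the listing above, not commands from the original report:

```shell
# Create a test directory with one ~64MiB file whose name is the
# SHA-256 hex digest of its content (64 characters long).
mkdir -p test
head -c 67108864 /dev/urandom > test/payload.tmp
name=$(sha256sum test/payload.tmp | cut -d' ' -f1)
mv test/payload.tmp "test/$name"
ls -l test
```

Copying the resulting `test` directory onto the mounted drive and interrupting rclone mid-upload should match the reproduction steps.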

Expected result: no errors; the file appears in Windows Explorer (under drive T:).
Actual result: errors in the console and no file in Windows Explorer, although rclone is uploading the file (according to Task Manager). After some time the file appears in the network mount.

2023/04/10 22:52:05 ERROR : test/5a9924d117d9d7923cb5ab289ffe2399b42d829e7f1926cd34abaa17c5205bec: vfs cache: failed to reload item: reload: failed to add virtual dir entry: file does not exist

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: Microsoft Windows 10 Home 21H2 (64 bit)
  • os/kernel: 10.0.19044.2728 Build 19044.2728.2728 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Backblaze b2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount b2s3:test-rclone-bucket --bwlimit=1500k --vfs-cache-mode writes --network-mode T:

The rclone config contents with secrets removed.

[b2s3]
type = s3
provider = Other
access_key_id = 
secret_access_key = 
endpoint = s3.eu-central-003.backblazeb2.com
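Since the report notes the bug is also reproducible via the native B2 backend, an equivalent remote would look roughly like this; this is a sketch with credentials left blank, not the reporter's actual config:

```ini
[b2]
type = b2
account = 
key = 
```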

A log from the command with the -vv flag

This is a bug I've already fixed, but I haven't merged the fix yet.

Can you give this a go please?

v1.63.0-beta.6942.e9758fc6f.fix-vfs-empty-dirs on branch fix-vfs-empty-dirs (uploaded in 15-30 mins)

I have tried the new version and it looks like the bug is fixed. I've updated the gist with the new logs if you are interested in them.
Thank you.

Thank you for testing. I need to write some tests for the fix as it is fairly complicated and then merge it!

I finished this fix off and I've merged it to master now, which means it will be in the latest beta in 15-30 minutes and released in v1.63.

It will be in this beta (and future ones) if you want to give it a test:

v1.63.0-beta.6958.9a9ef040e on branch master (uploaded in 15-30 mins)

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.