ERROR: Statfs failed: bucket or container name is needed in remote

What is the problem you are having with rclone?

I'm getting the following error in the log while mounting via SMB:

ERROR : smb://user@nas.local:445/: Statfs failed: bucket or container name is needed in remote

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.0
- os/version: Microsoft Windows 10 Pro for Workstations 21H2 (64 bit)
- os/kernel: 10.0.19044.3086 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.20.5
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Model name: DS415play
Current DSM version: DSM 7.2.0
This is the latest version available based on your current DSM configurations.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount NAS:/ N: --cache-dir D:/Temp --vfs-cache-mode off

The rclone config contents with secrets removed.

[NAS]
type = smb
host = nas.local
user = user
pass = ...
hide_special_share = false

A log from the command with the -vv flag

>> rclone mount NAS:/ N: --cache-dir D:/Temp --vfs-cache-mode off
2023/07/05 15:02:21 DEBUG : rclone: Version "v1.63.0" starting with parameters ["rclone" "mount" "NAS:/" "N:" "--cache-dir" "D:/Temp" "--vfs-cache-mode" "off" "-vv"]
2023/07/05 15:02:21 DEBUG : Creating backend with remote "NAS:/"
2023/07/05 15:02:21 DEBUG : Using config file from "C:\\Users\\user\\AppData\\Roaming\\rclone\\rclone.conf"
2023/07/05 15:02:21 DEBUG : fs cache: renaming cache item "NAS:/" to be canonical "test:"
2023/07/05 15:02:21 INFO  : smb://user@nas.local:445/: poll-interval is not supported by this remote
2023/07/05 15:02:21 DEBUG : Network mode mounting is disabled
2023/07/05 15:02:21 DEBUG : Mounting on "N:" ("NAS")
2023/07/05 15:02:21 DEBUG : smb://user@nas.local:445/: Mounting with options: ["-o" "attr_timeout=1" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "-o" "volname=NAS"]
2023/07/05 15:02:21 DEBUG : smb://user@nas.local:445/: Init:
2023/07/05 15:02:21 DEBUG : smb://user@nas.local:445/: >Init:
2023/07/05 15:02:21 DEBUG : /: Statfs:
2023/07/05 15:02:21 ERROR : smb://user@nas.local:445/: Statfs failed: bucket or container name is needed in remote
2023/07/05 15:02:21 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:8796093022207 Bfree:8796093022207 Bavail:8796093022207 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Readlink:
2023/07/05 15:02:21 DEBUG : /: >Readlink: linkPath="", errc=-40
2023/07/05 15:02:21 DEBUG : /: Getxattr: name="non-existant-a11ec902d22f4ec49003af15282d3b00"
2023/07/05 15:02:21 DEBUG : /: >Getxattr: errc=-40, value=""
The service rclone has been started.
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Opendir:
2023/07/05 15:02:21 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:21 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:21 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:21 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Statfs:
2023/07/05 15:02:21 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2023/07/05 15:02:21 DEBUG : /: Opendir:
2023/07/05 15:02:21 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:21 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:21 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:21 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:21 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:21 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:21 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:21 DEBUG : /: Opendir:
2023/07/05 15:02:21 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:21 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:21 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:21 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:21 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Opendir:
2023/07/05 15:02:21 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:21 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:21 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:21 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:21 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:21 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:21 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:21 DEBUG : /: Opendir:
2023/07/05 15:02:21 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:21 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:21 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:21 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:21 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:21 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:21 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:24 DEBUG : /autorun.inf: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /autorun.inf: >Getattr: errc=-2
2023/07/05 15:02:25 DEBUG : /autorun.inf: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /autorun.inf: >Getattr: errc=-2
2023/07/05 15:02:25 DEBUG : /autorun.inf: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /autorun.inf: >Getattr: errc=-2
2023/07/05 15:02:25 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:25 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:25 DEBUG : /: Opendir:
2023/07/05 15:02:25 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:25 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:25 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:25 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:25 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:25 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:25 DEBUG : /: >Releasedir: errc=0
2023/07/05 15:02:25 DEBUG : /AutoRun.inf: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /AutoRun.inf: >Getattr: errc=-2
2023/07/05 15:02:25 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:25 DEBUG : /: Getattr: fh=0xFFFFFFFFFFFFFFFF
2023/07/05 15:02:25 DEBUG : /: >Getattr: errc=0
2023/07/05 15:02:25 DEBUG : /: Opendir:
2023/07/05 15:02:25 DEBUG : /: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2023/07/05 15:02:25 DEBUG : /: >OpenFile: fd=/ (r), err=<nil>
2023/07/05 15:02:25 DEBUG : /: >Opendir: errc=0, fh=0x0
2023/07/05 15:02:25 DEBUG : /: Getpath: Getpath fh=0
2023/07/05 15:02:25 DEBUG : /: >Getpath: errc=0, normalisedPath=""
2023/07/05 15:02:25 DEBUG : /: Statfs:
2023/07/05 15:02:25 DEBUG : /: >Statfs: stat={Bsize:4096 Frsize:4096 Blocks:274877906944 Bfree:274877906944 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Favail:0 Fsid:0 Flag:0 Namemax:255}, errc=0
2023/07/05 15:02:25 DEBUG : /: Releasedir: fh=0x0
2023/07/05 15:02:25 DEBUG : /: >Releasedir: errc=0

When the cache is off (which is the default BTW, so you do not need --vfs-cache-mode) you do not need --cache-dir either.

Try:

rclone mount NAS:sharename N:

You can see your share names by running:

rclone lsd NAS:
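
For example, assuming the listing showed a share called media (a made-up name, just for illustration), the mount command would be:

rclone mount NAS:media N: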

This is because you've mounted the root of the SMB remote, which contains all the individual shares, not an individual share.

This is fine, but rclone won't be able to read how much disk space they have used.

I don't think the ERROR logs will affect anything - do they?

Yep, it doesn't seem to affect anything, but why can't this be a warning-level log message, so that I wouldn't have needed to post here to ask about it...

I don't want to mount sub-folders; I just want to mount the whole remote...

So you will always see this error in the log. Does it prevent your mount from working?

It seems not, but I hope the team can change the log level for this to WARN rather than ERROR, so that users won't be confused.

I think there is no reason why this should be returning an error actually. It should just be returning empty info.
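
To illustrate the idea only - this is a rough sketch and not the actual rclone code, and the Usage type and about() helper are invented for the example - the root case would return empty usage instead of an error:

package main

import "fmt"

// Usage mirrors the kind of information an "about" call reports; the
// fields are pointers so "unknown" can be expressed as nil.
type Usage struct {
    Total *int64
    Used  *int64
    Free  *int64
}

// about is a hypothetical helper: share is empty when the root of the
// SMB remote is mounted rather than an individual share.
func about(share string) (*Usage, error) {
    if share == "" {
        // Rather than surfacing an ERROR, report empty (unknown) usage so
        // the mount's Statfs call can quietly fall back to its defaults.
        return &Usage{}, nil
    }
    // ... query the share for its real quota here (omitted) ...
    total := int64(1 << 40) // placeholder numbers for the sketch
    used := int64(1 << 30)
    free := total - used
    return &Usage{Total: &total, Used: &used, Free: &free}, nil
}

func main() {
    u, err := about("") // root mount: no share name selected
    fmt.Printf("usage=%+v err=%v\n", u, err)
}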

Try this

v1.64.0-beta.7130.3f588d9bb.fix-smb-about on branch fix-smb-about (uploaded in 15-30 mins)


Thanks, and I agree. Also, since mounting a sub-folder can show the disk space info, would it be very hard for a root mount to iterate over all the shares and add up the totals?

I've merged this to master now which means it will be in the latest beta in 15-30 minutes and released in v1.63.1

It would be possible to accumulate the free space of all the sub shares, but I think in general this is a bad idea, since it would be doing a lot of requests to read each one, and if one was down it would hang the process etc.
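
Purely to illustrate that trade-off (none of this is rclone code; aboutAll, the fake lookup and the share names are invented for the sketch), the accumulation approach would look roughly like this:

package main

import (
    "errors"
    "fmt"
)

// aboutAll sums free/used space across every share. Each share costs one
// network round trip, and a single unreachable share fails (or hangs) the
// whole call - which is why doing this inside Statfs was rejected.
func aboutAll(shares []string, aboutShare func(string) (free, used int64, err error)) (int64, int64, error) {
    var totalFree, totalUsed int64
    for _, s := range shares {
        free, used, err := aboutShare(s)
        if err != nil {
            return 0, 0, fmt.Errorf("share %q: %w", s, err)
        }
        totalFree += free
        totalUsed += used
    }
    return totalFree, totalUsed, nil
}

func main() {
    // Fake per-share lookup standing in for a real SMB quota query.
    fake := func(share string) (int64, int64, error) {
        if share == "offline" {
            return 0, 0, errors.New("host unreachable")
        }
        return 10 << 30, 2 << 30, nil
    }
    free, used, err := aboutAll([]string{"home", "media", "offline"}, fake)
    fmt.Println(free, used, err)
}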


Great work, thanks!

It would be possible to accumulate the free space of all the sub shares, but I think in general this is a bad idea, since it would be doing a lot of requests to read each one, and if one was down it would hang the process etc.

Accumulating them all won't be necessary for now; I agree with your opinion.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.