Stat: [Errno 60] Operation timed out

What is the problem you are having with rclone?

I have self-hosted S3 (MinIO). I need to back up the bucket data with Borg Backup. When I try to back up the rclone-mounted bucket, borg cannot read some files or directories because of a timeout:

/mnt/shared/system/MediaPhoto/images/000/121/911/thumb/d2506a638668e16c1eb3519afc7fee07.png: stat: [Errno 60] Operation timed out:
/mnt/shared/system/MediaPhoto/images/000/115/488: stat: [Errno 60] Operation timed out: '/mnt/shared/system/MediaPhoto/images/000/115/488'

Unfortunately, my S3 server sometimes answers slowly (especially when overloaded).

Here is the borg backup bug report:
It seems the problem is somewhere between rclone and FUSE.

Run the command 'rclone version' and share the full output of the command.

rclone installed from FreeBSD ports.

rclone v1.57.0-DEV

  • os/version: freebsd 13.0-release-p7 (64 bit)
  • os/kernel: 13.0-release-p7 (amd64)
  • os/type: freebsd
  • os/arch: amd64
  • go/version: go1.17.3
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)


The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount lv:shared /mnt/shared --read-only --daemon --allow-other --daemon-timeout 0 --vfs-read-chunk-size-limit off --timeout 1h

The rclone config contents with secrets removed.

type = s3
provider = Other
access_key_id = shared
secret_access_key = pass
region = us
endpoint =

Is there a way to make the mounted directory wait longer for the S3 answer instead of returning "Operation timed out"?

Thank you!

hello and welcome to the forum,

so far not seeing how this is a rclone bug?

please test without --daemon, as that can be a problem with rclone mount

and best to use the official rclone client, which can only be installed from

Ok, changed to "Help and Support".

"rclone mount" works fine. All I need is to make it wait longer for the S3 response. I don't think that running rclone without "--daemon" will make it wait longer.

I'll try next time.

took a quick look at the source code, from what i understood - for freebsd
--daemon-timeout defaults to 5 seconds and you have set it to zero

so perhaps either:
--- remove --daemon-timeout, or
--- set --daemon-timeout to a time value larger than the worst-case expected wait time.
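The second option could look like this (a sketch using the remote and mount point from this thread; 10m is an assumed value, not one taken from the thread - pick something larger than your slowest S3 response):

```shell
# Sketch: same mount command as before, but with an explicit FUSE daemon timeout.
# 10m is an assumed value - adjust to your worst-case S3 latency.
rclone mount lv:shared /mnt/shared \
  --read-only \
  --daemon \
  --allow-other \
  --daemon-timeout 10m \
  --timeout 1h
```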

mount_fusefs doesn't support the -timeout= option. I tried to set --daemon-timeout, but rclone cannot mount the bucket with it.
I also tried without the --daemon-timeout option - the same problem.

when you posted, there was a template of questions, almost all of which you have not answered.
that makes it harder to help you.
--- not using the official rclone client.
--- no redacted config file.
--- no rclone commands.
--- no rclone debug logs.

Ok, will use official rclone binary.

In 1st post.

Ok, will do it a little bit later. It's difficult to reproduce the timeout, but it appears every time I try to back up a bucket with 1T of small files. I just need to wait a few hours after the backup starts.

sorry, yes, you did post the config.

so the issue is the initial wait while borg scans?
what is borg scanning for - filename, time stamp, size?

might try to pre-cache the vfs dir cache and then start borg

What is the best options to debug rclone in my case?
-vv --log-file /var/log/rclone.log --debug-fuse - is it ok?

It can time out on any operation when accessing bucket files: scandir(), open(), stat() and others.
I tried find /mnt/shared -type d -exec ls {} \; but in that case borg times out on open(). It's definitely not a borg problem, because it sometimes also times out when I try ls -la on the mounted bucket directory.

i would prime the vfs dir cache.
this is common in the forum with rclone mount,
for example, a library scan for a media server such as plex.

--- add --rc to the mount command.
--- wait for the mount to be live.
--- run rclone rc vfs/refresh recursive=true
--- run borg
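The steps above can be sketched as a script (assumptions: the remote and mount point from this thread; the mount-readiness check is a naive poll, and mountpoint(1) is a Linux tool that may be absent on FreeBSD, so adapt it; the borg repository path and archive name are placeholders):

```shell
#!/bin/sh
# 1. Start the mount with the remote-control API enabled (default: localhost:5572).
rclone mount lv:shared /mnt/shared --rc --read-only --allow-other --dir-cache-time 96h &

# 2. Wait for the mount to be live. Naive poll; replace with a check that works
#    on your OS (mountpoint(1) may not exist on FreeBSD).
until mountpoint -q /mnt/shared; do sleep 1; done

# 3. Recursively pre-load the VFS directory cache so borg's scan hits memory,
#    not the slow S3 server.
rclone rc vfs/refresh recursive=true

# 4. Only then start the backup (repo path and archive name are placeholders).
borg create /path/to/repo::shared-backup /mnt/shared
```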

Thank you, I'll try.

might need to tweak/increase --dir-cache-time

I'm running rclone:

rclone mount lv:shared /mnt/shared --rc --read-only --allow-other --dir-cache-time 96h
A few minutes after running rclone rc vfs/refresh recursive=true I got:

2022/02/14 00:41:47 Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": net/http: timeout awaiting response headers

rclone rc vfs/refresh recursive=true --timeout 10m
change --timeout as needed.

rclone rc vfs/refresh recursive=true _async=true
rclone will return immediately and the command will complete in the background.
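With _async=true, rclone prints a job ID that can be polled via the standard job/status RC endpoint (a sketch; the jobid value below is illustrative - substitute the one rclone actually prints):

```shell
# Start the refresh in the background; rclone prints something like {"jobid": 1}.
rclone rc vfs/refresh recursive=true _async=true

# Poll until the job's "finished" field is true (substitute the printed jobid).
rclone rc job/status jobid=1
```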

The same timeout after 10m:

rclone rc vfs/refresh recursive=true --timeout 10m
2022/02/14 01:08:09 Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": net/http: timeout awaiting response headers

Nothing in the rclone mount log:

2022/02/14 00:57:37 DEBUG : rclone: Version "v1.57.0" starting with parameters ["/root/apps/rclone" "mount" "lv:shared" "/mnt/shared" "-vv" "--log-file" "/var/log/rclone.log" "--debug-fuse" "--rc" "--read-only" "--allow-other" "--dir-cache-time" "96h"]
2022/02/14 00:57:37 NOTICE: Serving remote control on http://localhost:5572/
2022/02/14 00:57:37 DEBUG : Creating backend with remote "lv:shared"
2022/02/14 00:57:37 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/02/14 00:57:37 INFO  : S3 bucket shared: poll-interval is not supported by this remote
2022/02/14 00:57:37 DEBUG : S3 bucket shared: Mounting on "/mnt/shared"
2022/02/14 00:57:37 DEBUG : : Root: 
2022/02/14 00:57:37 DEBUG : : >Root: node=/, err=<nil>
2022/02/14 00:58:09 DEBUG : rc: "vfs/refresh": with parameters map[recursive:true]
2022/02/14 00:58:09 DEBUG : : Reading directory tree

that seems like a super slow s3 server.
as i wrote, need to tweak --timeout

and try to run the command with -vv for debug output and post that.

yes, there is something

DEBUG : rc: "vfs/refresh": with parameters map[recursive:true]
DEBUG : : Reading directory tree

Ok, I'll set --timeout 24h. Will write the result tomorrow.
Thank you.

This is a different problem. It is the RC interface timing out.

It probably has the same underlying cause though - the slow S3 server.

An rclone log from the mount with -vv would be helpful here.