Stat: [Errno 60] Operation timed out

yes, i agree with you,

but i offered the OP two options.
i thought either of them would resolve the RC interface timeout issue, correct?

--- --timeout - not ideal on freebsd, as rclone does not poll the mount.
--- _async=true - not ideal, as you need to use other rc commands to know when priming the vfs dir cache has completed, and completed without error.
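To sketch what the second option looks like in practice (the jobid value below is illustrative; rclone prints the real one as JSON when the job is submitted):

```shell
# Submit the refresh as a background job; rclone returns a jobid immediately
# instead of blocking until the dir cache prime finishes.
rclone rc vfs/refresh recursive=true _async=true

# Poll the job by id (substitute the jobid from the JSON reply above).
# The status reply has "finished" and "error"/"success" fields to check.
rclone rc job/status jobid=1
```

So you trade the long-running HTTP request for the extra step of polling job/status until it reports finished.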

--timeout just relates to the HTTP requests going out, so it is not related to polling; that would be --dir-cache-time for a non-polling remote.

Bit tricky, as this hides the error. I believe if you submit to a mount with the default timeout, it would just fail in the background and you'd have to check the status to know it failed.

I am not sure whether something like this would work, or whether it would default to the timeout of the actual rc server (the mount, in my case) instead.

/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr _async=true --timeout 30m

I think it just submits the command, but I'm not sure if the timeout flag is passed along.

ok, good to know, though i have seen many posts using --timeout, including from @VBB

i guess when the OP posts the results of --timeout 24h, we can confirm.

Yep, just saying I'm not 100% sure about --timeout combined with _async. If you run without _async, it definitely takes the timeout.

Hello guys,

~16 hours ago I started two commands:
rclone mount nv:shared /mnt/shared -vv --log-file /var/log/rclone.log --debug-fuse --rc --read-only --allow-other --dir-cache-time 96h
...and after that:
rclone rc vfs/refresh recursive=true --timeout 24h -vv --log-file /var/log/rclone-rc.log


2022/02/14 01:20:11 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "rc" "vfs/refresh" "recursive=true" "--timeout" "24h" "-vv" "--log-file" "/var/log/rclone-rc.log"]
2022/02/14 12:14:06 DEBUG : 2 go routines active
2022/02/14 12:14:06 Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": EOF

The rclone rc ... command finished a few hours ago, and I started a borg backup from the mounted s3 bucket.
I haven't hit the timeout yet.

/var/log/rclone.log is currently 7G. In case I hit a read timeout, what should I grep for in this log?

P.S.: my s3 itself is pretty fast. It answers slowly because it holds 5 million small files (1T total), and getting metadata for all of them is not a fast task...

I'd look for ERROR and the context around it. Also retry or retries.
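For example, a rough sketch of how I'd search a log that size (the path is the one from the mount command above):

```shell
# Show ERROR lines with 3 lines of context around each hit.
grep -n -C 3 "ERROR" /var/log/rclone.log

# Also look for retry activity; -i catches "Retry"/"retries" variants.
grep -n -i -E "retry|retries" /var/log/rclone.log
```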

I haven't hit the timeout yet. rclone.log is currently 45G. Still waiting for the backup to finish.


Backup finished without errors.
When I ran rclone rc vfs/refresh recursive=true --timeout 24h, the rclone mount performed the caching operation for ~12 hours and consumed ~8G of RAM.
Is there a way to avoid the cache operation, and to stop rclone mount from interrupting read operations with a timeout?

Timeouts are normally caused by some kind of networking problem, firewalling, or possibly rate limiting at the provider. Can you check any of those?

None of these are possible, because I mount a minio s3 bucket that is installed locally.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.