What is the recommended way of detecting disconnected rclone mounts?

I got bitten by the "endpoint is not connected" issue.

I didn't have the debug log activated, so I have no information about what caused this, but it got me thinking: is there a recommended way of getting notified, or even of triggering an automatic remount?

It seems that others are using bash scripts that check for the existence of known files, but a permanently running cronjob polling a file path doesn't seem like a serious solution to me. I have the feeling that there must be something better.
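
Just to illustrate, the kind of script I mean looks roughly like this (the remote name, mount point, and marker file are made up):

#!/bin/bash
# remount if a known marker file on the mount can no longer be read
if ! [ -f /mnt/some_mountpoint/.mountcheck ]; then
    fusermount -uz /mnt/some_mountpoint
    rclone mount some_name: /mnt/some_mountpoint &
fi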

Does the rclone mount functionality have some kind of "keepalive" setting or something similar?
Or is there any other good advice regarding this issue?

You would want to check the debug logs / logs for the mount and figure out why the issue is happening.

For systemd, you have many options to restart things. For example, I use:

Restart=on-failure

in my systemd unit, though my mount doesn't disconnect.
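
For reference, a minimal sketch of such a unit (the remote, mount point, and binary path are placeholders, not an exact copy of my setup):

[Unit]
Description=rclone mount
After=network-online.target

[Service]
# rclone mount stays in the foreground, so the default Type=simple is fine
ExecStart=/usr/bin/rclone mount some_name: /mnt/some_mountpoint --log-file /some/path/rclone.log --log-level DEBUG
ExecStop=/bin/fusermount -u /mnt/some_mountpoint
Restart=on-failure

[Install]
WantedBy=multi-user.target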

In general, rclone really doesn't fail unless it crashes due to running out of memory, or someone has a bad systemd setup/config that keeps restarting it.

I'm more for fixing the root cause than for monitoring on top of the problem and patching around it.

As mentioned earlier, I didn't have the debug log activated when this issue occurred and I'm not using systemd to start the mount.
The system also has more than enough memory available.

The suggestion was to turn them on if it's being problematic.

How are you starting and stopping the mount?

That's unknown, as you haven't shared any details; I was just sharing an example of a crash that can happen. If you want to share more details to help find a root cause, I'm happy to help.

I know that, but I didn't know that mounting S3 storage with the current stable version of rclone could be considered problematic. :wink:
I turned the debug log on after this happened, however.

The mounts are started with the rclone mount command and disown, as an unprivileged user.
For unmounting, I kill the process and unmount with fusermount afterwards.
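
Roughly like this (the PID lookup and mount point are placeholders, not my exact commands):

# stop the rclone process for this mount, then release the FUSE mount
kill $(pgrep -f "rclone mount")
fusermount -u /mnt/some_mountpoint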

I would have mentioned it if I had suspected that a lack of resources could be the cause. But again: the amount of resources should be perfectly fine.

The current command I'm using to mount is:

rclone mount $path $mountpoint --log-file /some/path/rclone.log --log-level DEBUG

Again, I added the log parameters after this issue popped up.

The configuration is IMHO pretty standard:

[some_name_unencrypted]
type = s3
provider = Other
env_auth = false
access_key_id = SOME_KEY
secret_access_key = SOME_ACCESS_KEY
region = SOME_REGION
endpoint = https://some.endpoint.com
acl = private

[some_name]
type = crypt
remote = some_name_unencrypted:/foo
filename_encryption = standard
directory_name_encryption = true
password = SOME_PASSWORD
password2 = SOME_PASSWORD

and rclone should be up to date as well:

rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.13.7

What other details are missing (apart from the debug log, which I currently can't provide because I wasn't expecting any problems)?

That is a good level of detail, but that "endpoint is not connected" error is really generic. It just means the mount stopped somehow; we won't know why unless we look in the logs!

I would say it should work pretty well :slight_smile:

You might want to consider adding --no-modtime to your mount command to save the HEAD transactions rclone would otherwise make to look up the modified dates on objects.
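
For example, building on the command above (still with placeholder paths):

rclone mount $path $mountpoint --no-modtime --log-file /some/path/rclone.log --log-level DEBUG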

It sounds like someone killed the process on you. Is there a reason you can't use something more robust like systemd?

You can also run with --daemon rather than disowning, as that will work better.
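
For example (remote and mount point are placeholders; keep a log file, since with --daemon there is no terminal output to watch):

rclone mount some_name: /mnt/some_mountpoint --daemon --log-file /some/path/rclone.log --log-level DEBUG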

The way a FUSE mount generally works is that it can only unmount cleanly if no processes have the mount point open, so that error usually appears when the rclone process is killed while someone/something is still using the mount point.
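
If you end up in that state, you can check what is still holding the mount point open and, as a last resort, lazy-unmount it (the mount point is a placeholder):

# list processes that still have files open on the mount
fuser -vm /mnt/some_mountpoint
# detach the mount once nothing important is using it
fusermount -uz /mnt/some_mountpoint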

What did the log say, if anything at all? If it just stopped working, that's more in line with the process being killed.

Awesome, thank you for this advice!

Until now I thought that there was no reason to do so.

Awesome!

I didn't use --log-file before, which means I don't have a log file at all, if I understand this correctly.

Well, it did write the output to stdout, but you couldn't see it since it was a disowned process :slight_smile:

A log file, though, would give an idea of what potentially happened, so it's good to use.

So I'm currently migrating to a systemd user service and I saw that you configured everything with arguments except the path of the config file. Is there any particular reason why you didn't use the --config parameter?

I use this instead, setting it for my systemd service and in my bashrc, rather than remembering to set it on every command I run:

In my systemd service:

Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf

and in my bashrc:

# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG
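
With that exported, any rclone command picks up the same config file; you can also set it for a single invocation, e.g.:

RCLONE_CONFIG=/opt/rclone/rclone.conf rclone listremotes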
