I didn't have the debug log activated, so I have no information about what caused this, but it got me thinking: is there a recommended way of getting notified, or even of triggering an automatic remount?
It seems that others use bash scripts that check for the existence of certain files, but a permanently running cron job polling a file path doesn't seem like a serious solution to me. I have the feeling there must be something better.
Does the rclone mount functionality have some kind of "keepalive" setting or similar?
Or is there any other good advice regarding this issue?
As mentioned earlier, I didn't have the debug log activated when this issue occurred and I'm not using systemd to start the mount.
The system also has more than enough memory available.
The suggestion was to turn them on in case it's problematic.
How are you starting and stopping the mount?
That's unknown, as you haven't shared any details; I was just sharing an example of a crash that can happen. If you want to share more details to help find the root cause, I'm happy to help.
I know that, but I didn't know that mounting S3 storage with the current stable version of rclone could be considered problematic.
I turned the debug log on after this happened, however.
The mounts are started with the rclone mount command and then disowned, as an unprivileged user.
To unmount, I kill the process and unmount with fusermount afterwards.
I would have mentioned it if I had suspected that a lack of resources could be the cause. But again: the amount of available resources should be perfectly fine.
The current command I'm using to mount is:
rclone mount $path $mountpoint --log-file /some/path/rclone.log --log-level DEBUG
Again, I added the log parameters after this issue popped up.
The configuration is IMHO pretty standard:
[some_name_unencrypted]
type = s3
provider = Other
env_auth = false
access_key_id = SOME_KEY
secret_access_key = SOME_ACCESS_KEY
region = SOME_REGION
endpoint = https://some.endpoint.com
acl = private
[some_name]
type = crypt
remote = some_name_unencrypted:/foo
filename_encryption = standard
directory_name_encryption = true
password = SOME_PASSWORD
password2 = SOME_PASSWORD
and rclone should be up to date as well:
rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.13.7
What other details are missing (apart from the debug log, which I currently can't provide because I wasn't expecting any problems)?
It sounds like someone killed the process on you. Is there a reason you can't use something more robust like systemd?
You can also run with --daemon rather than disowning, as that will work better.
The way a FUSE mount generally works is that it can only be unmounted if no processes have the mount point open, so that error usually appears when the process is killed while someone/something is still using the mount point.
What did the log say, if anything at all? If it just stopped working, that's more in line with the process being killed.
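If you do switch to systemd, a minimal unit could look roughly like this (a sketch only; the unit name, remote name, and paths are placeholders, not taken from this thread). Restart=on-failure is what gives you the automatic remount behaviour asked about earlier:

```ini
# /etc/systemd/system/rclone-mount.service  (hypothetical path)
[Unit]
Description=rclone FUSE mount
After=network-online.target
Wants=network-online.target

[Service]
# Run in the foreground so systemd tracks the process; no --daemon or disown needed.
ExecStart=/usr/bin/rclone mount some_name:/ /mnt/some_mountpoint \
    --log-file /some/path/rclone.log --log-level DEBUG
# Release the FUSE mount cleanly on stop.
ExecStop=/bin/fusermount -u /mnt/some_mountpoint
# Remount automatically if the process dies.
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

With this in place, systemd both restarts the mount after a crash and records why the process exited in the journal.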
So I'm currently migrating to a systemd user service, and I saw that you configured everything with arguments except the path of the config file. Is there any particular reason why you didn't use the --config parameter?
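For reference, the user service I'm migrating to looks roughly like this (a sketch; all paths are placeholders), with the config file passed explicitly via --config. %h is systemd's specifier for the user's home directory:

```ini
# ~/.config/systemd/user/rclone-mount.service  (placeholder path)
[Unit]
Description=rclone mount (user service)

[Service]
ExecStart=/usr/bin/rclone mount some_name:/ %h/mnt/some_mountpoint \
    --config %h/.config/rclone/rclone.conf \
    --log-file %h/rclone.log --log-level DEBUG
ExecStop=/bin/fusermount -u %h/mnt/some_mountpoint
Restart=on-failure

[Install]
WantedBy=default.target
```

It can then be managed without root via systemctl --user start rclone-mount.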