Rclone + supervisor

Previously I was running rclone in a "screen" or "byobu" session on my Linux server, which worked very well since I could SSH in and out without any issues and always read the logs in the console if necessary (using the -v flag).

The major downside is that rclone would not start automatically if the system was rebooted.

So I decided to start rclone as a service using "supervisor" to make it more stable.

While trying to automate the start and stop, I noticed a couple of problems that are actually not connected to supervisor, but rather to how I use rclone.

The major problem is that rclone usually doesn't quit 'cleanly' for me. Suppose I run rclone mount manually in my shell. I then quit rclone with Ctrl+C and want to restart it, but it won't let me because the unmounting didn't work. At this point the folder is already unusable, but it still blocks remounting:

whiteloader@whiteloader:~$ rclone mount crypt2: /mnt/shared/fusemounts/SB/ --allow-other -v --fast-list --buffer-size=16M  --vfs-cache-mode full --dir-cache-time 172h --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 512M --vfs-cache-max-age 5000h --vfs-cache-poll-interval 10m --umask 0 --vfs-write-back 5m --cache-dir /mnt/shared/media/cache/ --vfs-cache-max-size 1T
2021/01/24 21:53:30 Fatal error: Can not open: /mnt/shared/fusemounts/SB/: open /mnt/shared/fusemounts/SB/: transport endpoint is not connected

Well, it's no problem when I do it manually: I simply run fusermount -uz /mnt/shared/fusemounts/SB and then I can mount again. However, supervisor can't do that and gives up after 3 attempts to start rclone.


So my questions are:

Is there a command line option for rclone that forces the remount (basically running fusermount -uz automatically if mounting fails)? If not, could I work around this somehow by cleverly using supervisor or scripting (rough sketch of what I mean below)?
Alternatively, is it possible to instruct rclone to force-unmount the mount point when it quits?
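
For the scripting workaround, I could imagine a small wrapper script that supervisor starts instead of rclone directly (untested sketch; the paths and remote name are just from my setup, and the flags are trimmed):

#!/bin/bash
# Hypothetical wrapper: clean up a stale FUSE mount before mounting.
MOUNTPOINT=/mnt/shared/fusemounts/SB/

# Lazy-unmount in case a previous rclone died without unmounting.
# Harmless if nothing is mounted, so ignore failures.
fusermount -uz "$MOUNTPOINT" || true

# exec so supervisor ends up supervising rclone itself, not the wrapper shell.
exec rclone mount crypt2: "$MOUNTPOINT" --allow-other -v --vfs-cache-mode full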


Below is a copy of my supervisor config for your reference:

[program:rcloneSB]
command=rclone mount crypt2: /mnt/shared/fusemounts/suptest/ --allow-other -v --fast-list --buffer-size=16M  --vfs-cache-mode full --dir-cache-time 172h --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 512M --vfs-cache-max-age 5000h --vfs-cache-poll-interval 10m --umask 0 --vfs-write-back 5m --cache-dir /mnt/shared/media/cache/ --vfs-cache-max-size 1T
autostart=true
user=whiteloader
stdout_logfile=/home/whiteloader/rclonelogs/sb.log
stdout_logfile_maxbytes=100MB
stdout_logfile_backups=20
stderr_logfile=/home/whiteloader/rclonelogs/sb_err.log
stderr_logfile_maxbytes=100MB
stderr_logfile_backups=10
environment=HOME="/home/whiteloader",USER="whiteloader"

Another weird thing is that all my rclone output in supervisor goes to stderr instead of stdout. Not sure if this is an rclone or a supervisor problem though.
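
As far as I can tell, rclone writes its log output to stderr by default, so that part is probably expected rather than a bug. If I just want a single combined log file, supervisor's redirect_stderr option should merge the streams, something like:

[program:rcloneSB]
; merge rclone's stderr (where it logs by default) into the stdout log
redirect_stderr=true
stdout_logfile=/home/whiteloader/rclonelogs/sb.log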

Thank you very much in advance!

What is the problem you are having with rclone?

rclone is not exiting cleanly and then cannot be restarted automatically.

What is your rclone version (output from rclone version)

rclone v1.53.4

  • os/arch: linux/amd64
  • go version: go1.15.6

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux Ubuntu 18.04 LTS

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = 1234.apps.googleusercontent.com
client_secret = 1234
scope = drive
root_folder_id =
service_account_file =
token = {"access_token":KF$}
team_drive = 0AKUk9PVA
upload_cutoff = 64M
chunk_size = 64M

[crypt2]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true

Use systemd .service file?
Example: https://github.com/animosity22/homescripts/blob/master/systemd/rclone.service
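
The relevant parts look roughly like this (heavily trimmed sketch with your mount point substituted; see the full file at the link for all the flags):

[Unit]
Description=rclone mount
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount crypt2: /mnt/shared/fusemounts/SB/ --allow-other --vfs-cache-mode full
# lazy-unmount the FUSE mount on stop so restarts don't hit
# "transport endpoint is not connected"
ExecStop=/bin/fusermount -uz /mnt/shared/fusemounts/SB/
Restart=on-failure

[Install]
WantedBy=multi-user.target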

Exactly, that ExecStop=/bin/fusermount -uz /GD is the kind of thing I am looking for. However, it appears that supervisord, which is what's installed on my system, doesn't have that kind of functionality.

Actually, it would be very useful to have this kind of functionality built into rclone.

I could imagine something like this:

rclone mount gcrypt: /GD --allow-other ... --force-mount

The new command line option --force-mount would try to unmount the mount point first to make sure nothing is blocking it (i.e. run fusermount -uz /GD before actually mounting anything).

What do the devs think about it?

When unmounting on Linux, rclone just calls fusermount, as you have a FUSE mount.

You should be able to find it in your environment.

Like mine is here:

felix@gemini:~$ which fusermount
/usr/bin/fusermount

I think we are misunderstanding each other. I am proposing a new command line option for rclone called "--force-mount".

What it would do is make sure the mount succeeds by automatically running "fusermount -uz /mountpoint" first.

A few useful scenarios were explained above.

fusermount doesn't fix your first scenario, as in that scenario rclone was killed and a process still has access to the mount point.

The goal is actually tying things together properly with services, so that if a FUSE mount needs to be stopped, the processes that go with it are stopped too.

Actually, it does. If I run fusermount -uz /mnt/shared/fusemounts/SB I can remount without a problem afterwards. Sorry if I didn't make that clear in my first post.

If rclone did that by itself (when necessary), it would make creating a service that much easier.

It doesn't, as the issue isn't the unmount; it's stuck IO on the mount point.

I've spent a lot of time on systemd tuning and various things, and have worked out what the error is and how to resolve it.
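
For example, checking what still has files open under the mount point usually shows the culprit (assuming fuser and/or lsof are installed):

# list processes with open files on the filesystem mounted at this path
fuser -vm /mnt/shared/fusemounts/SB/
# or, equivalently, with lsof
lsof +f -- /mnt/shared/fusemounts/SB/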

Interesting, thank you for the screenshot.
It's funny, for me it always works after fusermount -uz.
Maybe that's because the offending process gives up soon after the mount disappears.

What do you do in this case?

It just happens to work because the IO that was hitting the mount goes away; you can run fusermount many times and it will return without an issue.

All my systemd services that use my fuse mount tie back in and require rclone to be running to be up. If rclone goes down, all the dependent services go down.
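
Roughly, each dependent unit declares the mount service as a hard dependency, something like this (hypothetical unit names, just to illustrate the idea):

[Unit]
Description=some service that needs the rclone mount
# if rclone.service stops or dies, this unit is stopped too
BindsTo=rclone.service
After=rclone.service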

Thank you, I see your setup is very well thought out.

I still believe that the option I mentioned would help me and probably a lot of others with a simpler setup.

That's why I would propose including it in an upcoming release of rclone.

God bless you, thank you!

@ncw: do you think this would be a nice feature to bake into rclone?

I can see why you would want it, but I'd be worried it would cause data loss if used carelessly - eg you accidentally duplicated the mount points in your script.

I see your point; that's definitely something to keep in mind. On the other hand, since it is only activated by a command line option, I don't think it would bring much harm to the average user.

Anyway, I still appreciate your feedback on this. Should it ever be implemented, I'll be especially happy :slightly_smiling_face:

It's an interesting idea. I am going to use rclone as a volume provider with a container orchestrator. It will mount/unmount/remount in response to frequent container life-cycle events. The current level of rclone mount stability rather worries me. I don't know what approach will be accepted by rclone; I would definitely consider your approach among others on my fork.

What are the stability issues? I’ve never had my mount die or crash since I’ve been using rclone.

I don't know what your "usage pattern" for the feature is. I can only guess that your rclone mounts are activated when your box is booted, and rarely change afterwards. A similar setup works fine for me on my development desktop.

The specifics of an orchestrated environment are that containers can migrate between servers in response to spiking loads, and mounts will follow them: getting torn down on some servers and brought up on others. My recent experiments detected hangups during reconfigurations.

After I'm done with my current assignments I'm going to set up a testbed to troubleshoot orchestrated mounts. I have a related GitHub issue assigned and will report back or propose a solution later. I will have more to say once I approach it again.

Ah, that makes sense. That sounds more like start and stop behavior rather than the stability of the mounts. FUSE mounts do get picky starting and stopping, much like NFS mounts.

