Rclone mount w/ systemd when user logs in, unmounts @ logout

For those familiar with systemd: It seems I could write a systemd config/script which would get invoked upon the 1st instance of a user's login (and not on the 2nd, 3rd, ... nth additional sessions) - which would do a mount that would last until the last process of that user logs out or terminates. In between, the user may have other login sessions, or even incoming ssh sessions, that start, operate, and themselves exit/log out. Only when the last process owned by that user terminates would the rclone unmount happen. (The system continues to run, allowing other users continued access to their ${HOME} and files. The next time the indicated user logs in, systemd would again start the rclone mount for that user's session tree.)

If I understand correctly, one could create a systemd control file like user-UID.slice (where UID is my user id, say 7445) acting as the umbrella for all of user UID's logins/sessions. With this, a Google Drive mount within the user's ${HOME} dir tree would:

  1. become available the 1st time the user UID logs in or has a session started, or remotes in via ssh, ...
  2. remain available as long as the user UID has at least one process running, and
  3. an rclone unmount (likely via a fusermount -u command) would occur as the last of user UID's processes exits, then
  4. Repeat from 1. the next time user UID starts some processes

Do I have that about right?

I'm afraid I don't know enough about systemd to help here, but I'd love to see the result if you do achieve it!

Yes, of course you can write a oneshot service for systemd. There are plenty of ways to autostart rclone mounts at login and (not so necessary) unmount at shutdown. Don't forget about the rclone --daemon option, which, in combination with a (slightly more necessary) mountpoint check, is really all you need.

I'm certainly aware of several ways to start rclone from a ~/.bash_login (or similar) script, and of the differences between invocation cases in which the ~/.bashrc script is or is not processed. The reason to ask about a systemd unit is so that no matter:

  1. how I get a process running with my user ID (UID),
  2. regardless of the pattern of further logins - ssh, GUI, su, etc.,
  3. which shell they are running (if any), and
  4. regardless of the pattern/order of logouts/exits of any of these processes

(i.e. as long as there's some process with my UID running, regardless of how it got started) the Google Drive mount point should be present in my ${HOME} directory tree, visible to any of those processes during its lifetime, and traversable by any shell or program.

The only thing I know of to accomplish this would be some careful crafting of running-or-not breadcrumbs and conditional invocations of rclone - and even that would miss occasional use of csh, zsh, other command-line shells, or bare process creation.

To avoid all kinds of problems or complications, I'd like something running the whole time my systemd user slice is running. The systemd unit tree structure supposedly supports this - I can write a systemd unit for systemctl --user usage, but I'm unsure how to "glue it" into the user-7445.slice or user@7445.service unit or other appropriate descendant.

You definitely know more than me about systemd; I haven't written any complex scripts for it. This definitely sounds like the best use case for something like it -- doesn't the system use it to mount nfs drives? Maybe someone else can help if you're looking for assistance, but to me it sounds like you have it down.

Thanks for pointing out the --daemon option; however, the documentation (i.e. man rclone) does not describe very well what that option actually does. Is there a more complete description somewhere else?

Is it simply that rclone sets itself up to be disowned, not use stdout nor stderr, then fork()s* (with the parent process exiting) to run as an independent process? (If so, I guess I should use the systemd [Service] unit option Type=forking - right?)

*Or whatever equivalent rclone uses on Windows® to do a similar action...

You also referred to "a mountpoint check". Being a newbie to rclone, I'm not sure what that means. Is it something I should do in the ExecStop= clause before the fusermount -u (i.e. unmount)? I could understand something that first flushes or syncs data to the Cloud.

So, looking at the codebase for rclone, I found what I had suspected rclone is doing when it uses --daemon:

You can either run mount in foreground mode or background(daemon) mode. Mount runs in
foreground mode by default, use the --daemon flag to specify background mode.
Background mode is only supported on Linux and OSX, you can only run mount in
foreground mode on Windows.

This means it is equivalent to starting a mount normally, hitting Ctrl-Z to suspend it, and typing disown. Basically the rclone process detaches from the shell as its parent and attaches to init (which I believe is the default parent for detached processes).
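The detach described above can be sketched in shell (a rough equivalent, not rclone's actual implementation; the remote and mount point below are placeholders):

```shell
# Start a command in a new session, detached from the shell's terminal
# and stdio, so it survives the shell exiting (roughly what --daemon does).
start_detached() {
    setsid "$@" </dev/null >/dev/null 2>&1 &
}

# e.g.:  start_detached rclone mount gdrive: "$HOME/gdrive"
```

`setsid` (from util-linux) gives the process a new session with no controlling terminal; in an interactive shell you might additionally `disown` the job.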

What I meant by a mountpoint check is that I have set up rclone mounts in countless different ways, and one of the quickest is to use a script that first calls mountpoint -q /my/mount/point (-q is quiet mode). If that returns 0, then that location IS already a mountpoint (i.e. it is registered in /etc/mtab, I believe), so you don't need to mount again. Otherwise, if mountpoint returned anything other than 0, you can attempt the mount there.

This script can be called even in a cron job because it takes practically no resources to check if that mount is still alive -- although I only use that method for places like a seedbox, where conditions are uncertain compared to your desktop.
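That check can be sketched as a small script (the remote name and mount directory are placeholders; `mountpoint` comes with util-linux):

```shell
#!/bin/sh
# Mount only if the directory is not already an active mountpoint.
ensure_mounted() {
    mnt=$1; shift
    if mountpoint -q "$mnt"; then      # exit 0 => already a mountpoint
        echo "already mounted: $mnt"
    else
        mkdir -p "$mnt"
        "$@"                           # run the supplied mount command
    fi
}

# Cron-friendly usage (placeholders):
#   ensure_mounted "$HOME/dropbox-personal" \
#       rclone mount --daemon dropbox-personal: "$HOME/dropbox-personal"
```

Passing the mount command as arguments keeps the check reusable for any remote.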

Let me know if you have any questions.


I usually run rclone without --daemon so systemd controls the process directly.

Here is the systemd file I use for serving beta.rclone.org. Note the Type=notify - rclone sends a systemd notification when the mount is ready.

[Unit]
Description=rclone mount

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount -v --read-only --config /home/www-data/.rclone.conf --cache-dir /home/www-data/.cache/rclone --dir-cache-time 1m --vfs-cache-mode full --vfs-cache-max-age 168h --allow-non-empty --allow-other --use-mmap=true --vfs-cache-max-size 30G --rc memstore:beta-rclone-org /mnt/beta.rclone.org
ExecStop=/bin/fusermount -uz /mnt/beta.rclone.org
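For the per-user behaviour discussed earlier in the thread, a sketch of the same idea as a user unit (the unit name, remote name, and mount path are placeholders, not a tested config). Installed under ~/.config/systemd/user with WantedBy=default.target, it starts with the user's first login and stops when systemd tears down user@UID.service at the last logout, provided lingering is off (loginctl disable-linger <user>):

```ini
# ~/.config/systemd/user/rclone-gdrive.service  (hypothetical name)
[Unit]
Description=rclone mount of gdrive

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount --vfs-cache-mode writes gdrive: %h/gdrive
ExecStop=/bin/fusermount -u %h/gdrive

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable rclone-gdrive; %h expands to the user's home directory.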


You mention Type=notify, causing "rclone [to send] a systemd notification when the mount is ready".

I am aware of the systemd API call sd_notify(), but I don't see any command line option (in your example) or rclone.conf item that causes that signal to be sent from rclone. Am I missing something?

Alternately, does specifying notify in the ….service file mean that systemctl — or whatever is processing that file — is the agent that sends the notification if/when the ExecStart=… clause completes with 0 (OK) status?

Rclone detects whether it is running under systemd and sends the sd_notify notification automatically when the mount is up.


Hey All,

I recently spent some time figuring out how to get an rclone mount up and running via systemd. Maybe you are already beyond this point, but I wanted to provide my example if not (also open to any issues you find with it!). It is a templated user systemd service, so any user can use it, and you can easily create mounts for all of your remotes without changing the systemd service file.

I will include the service file at the bottom of my post, and an explanation here.

First, let's get the service installed. Let's say that you save the file to /etc/systemd/user/rclone@.service (note, the "@" at the end of the name is what makes the service a templated service). After saving the file to the file system, be sure to tell systemd to re-scan for new/changed service files with systemctl --user daemon-reload.

Next, we will go over the prerequisites for using the systemd mount. First off, you have to already have your remote configured via rclone. For this example, I am going to assume the remote you have configured is called "dropbox-personal". If you can run the command rclone lsd dropbox-personal: and get a listing of your top directories, then your remote is configured already. You will also need an empty directory, for which you have write permissions, where you will mount your remote. The service file's default location for this mount would be ~/dropbox-personal (based on the remote name). The service file also assumes a default rclone configuration file location of ~/.config/rclone/rclone.conf, as well as other defaults based on the rclone mount documentation. All of these defaults can be overridden in the file ~/.config/dropbox-personal.env. Overriding defaults and other advanced configurations will be discussed after the service file.

Now that we have checked the prerequisites, it is time to mount your remote. You can mount it by issuing the command systemctl --user start rclone@dropbox-personal. Then, once you are sure it is working, you can enable it so that it mounts at login with the command systemctl --user enable rclone@dropbox-personal.

Note: The script I am providing includes the --allow-other and --default-permissions parameters. Without --allow-other, no one other than your user can access the mount. This may or may not be desirable. Removing it will not stop the mount from working, only change who can access it. The --default-permissions parameter makes the mounted files respect the file permissions set on the file system and is only useful when used with --allow-other. If --default-permissions is not set when --allow-other is set, then anyone can do anything (read, write, execute) regardless of what the file permissions are set to.

#To use this, /etc/fuse.conf must have the user_allow_other option set
[Unit]
Description=RClone mount of user's remote %i using filesystem permissions

#Set up environment
[Service]
#NOTE: the Environment= defaults below are illustrative (based on rclone's
#documented defaults); override any of them in ~/.config/<remote>.env
Environment=REMOTE_NAME=%i
Environment=REMOTE_PATH=
Environment=MOUNT_DIR=%h/%i
Environment=RCLONE_CONF=%h/.config/rclone/rclone.conf
Environment=RCLONE_TEMP_DIR=/tmp/rclone-%u

#Default arguments for rclone mount. Can be overridden in the environment file
#(%U/%G expand to the user's numeric UID/GID; %G needs a recent systemd)
Environment=RCLONE_MOUNT_ATTR_TIMEOUT=1s
Environment=RCLONE_MOUNT_DIR_CACHE_TIME=5m
Environment=RCLONE_MOUNT_DIR_PERMS=0777
Environment=RCLONE_MOUNT_FILE_PERMS=0666
Environment=RCLONE_MOUNT_UID=%U
Environment=RCLONE_MOUNT_GID=%G
Environment=RCLONE_MOUNT_UMASK=022
Environment=RCLONE_MOUNT_MAX_READ_AHEAD=128k
Environment=RCLONE_MOUNT_POLL_INTERVAL=1m
Environment=RCLONE_MOUNT_VFS_CACHE_MAX_AGE=1h
Environment=RCLONE_MOUNT_VFS_CACHE_MAX_SIZE=off
Environment=RCLONE_MOUNT_VFS_CACHE_MODE=writes
Environment=RCLONE_MOUNT_VFS_CACHE_POLL_INTERVAL=1m
Environment=RCLONE_MOUNT_VFS_READ_CHUNK_SIZE=128M
Environment=RCLONE_MOUNT_VFS_READ_CHUNK_SIZE_LIMIT=off

#Overwrite default environment settings with settings from the file if present
EnvironmentFile=-%h/.config/%i.env

#Use ExecStartPre to run conditions to make sure mount can occur
ExecStartPre=/usr/bin/test -x /usr/bin/rclone
ExecStartPre=/usr/bin/test -d "${MOUNT_DIR}"
ExecStartPre=/usr/bin/test -w "${MOUNT_DIR}"
#JMLTODO: Add test for directory being empty
ExecStartPre=/usr/bin/test -f "${RCLONE_CONF}"
ExecStartPre=/usr/bin/test -r "${RCLONE_CONF}"
#JMLTODO: Can't use pipe, need to do this another way
# ExecCondition=/usr/bin/rclone listremotes --config="${RCLONE_CONF}" | /usr/bin/grep -q "^${REMOTE_NAME}:$"

#Mount rclone fs
ExecStart=/usr/bin/rclone mount \
            --config="${RCLONE_CONF}" \
            --allow-other \
            --default-permissions \
            --cache-tmp-upload-path="${RCLONE_TEMP_DIR}/upload" \
            --cache-chunk-path="${RCLONE_TEMP_DIR}/chunks" \
            --cache-workers=8 \
            --cache-writes \
            --cache-dir="${RCLONE_TEMP_DIR}/vfs" \
            --cache-db-path="${RCLONE_TEMP_DIR}/db" \
            --no-modtime \
            --drive-use-trash \
            --stats=0 \
            --checkers=16 \
            --bwlimit=40M \
            --cache-info-age=60m \
            --attr-timeout="${RCLONE_MOUNT_ATTR_TIMEOUT}" \
#           --daemon-timeout="${RCLONE_MOUNT_DAEMON_TIMEOUT}" \
            --dir-cache-time="${RCLONE_MOUNT_DIR_CACHE_TIME}" \
            --dir-perms="${RCLONE_MOUNT_DIR_PERMS}" \
            --file-perms="${RCLONE_MOUNT_FILE_PERMS}" \
            --gid="${RCLONE_MOUNT_GID}" \
            --max-read-ahead="${RCLONE_MOUNT_MAX_READ_AHEAD}" \
            --poll-interval="${RCLONE_MOUNT_POLL_INTERVAL}" \
            --uid="${RCLONE_MOUNT_UID}" \
            --umask="${RCLONE_MOUNT_UMASK}" \
            --vfs-cache-max-age="${RCLONE_MOUNT_VFS_CACHE_MAX_AGE}" \
            --vfs-cache-max-size="${RCLONE_MOUNT_VFS_CACHE_MAX_SIZE}" \
            --vfs-cache-mode="${RCLONE_MOUNT_VFS_CACHE_MODE}" \
            --vfs-cache-poll-interval="${RCLONE_MOUNT_VFS_CACHE_POLL_INTERVAL}" \
            --vfs-read-chunk-size="${RCLONE_MOUNT_VFS_READ_CHUNK_SIZE}" \
            --vfs-read-chunk-size-limit="${RCLONE_MOUNT_VFS_READ_CHUNK_SIZE_LIMIT}" \
#            --volname="${RCLONE_MOUNT_VOLNAME}"
            "${REMOTE_NAME}:${REMOTE_PATH}" "${MOUNT_DIR}"

#Unmount rclone fs
ExecStop=/bin/fusermount -u "${MOUNT_DIR}"

#Restart info
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target

Advanced usage:

Any of the service file environment variables (Environment=...) can be overridden in an environment file. If you invoke the service as rclone@foobar, then the override file would be ~/.config/foobar.env. Below are some samples of what can be done with the override file. Please note that this file can contain any combination of variable specifications; it does not have to match the following examples.

The most common overrides I can see being used would be to override the rclone configuration location, the mount point, and/or the remote path to mount. To do that, I could create the following file
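A sketch of that override file, reconstructed from the description below (the paths are placeholders; note that values in an EnvironmentFile are taken literally, so ~ and %h are not expanded here):

```ini
# ~/.config/foobar.env  (hypothetical contents)
RCLONE_CONF=/home/youruser/.rclone/rclone.conf
MOUNT_DIR=/mnt/some_custom_mount_point_dir
REMOTE_PATH=/some/directory/in/remote
```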


The above file would modify the rclone@foobar service so that the remote named foobar would be mounted at /mnt/some_custom_mount_point_dir, using an rclone configuration file located at ~/.rclone/rclone.conf, and mounting everything under /some/directory/in/remote.

You can also use this override functionality to mount the same remote multiple times. This could be useful if you want to mount a personal share, but it also contains configuration files you want to mount somewhere else on your system.
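A sketch of that second override file (say ~/.config/foobar-ddclient.env), reconstructed from the description below; the variable names match the service file above:

```ini
# ~/.config/foobar-ddclient.env  (hypothetical contents)
REMOTE_NAME=foobar
REMOTE_PATH=/configurations/ddclient
MOUNT_DIR=/etc/ddclient
RCLONE_MOUNT_UID=0
RCLONE_MOUNT_GID=0
```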


This would use the foobar remote, mount it at /etc/ddclient, owned by user and group root, starting at the remote directory /configurations/ddclient. You can then issue the commands systemctl --user start rclone@foobar and systemctl --user start rclone@foobar-ddclient to mount the foobar remote with all default options, and also mount the custom-configured instance.

Sorry for the long post, but hopefully this is helpful!

This is my first foray into rclone, but I see it as promising! As a side note, I'd love to see this service (or some iteration of it/better option), included in rclone in the future to help out others trying to achieve this same thing!

Edit 1+2: Fixed bugs with the service file

thanks for sharing,

rclone has a wiki, you should create a post there.

Or at a minimum, post this in the How To category in its own thread.

At your suggestion, my work has been included in the wiki


Very nice - thank you :smiley: