Systemd mount - failed to unmount

I’ve been trying to get rclone to mount my encrypted volume via a systemd service on an Ubuntu 16.04 box for a few hours now, haven’t succeeded and would really appreciate some help. I have the following rclone config: drive -> cache -> crypt. I’m trying to mount the crypt into /media/gdrive (which exists on the filesystem).

I’ve created an rclone.service with the following contents:

[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rclone mount gcrypt: /media/gdrive \
   --allow-other \
   --dir-cache-time 72h \
   --cache-chunk-path /dev/shm \
   --cache-chunk-no-memory \
   --cache-chunk-size 10M \
   --cache-info-age 72h \
   --cache-db-purge \
   --cache-workers 6 \
   --buffer-size 0M \
   --umask 002 \
   --uid {my_user_id} \
   --gid {my_user_group} \
   --log-file /home/my_user/rclone.log

ExecStop=/bin/fusermount -uz /media/gdrive
Restart=on-abort

[Install]
WantedBy=default.target

However when I try to start the service (either manually or on a reboot) I get the following:

systemd[1]: Started RClone Service.
systemd[1]: rclone.service: Main process exited, code=exited, status=1/FAILURE
fusermount[1396]: /bin/fusermount: failed to unmount /media/gcrypt: Invalid argument
systemd[1]: rclone.service: Control process exited, code=exited, status=1
systemd[1]: rclone.service: Unit entered failed state.
systemd[1]: rclone.service: Failed with result 'exit-code'.

What’s odd is that running the ExecStart and ExecStop commands with sudo in a terminal works fine, which leaves me at a loss because I’m not too savvy with systemd services. Can anyone tell me where I’m going wrong?
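When a command works in a terminal but fails under systemd, the journal usually shows why (environment, permissions, working directory). A quick way to inspect it, assuming the unit is named rclone.service:

```shell
# Show the unit's current state and the last few log lines
systemctl status rclone.service

# Show the full recent journal output for just this unit
journalctl -u rclone.service --since "10 min ago"
```

Running the ExecStart command by hand with `sudo -u root` (or whichever User= the unit sets) and a stripped environment can also reproduce the failure outside systemd.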

Also, once I’ve got this sorted, what is the best way to get data onto gdrive? I have approx 1.5TB to back up. Is sync the way forward? Should I unmount before running it?

This is what my systemd config looks like for the mount which serves beta.rclone.org. Probably the major difference is Type=notify - rclone understands systemd notifications, so maybe that will help?

[Unit]
Description=rclone mount
Documentation=http://rclone.org/docs/
After=network-online.target

[Service]
Type=notify
User=www-data
Group=www-data
ExecStart=/usr/bin/rclone mount -v --read-only --config /home/www-data/.rclone.conf --cache-dir /home/www-data/.cache/rclone --dir-cache-time 1m --vfs-cache-mode full --vfs-cache-max-age 168h --allow-non-empty --allow-other memstore:beta-rclone-org /mnt/beta.rclone.org
ExecStop=/bin/fusermount -uz /mnt/beta.rclone.org
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

I would use rclone sync or rclone copy to transfer the data - that is the most reliable way. You don’t need to stop the mount - it will gradually appear in the mount when the --dir-cache-time 72h expires.
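A sketch of what that transfer could look like, using the remote name from this thread (the local path is a placeholder):

```shell
# One-way copy of local data into the crypt remote.
# Re-running only transfers new or changed files, so it is safe to interrupt.
rclone copy /path/to/local/data gcrypt: --progress

# Or, to make the remote an exact mirror of the local tree
# (warning: sync deletes files on the remote that don't exist locally):
rclone sync /path/to/local/data gcrypt: --progress
```

`copy` is the safer choice for an initial upload; switch to `sync` only once you want deletions to propagate.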

Can you manually run the command to unmount it and does that work?

I’d advise not running as root, but that’s me as I don’t like to share things.

@ncw - wouldn’t it appear based on the cache polling interval of 1 minute rather than the dir-cache time?

Yes you are right of course. The polling interval will poll for changes on drive every minute and changes will arrive much quicker than the directory expiring.

So I switched it out for Type=notify and it appears to be working with a couple of other changes.

Turns out I can run both commands without needing sudo. I’m guessing since I set the mount point to be owned by my user, it’s not gonna complain.

I originally used my own user but it failed for other (most likely permission) reasons. I’ve taken your advice and switched back to using my user.

I fiddled a little more and the following systemd service appears to work (using systemctl start rclone.service in terminal)…

[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
User=my_user
Group=my_user
ExecStart=/usr/bin/rclone mount gcrypt: /media/gdrive \
   --allow-other \
   --dir-cache-time 72h \
   --cache-chunk-path /dev/shm \
   --cache-chunk-no-memory \
   --cache-chunk-size 10M \
   --cache-info-age 72h \
   --cache-db-purge \
   --cache-workers 6 \
   --buffer-size 0M \
   --umask 002 \
   --rc \
   --log-level INFO \
   --log-file /home/my_user/rclone.log

ExecStop=/bin/fusermount -uz /media/gdrive
Restart=on-abort

[Install]
WantedBy=default.target

I’m still not entirely sure why it wouldn’t work before, but all seems well now (!?).
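For completeness, after editing a unit file systemd needs to reload it, and the unit has to be enabled if it should start on boot (this assumes the file lives at /etc/systemd/system/rclone.service):

```shell
# Pick up changes to the unit file
sudo systemctl daemon-reload

# Start on every boot, and start it now
sudo systemctl enable rclone.service
sudo systemctl start rclone.service
```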

Do I need to worry about API bans here or does the cache handle that? I see a --bwlimit argument. Is that my friend when syncing 1.5TB of data?

Thanks for the help so far, really appreciate it.

If you are using the cache, you should not hit any API bans.

You can either limit the transfer with --bwlimit and let it run more or less continuously, or use --max-transfer and do maybe 500GB a day. I tend to leave some buffer as I am more conservative.
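The two approaches above might look like this; the exact numbers are illustrative, not recommendations:

```shell
# Option 1: throttle bandwidth and let the sync run continuously.
# 8M means roughly 8 MiB/s upstream.
rclone sync /path/to/local/data gcrypt: --bwlimit 8M --progress

# Option 2: cap the total data moved per run (e.g. from a daily cron job),
# staying under the provider's daily upload quota with some headroom.
rclone sync /path/to/local/data gcrypt: --max-transfer 500G --progress
```

With option 2 the sync exits once the cap is hit and picks up where it left off on the next run.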

Thanks for the suggestions @Animosity022 I’ll bear them in mind when syncing.

It turns out I have to unmount the gdrive prior to running sync, which seems a little odd. It complained that there was no cache.db or something to that effect (even though the cache.db file mentioned exists locally). I guess it’s not really a problem while I sync everything. Afterward, I’m guessing if I want to use gcrypt instead of local storage I just put files and folders in the mount and it does all the necessary work.

Ah ha. I think I’ve found the answer to my issue with not being able to mount and sync at the same time…

I’ll create a second crypt and give it another go!
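For reference, a second crypt used only for syncing would typically point at the drive remote directly, bypassing the cache backend (whose cache.db can only be held by one process at a time). A sketch of what the rclone.conf entry might look like - the remote names and the target folder here are assumptions, and crucially the password and salt must be identical to the existing crypt so both remotes read and write the same encrypted files:

```ini
[gcrypt-sync]
type = crypt
remote = gdrive:encrypted
filename_encryption = standard
password = <same obscured password as gcrypt>
password2 = <same obscured salt as gcrypt>
```

The mount keeps using the cached gcrypt remote while syncs go through gcrypt-sync.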