Trying to use systemd to mount Google Drive on CentOS 7.9, but script can't write into a mounted folder

What is the problem you are having with rclone?

A command run manually can write into a mounted Google Drive rclone remote just fine, but when the same command is run from a script called by systemd, the script doesn't seem to have write permission.

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.60.1
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0-1160.80.1.el7.centos.plus.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I'm trying to have systemd always mount the Google Drive remote at boot. So my systemd service unit looks like this:

[Unit]
Description=Backup of DaVinci Resolve PostgreSQL databases to Google Drive via rclone
[Service]
User=sgoldin
Type=oneshot
ExecStart=/bin/bash -c '/usr/bin/rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 000'
[Install]
WantedBy=multi-user.target
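(For reference, `rclone mount` is a long-running process, so `Type=oneshot` leaves the unit stuck "activating" until the mount exits. A sketch of how a mount unit is more commonly written — assuming this rclone build signals readiness to systemd, with the paths and remote name taken from above:)

```
[Unit]
Description=rclone mount of the resolve-backups Google Drive remote

[Service]
User=sgoldin
Type=notify
ExecStart=/usr/bin/rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 000
ExecStop=/bin/fusermount -u /home/sgoldin/resolve-backups
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The `/bin/bash -c` wrapper also isn't needed, since `ExecStart` takes the command directly.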

The rclone config contents with secrets removed.

$ cat /home/sgoldin/.config/rclone/rclone.conf
[resolve-backups]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
token = REDACTED
team_drive = 
root_folder_id = REDACTED

A log from the command with the -vv flag

If I stop the systemd unit, I can run the command by itself:

$ rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 000 -vv
2022/11/19 15:09:39 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "mount" "resolve-backups:" "/home/sgoldin/resolve-backups" "--vfs-cache-mode" "full" "--umask" "000" "-vv"]
2022/11/19 15:09:39 DEBUG : Creating backend with remote "resolve-backups:"
2022/11/19 15:09:39 DEBUG : Using config file from "/home/sgoldin/.config/rclone/rclone.conf"
2022/11/19 15:09:39 DEBUG : vfs cache: root is "/home/sgoldin/.cache/rclone"
2022/11/19 15:09:39 DEBUG : vfs cache: data root is "/home/sgoldin/.cache/rclone/vfs/resolve-backups"
2022/11/19 15:09:39 DEBUG : vfs cache: metadata root is "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups"
2022/11/19 15:09:39 DEBUG : Creating backend with remote "/home/sgoldin/.cache/rclone/vfs/resolve-backups/"
2022/11/19 15:09:39 DEBUG : fs cache: renaming cache item "/home/sgoldin/.cache/rclone/vfs/resolve-backups/" to be canonical "/home/sgoldin/.cache/rclone/vfs/resolve-backups"
2022/11/19 15:09:39 DEBUG : Creating backend with remote "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups/"
2022/11/19 15:09:39 DEBUG : fs cache: renaming cache item "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups/" to be canonical "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups"
2022/11/19 15:09:39 DEBUG : Google drive root '': Mounting on "/home/sgoldin/resolve-backups"
2022/11/19 15:09:39 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2022/11/19 15:09:39 DEBUG : : Root: 
2022/11/19 15:09:39 DEBUG : : >Root: node=/, err=<nil>
2022/11/19 15:09:39 DEBUG : /: Lookup: name=".Trash"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2022/11/19 15:09:40 DEBUG : /: Lookup: name="BDMV"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2022/11/19 15:09:40 DEBUG : /: Lookup: name=".xdg-volume-info"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2022/11/19 15:09:40 DEBUG : /: Lookup: name="autorun.inf"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2022/11/19 15:09:40 DEBUG : /: ReadDirAll: 
2022/11/19 15:09:40 DEBUG : /: >ReadDirAll: item=58, err=<nil>
2022/11/19 15:09:40 DEBUG : /: Lookup: name=".Trash-1000"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2022/11/19 15:09:40 DEBUG : /: ReadDirAll: 
2022/11/19 15:09:40 DEBUG : /: >ReadDirAll: item=58, err=<nil>
[... the ReadDirAll/>ReadDirAll pair above repeats another 21 times ...]
2022/11/19 15:09:40 DEBUG : /: Lookup: name="autorun.inf"
2022/11/19 15:09:40 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory

The command I can run successfully when this is mounted is:

$ /usr/pgsql-9.5/bin/pg_dump --host localhost --username postgres bigthink2022h1 --blobs --file /home/sgoldin/resolve-backups/bigthink2022h1/bigthink2022h1_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password

And indeed, a .backup file drops into the expected location within the mounted Google Drive rclone remote.

But when I call this shell script:

$ cat backup-bigthink2022h1.sh 
#!/bin/bash
# Let's perform the backup and log to the monthly log file if the backup is successful.
/usr/pgsql-9.5/bin/pg_dump --host localhost --username postgres bigthink2022h1 --blobs --file /home/sgoldin/resolve-backups/bigthink2022h1/bigthink2022h1_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password && \
echo "bigthink2022h1 was backed up at $(date "+%Y_%m_%d_%H_%M") into \"/home/sgoldin/resolve-backups/bigthink2022h1\"." >> /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/logs/logs-$(date "+%Y_%m").log

From this systemd unit:

$ cd /etc/systemd/system/
$ cat backup-bigthink2022h1.service 
[Unit]
Description=Backup of bigthink2022h1 DaVinci Resolve PostgreSQL database

[Service]
Type=oneshot
ExecStart=/usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh

The systemd unit fails:

$ systemctl status backup-bigthink2022h1 -l
● backup-bigthink2022h1.service - Backup of bigthink2022h1 DaVinci Resolve PostgreSQL database
   Loaded: loaded (/etc/systemd/system/backup-bigthink2022h1.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sat 2022-11-19 14:53:08 EST; 26min ago
  Process: 11988 ExecStart=/usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh (code=exited, status=1/FAILURE)
 Main PID: 11988 (code=exited, status=1/FAILURE)

Nov 19 14:53:08 localhost.localdomain systemd[1]: Starting Backup of bigthink2022h1 DaVinci Resolve PostgreSQL database...
Nov 19 14:53:08 localhost.localdomain backup-bigthink2022h1.sh[11988]: pg_dump: [custom archiver] could not open output file "/home/sgoldin/resolve-backups/bigthink2022h1/bigthink2022h1_2022_11_19_14_53.backup": Permission denied
Nov 19 14:53:08 localhost.localdomain systemd[1]: backup-bigthink2022h1.service: main process exited, code=exited, status=1/FAILURE
Nov 19 14:53:08 localhost.localdomain systemd[1]: Failed to start Backup of bigthink2022h1 DaVinci Resolve PostgreSQL database.
Nov 19 14:53:08 localhost.localdomain systemd[1]: Unit backup-bigthink2022h1.service entered failed state.
Nov 19 14:53:08 localhost.localdomain systemd[1]: backup-bigthink2022h1.service failed.

You can also see this failure via journalctl -xe:

$ journalctl -xe | grep backup-bigthink2022h1.service
-- Subject: Unit backup-bigthink2022h1.service has begun start-up
-- Unit backup-bigthink2022h1.service has begun starting up.
Nov 19 14:53:08 localhost.localdomain systemd[1]: backup-bigthink2022h1.service: main process exited, code=exited, status=1/FAILURE
-- Subject: Unit backup-bigthink2022h1.service has failed
-- Unit backup-bigthink2022h1.service has failed.
Nov 19 14:53:08 localhost.localdomain systemd[1]: Unit backup-bigthink2022h1.service entered failed state.
Nov 19 14:53:08 localhost.localdomain systemd[1]: backup-bigthink2022h1.service failed.
Nov 19 15:17:44 localhost.localdomain systemd[1]: Configuration file /etc/systemd/system/backup-bigthink2022h1.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Nov 19 15:17:46 localhost.localdomain systemd[1]: Configuration file /etc/systemd/system/backup-bigthink2022h1.service is marked executable. Please remove executable permission bits. Proceeding anyway.
Nov 19 15:21:39 localhost.localdomain systemd[1]: Configuration file /etc/systemd/system/backup-bigthink2022h1.service is marked executable. Please remove executable permission bits. Proceeding anyway.

Why does the pg_dump binary not have permission to open that output file?

I've tried changing the command in the systemd unit to have the --allow-other flag, and then the folder remains totally unmounted and empty after boot. When I tried only adding --allow-root, in case the systemd service was running as root, it still didn't have permission, and failed the same way.

I want to make sure I'm not falling into an XY problem, so my question is: how can I get the pg_dump binary in my shell script to actually have write access to my Google Drive remote, while also making sure the remote mounts automatically at boot?

What's the output if you just run pg_dump as the user you are trying to execute it as?

I run: $ bash /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh

and that's totally successful. The corresponding mount-side output from $ rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 0 -vv is attached. Apologies for the .zip, but it's a large log file.

But that's a successful run with just the sgoldin user running the bash script.

Taking systemd out of the mix, your script runs without issue, correct?

That would generally rule out the mount, etc., and point to something specific with systemd.

Second step would be to run that same script as root and see if that works, since your service file has no user name.

That should fail, as your mount doesn't have --allow-other. FUSE mounts only let the user who mounted them see the contents unless you use --allow-other.

root@gemini:/home/felix# ls -al | grep test
ls: cannot access 'test': Permission denied
d????????? ? ?     ?         ?            ? test
root@gemini:/home/felix# ps -ef | grep test | grep rclone
felix     712896    7771  0 16:12 pts/1    00:00:00 rclone mount GD: /home/felix/test
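Alternatively, since the script already succeeds when run as sgoldin, pinning the service to that user would sidestep the FUSE restriction entirely. A hypothetical variant of the service unit (untested here):

```
[Service]
Type=oneshot
User=sgoldin
ExecStart=/usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh
```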

Indeed, the issue seems to be that root somehow doesn't have permission:

$ sudo bash /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh 
[sudo] password for sgoldin: 
pg_dump: [custom archiver] could not open output file "/home/sgoldin/resolve-backups/bigthink2022h1/bigthink2022h1_2022_11_19_16_18.backup": Permission denied

This is the same error thrown by the systemd service.

But even when I run $ rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 000 --allow-root -vv &> rclone_debug-3.log, I'm still seeing that root doesn't have permission:

$ sudo bash /usr/local/DaVinci-Resolve-PostgreSQL-Workflow-Tools/backup/backup-bigthink2022h1.sh 
pg_dump: [custom archiver] could not open output file "/home/sgoldin/resolve-backups/bigthink2022h1/bigthink2022h1_2022_11_19_16_21.backup": Permission denied

Use --allow-other, not --allow-root.


Trying --allow-other with -vv, I see:

$ rclone mount resolve-backups: /home/sgoldin/resolve-backups --vfs-cache-mode full --umask 000 --allow-other -vv
2022/11/19 16:25:41 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "mount" "resolve-backups:" "/home/sgoldin/resolve-backups" "--vfs-cache-mode" "full" "--umask" "000" "--allow-other" "-vv"]
2022/11/19 16:25:41 DEBUG : Creating backend with remote "resolve-backups:"
2022/11/19 16:25:41 DEBUG : Using config file from "/home/sgoldin/.config/rclone/rclone.conf"
2022/11/19 16:25:41 DEBUG : vfs cache: root is "/home/sgoldin/.cache/rclone"
2022/11/19 16:25:41 DEBUG : vfs cache: data root is "/home/sgoldin/.cache/rclone/vfs/resolve-backups"
2022/11/19 16:25:41 DEBUG : vfs cache: metadata root is "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups"
2022/11/19 16:25:41 DEBUG : Creating backend with remote "/home/sgoldin/.cache/rclone/vfs/resolve-backups/"
2022/11/19 16:25:41 DEBUG : fs cache: renaming cache item "/home/sgoldin/.cache/rclone/vfs/resolve-backups/" to be canonical "/home/sgoldin/.cache/rclone/vfs/resolve-backups"
2022/11/19 16:25:41 DEBUG : Creating backend with remote "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups/"
2022/11/19 16:25:41 DEBUG : fs cache: renaming cache item "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups/" to be canonical "/home/sgoldin/.cache/rclone/vfsMeta/resolve-backups"
2022/11/19 16:25:41 DEBUG : Google drive root '': Mounting on "/home/sgoldin/resolve-backups"
2022/11/19 16:25:41 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item bigthink2022h1/bigthink2022h1_2022_11_19_15_37.backup not removed, freed 0 bytes
2022/11/19 16:25:41 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item bigthink2022h1/bigthink2022h1_2022_11_19_15_42.backup not removed, freed 0 bytes
2022/11/19 16:25:41 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item bigthink2022h1/bigthink2022h1_2022_11_19_15_52.backup not removed, freed 0 bytes
2022/11/19 16:25:41 INFO  : vfs cache: cleaned: objects 3 (was 3) in use 0, to upload 0, uploading 0, total size 635.064Mi (was 635.064Mi)
2022/11/19 16:25:41 mount helper error: fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
2022/11/19 16:25:41 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

So I guess I need to go attack that in /etc/fuse.conf.

Yes. That’s correct.


Perfect!

Going into /etc/fuse.conf and uncommenting the user_allow_other line worked: the systemd service, and root generally, can now access the mounted Google Drive remote!
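For anyone following along, the change amounts to uncommenting one line. Demonstrated here on a scratch copy so nothing is touched by accident; the real edit targets /etc/fuse.conf and needs root:

```shell
# Make a stand-in for a stock fuse.conf with the option commented out.
printf '# mount_max = 1000\n#user_allow_other\n' > /tmp/fuse.conf.demo

# Uncomment the user_allow_other line in place.
sed -i 's/^#user_allow_other$/user_allow_other/' /tmp/fuse.conf.demo

# Confirm exactly one active user_allow_other line remains.
grep -c '^user_allow_other$' /tmp/fuse.conf.demo   # prints 1
```

For the real file, run the sed line against /etc/fuse.conf with sudo, then remount with --allow-other.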


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.