Google Drive Union Mount Loses VFS?

What is the problem you are having with rclone?

I have a set of 4 Google Drive mounts, all mounted through an rclone union. The mounts work perfectly by themselves, but when mounted through the union, after some period of time (I can't pin down how long) the VFS directory cache just goes empty. I based most of this on Animosity022's setup, modified to use environment variables.

What is your rclone version (output from rclone version)

rclone v1.52.2

  • os/arch: linux/amd64
  • go version: go1.14.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04 via Proxmox on an Intel Xeon

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I load the mounts via systemd service files (four of them, one per union). Below is an example of one of them.

[Unit]
Description=RClone Union A
Wants=network-online.target
After=network-online.target

[Service]
Environment=RCLONE_CONFIG=/root/rclone/rclone.conf
Environment=RCLONE_LOG_DIR=/root/rclone/logs
Environment=GDRIVE_NAME=uniona
Environment=RC_PORT=5579
Environment=RC_USER=admin
Environment=RC_PASS=pass

Type=notify
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount ${GDRIVE_NAME}: /mnt/merged/${GDRIVE_NAME} \
--config=${RCLONE_CONFIG} \
--allow-other \
--dir-cache-time 1000h \
--attr-timeout 1000h \
--poll-interval 1m \
--umask 002 \
--user-agent blahblahblah \
--rc \
--rc-addr :${RC_PORT} \
--rc-web-gui \
--rc-enable-metrics \
--rc-user=${RC_USER} \
--rc-pass=${RC_PASS} \
--rc-web-gui-no-open-browser \
--rc-web-gui-force-update \
--vfs-read-chunk-size 32M

ExecStop=/bin/fusermount -uz /mnt/merged/${GDRIVE_NAME}
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true  \
--rc-user=${RC_USER} \
--rc-pass=${RC_PASS} \
--rc-addr 127.0.0.1:${RC_PORT} _async=true

Restart=on-failure


[Install]
# Alternative: start this unit as part of rclone-union-mounts.service
#WantedBy=rclone-union-mounts.service
WantedBy=multi-user.target
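
For completeness, a minimal sketch of how a unit like the one above would be managed with systemd; the unit file name and path are just examples, not from the original post:

# assuming the unit is saved as /etc/systemd/system/rclone-union-a.service
systemctl daemon-reload
systemctl enable --now rclone-union-a.service
journalctl -u rclone-union-a.service -f    # follow the mount's output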

The rclone config contents with secrets removed.

[gda]
type = drive
client_id = 
client_secret = 
scope = drive
team_drive = 
token = 

[gdb]
type = drive
client_id = 
client_secret = 
scope = drive
team_drive = 
token = 

[gdc]
type = drive
client_id = 
client_secret = 
scope = drive
team_drive = 
token = 

[gdd]
type = drive
client_id = 
client_secret = 
scope = drive
team_drive = 
token = 

[gde]
type = drive
client_id = 
client_secret = 
scope = drive
team_drive = 
token = 

[uniona]
type = union
upstreams = gda:foldera:ro gdb:foldera:ro gdc:foldera:ro

[unionb]
type = union
upstreams = gdd:folderb:ro gda:folderb:ro gdb:folderb:ro gdc:folderb:ro

[unionc]
type = union
upstreams = gda:folderc:ro gdb:folderc:ro gdc:folderc:ro

[uniond]
type = union
upstreams = gdd:folderd:ro gda:folderd:ro gdb:folderd:ro gdc:folderd:ro
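
For reference, a quick sanity check that a union remote defined above actually merges its upstreams (uniona and the --config path are taken from the config and unit file above; output will vary):

rclone lsd uniona: --config=/root/rclone/rclone.conf
rclone size uniona: --config=/root/rclone/rclone.conf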

A log from the command with the -vv flag

Not sure how to get a log from the mount. Should I dump it from journalctl or run it in a separate window? Also not sure what time period to post, as I can't seem to figure out when or why the VFS expires.

Thanks

I'm not sure I know what you mean.

Is the whole union mount failing? Is one mount failing in the union?

The mount doesn't fail, but the dir cache just goes "empty". For example, if I mount the union and let the vfs/refresh complete, a "find" on the directory returns instantly because the whole tree is cached. If I wait a while, say a few days (even though the dir cache time is 1000h), and do the find again, it takes forever, as if the cache is empty.

If I mount the remotes outside of the union, that doesn't seem to happen.

Right now I've set up a cron job to refresh the cache every Sunday night to see if that helps (a sketch is below), but I'm not sure it will, since I can't figure out when or why the dir cache disappears.
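
A minimal sketch of that cron job, assuming a cron.d style entry; the schedule, port, and credentials are placeholders matching the example unit above:

# /etc/cron.d/rclone-vfs-refresh (hypothetical path): re-warm the union dir cache every Sunday at 23:00
0 23 * * 0 root /usr/bin/rclone rc vfs/refresh recursive=true _async=true --rc-addr 127.0.0.1:5579 --rc-user admin --rc-pass pass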

Does that make more sense?

Sounds like something is invalidating it.

If you can generate a debug log, that is what would be needed.

Do the mounts with -vv and log to a file? I'll redo the mounts and update in a few days with the log.

Yep, that would be what we're looking for.
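
For reference, a minimal sketch of how the mount line could be extended with debug logging, assuming the same environment variables as the unit above (the log path reuses the RCLONE_LOG_DIR variable defined there); only the two logging flags are new:

ExecStart=/usr/bin/rclone mount ${GDRIVE_NAME}: /mnt/merged/${GDRIVE_NAME} \
-vv \
--log-file ${RCLONE_LOG_DIR}/${GDRIVE_NAME}.log \
--config=${RCLONE_CONFIG} \
... (remaining flags as in the unit above)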
