Autofs + rclone mount for Google Drive

There are several forum posts from people struggling to get autofs working with rclone mount (1, 2, 3, etc.).

Below is what worked for me to get on-demand access to Google Drive (both "My Drive" and all of my shared team drives).

  1. Create ~/.config/rclone/rclone.conf as usual (this file is owned by the user):
[gdrive-user@gmail.com]
type = drive
scope = drive
token = {"access_token": "...", "token_type":"Bearer","refresh_token": "...","expires_in":3599}
team_drive = 
  2. Add an entry to /etc/auto.master (owned by root); the /- mount point makes it a direct map:
/-    /etc/auto.gdrive browse,allow_other
  3. Create /etc/auto.gdrive (owned by root). The drive-team-drive values are shared drive IDs, which rclone can list (see the checks after step 4):
/gdrive/user@gmail.com/MyDrive -fstype=rclone,uid=1000,gid=1000,vfs-cache-mode=full,config=/home/user/.config/rclone/rclone.conf :gdrive-user\@gmail.com\:/
/gdrive/user@gmail.com/Management\ Team -fstype=rclone,uid=1000,gid=1000,vfs-cache-mode=full,config=/home/user/.config/rclone/rclone.conf,drive-team-drive=9JhG2U8Abjd9g38 :gdrive-user\@gmail.com\:/
/gdrive/user@gmail.com/Maintenance\ Team -fstype=rclone,uid=1000,gid=1000,vfs-cache-mode=full,config=/home/user/.config/rclone/rclone.conf,drive-team-drive=8wEx2U8Abjd9g32 :gdrive-user\@gmail.com\:/
  4. Enable and start autofs:
systemctl enable autofs
systemctl restart autofs
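
Two optional sanity checks, assuming the config path and remote name from step 1. The first confirms rclone can reach the remote at all; the second lists the shared (team) drives together with the IDs used for drive-team-drive= in step 3:

rclone --config=/home/user/.config/rclone/rclone.conf lsd gdrive-user@gmail.com:
rclone --config=/home/user/.config/rclone/rclone.conf backend drives gdrive-user@gmail.com: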

That's all. It just works.
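
Nothing is mounted until a path is first accessed; listing one of the directories from the map is enough to trigger it (paths as in step 3):

ls /gdrive/user@gmail.com/MyDrive
mount | grep rclone

The second command should now show a fuse.rclone entry for that path.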

It also works if you have multiple Google Drive accounts. I created a Python script to auto-generate auto.gdrive by enumerating all of the Google Drive remotes in rclone.conf and then enumerating the shared (team) drives on each account.

generate-gdrive-autofs.py

import json
import subprocess
import sys

CONFIG = "/home/user/.config/rclone/rclone.conf"
UID = 1000
GID = 1000


def get_rclone_remotes():
    # All Google Drive remotes in rclone.conf (listremotes prints one name
    # per line, each ending with ":").
    command = f"rclone --config={CONFIG} listremotes --type=drive"
    result = subprocess.run(command, shell=True, stdout=subprocess.PIPE, text=True)
    if result.returncode != 0:
        print(command, file=sys.stderr)
        print(f"Command failed with code {result.returncode}", file=sys.stderr)
        sys.exit(1)
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]


def get_shared_drives(remote):
    # The drive backend returns a JSON list of the shared (team) drives
    # visible to this account.
    command = f"rclone --config={CONFIG} backend drives {remote}"
    result = subprocess.run(command, shell=True, stdout=subprocess.PIPE, text=True)
    if result.returncode != 0:
        print(command, file=sys.stderr)
        print(f"Command failed with code {result.returncode}", file=sys.stderr)
        sys.exit(1)
    return json.loads(result.stdout) if result.stdout.strip() else []


mounts = []

for remote in get_rclone_remotes():
    # Strip the "gdrive-" prefix and the trailing ":" to get the account name.
    # (str.lstrip() removes a set of characters, not a prefix, so do it explicitly.)
    name = remote.strip()
    if name.startswith("gdrive-"):
        name = name[len("gdrive-"):]
    name = name.rstrip(":")
    print(name, file=sys.stderr)  # progress only; the map itself goes to stdout

    # autofs map entries need "@" and ":" in the remote name escaped
    quoted_remote = remote.replace("@", "\\@").replace(":", "\\:")
    mounts.append(
        f"/gdrive/{name}/MyDrive -fstype=rclone,uid={UID},gid={GID},vfs-cache-mode=full,config={CONFIG} :{quoted_remote}/"
    )

    for shared_drive in get_shared_drives(remote):
        drive_name = shared_drive.get("name", "")
        drive_id = shared_drive.get("id", "")
        print(f"  {drive_name}", file=sys.stderr)
        escaped_name = drive_name.replace(" ", "\\ ")  # spaces must be escaped in the map key
        mounts.append(
            f"/gdrive/{name}/{escaped_name} "
            f"-fstype=rclone,uid={UID},gid={GID},vfs-cache-mode=full,config={CONFIG},drive-team-drive={drive_id} "
            f":{quoted_remote}/"
        )

for m in mounts:
    print(m)
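
One way to use it (file names here are just an example): the script prints progress to stderr and the map itself to stdout, so the output can be redirected into a file, installed as /etc/auto.gdrive, and autofs restarted to pick it up:

python3 generate-gdrive-autofs.py > auto.gdrive.new
sudo cp auto.gdrive.new /etc/auto.gdrive
sudo systemctl restart autofs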

If this is useful to anyone, perhaps there is a better way to package it and make it available as a systemd unit or something. Feedback welcome.