Suggested settings to have an rclone-mounted Google Drive behave like Google's app

What is the problem you are having with rclone?

This is not a problem, rather a request for guidance. I use the original made-for-Windows Google Drive application on my Windows rigs, set up so that all files are on disk. When Google Drive starts, it apparently performs an rsync of some sort, uploading/downloading diffs.

On my Arch installation I’ve created a systemd user unit .mount file, with the following settings (some info redacted):

[Unit]
Description=Mount for /home/user/gdrive

[Mount]
Type=rclone
What=gdrive:
Where=/home/user/gdrive
Options=rw,_netdev,args2env,vfs-cache-mode=full,vfs-cache-max-age=72h,vfs-cache-max-size=10G,vfs-cache-poll-interval=1m,vfs-read-chunk-size=128M,vfs-read-chunk-size-limit=off

[Install]
WantedBy=default.target

I’ve also tried the above with Options set simply to:

Options=rw,_netdev,vfs-cache-mode=full

… with the same results (see below).

From my experience with rclone, and to the best of my abilities (not a Linux/rclone expert, just a newbie who likes Linux :slight_smile:), I cannot make the rclone-based mount behave like the Google Drive one, in the sense that everything should exist locally and rclone should only sync diffs from time to time.

Even opening 20-30 MB PDF files with Okular takes ages. This is where your advice is needed: can I somehow make this rclone-based mount behave similarly to the actual Gdrive client?

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.73.2
- os/version: arch (64 bit)
- os/kernel: 6.19.8-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.26.1-X:nodwarf5
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

No specific command to report; see the initial description.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

$ rclone config redacted
[gdrive]
type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = XXX
team_drive = 
root_folder_id = XXX

A log from the command that you were trying to run with the -vv flag


Since this starts from a systemd unit file, I don’t know how to provide logs (or whether they are needed in the scenario detailed).

1 Like

welcome to the forum,

that is how rclone mount works; it cannot emulate the gdrive client.
this has been discussed a number of times in the forum.

note: --vfs-cache-mode=full uses chunked downloads, so the entire file is not downloaded.
In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file.
These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them.


hard to know why, as no debug log was posted.
--log-level=DEBUG --log-file=/path/to/rclone.log
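for example, running the mount in the foreground from a terminal (a sketch using the remote and mount point from the first post; pick any log path you like):

```shell
# run the mount in the foreground so errors are visible,
# writing a debug log for later inspection
rclone mount gdrive: /home/user/gdrive \
  --vfs-cache-mode=full \
  --log-level=DEBUG \
  --log-file=/tmp/rclone-mount.log
```

once it works from the command line, the same flags can go back into the Options= line of the .mount unit.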

Have you considered bisync?

One of the nice things about it is that it avoids all the complications of VFS/cache stuff, by just dealing with real, local files.

2 Likes

I’ve read these, but thanks. I was thinking that if one configured these chunks to be large enough, then files up to the chunk size would be read in their entirety. I must be missing something here.

With regard to my comment about the logs: what I meant is that I did not experience anything running incorrectly, hence my not pasting any logs.

If you do consider that in this case logs are needed, please let me know, but please do consider that what might seem easy to you took an enormous effort on my part (creating an rclone setup on Linux, systemd units etc).

@nielash this bisync seems to be exactly what I’m looking for, thanks! Is it considered a stable/safe feature if configured correctly?

1 Like

fwiw, for testing, use the command line and a debug log.
once it all is working, then systemd.


that is an assumption, not a guarantee that can be trusted.


in any event, hopefully bisync will solve your issue.
do you have enough free local space to store all the files in gdrive?

That’s what I did from the start :slight_smile: And it worked; it’s only recently that I’ve realized it is very slow (due to the way it operates, from what I understand).

More than enough. My Gdrive is the default one (less than 15 GB), so I’m good on that front.

Digging a bit deeper in bisync docs, it seems it is basically an rsync of VFSs of sorts. I’ll have to think about this approach and its merits compared to the mounted VFS approach.

Thanks everyone for the help, rclone is an awesome piece of software!

1 Like

if you want all the files to stay in the cache:
vfs-cache-max-size=20G vfs-cache-max-age=9999y
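in the .mount unit from the first post, that would look something like this (a sketch; keep max-size comfortably above the total size of the drive):

```ini
[Mount]
Type=rclone
What=gdrive:
Where=/home/user/gdrive
Options=rw,_netdev,args2env,vfs-cache-mode=full,vfs-cache-max-size=20G,vfs-cache-max-age=9999y
```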

in the forum, i have written about ways to force 100% of files and 100% of their contents into the cache.
for example,
rclone md5sum /home/user/gdrive --download


fwiw, i have a summary of the two rclone caches

1 Like

Yes. It was in beta for many years, but has been deemed stable since v1.71.

Depends what you mean... bisync doesn't use a VFS (unlike mount, for example). If you bisync between local and some cloud remote, the files on the local side are real, normal files (not virtual ones).

I think the easiest way to think about it is that it's like rclone sync, but it syncs in two directions instead of one.

1 Like

So bisync looks great in the sense that it creates normal entries in the filesystem, hence no issues with VFS dir caching (from what @asdffdsa has kindly provided in the post above). However, my question is whether the mount-type syncing is safer/faster/less obtrusive/you-name-it compared to the bisync-type syncing.

If it helps, the content of my Gdrive is not music but rather useful documents and such, so avoiding file corruption is of primary importance. I also need snappy opening of these documents (either writable office ones or PDFs), especially when studying PDFs in the order of 30-100 MB.

1 Like

In my (subjective and biased) opinion, bisync has the upper hand on stability/safety/performance, while mount has the upper hand on convenience.

bisync has a much easier task than mount. It doesn't have to be an operating system or a file server. It doesn't have to manage caches or sparse files. And the user doesn't have to access files through a fuse/nfs mount layer (which can itself be a source of bugs).

mount's killer feature is the ability to mount a filesystem without having to store it all on disk. If you don't need that feature, the added complexity of mount becomes harder to justify (in my opinion).

1 Like

(@nielash I wanted to also mark your post as the solution, but I can select only a single one)

Considering that my home setup is Linux-based, how would I go about setting this up? That is:

  1. if I want everything to end up under /home/user/gdrive, what command should I use? How should the following be modified?
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run

EDIT: That one should probably be:

rclone bisync gdrive: /home/user/gdrive --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run

In the first post I provided the contents of my rclone.conf and the respective .mount systemd unit, if that helps.

  2. I think I saw a reference to driving this with systemd timers. Could you provide any instructions on that part?

  3. Yesterday, while using the mount approach as suggested by @asdffdsa, I noticed some lag spikes. Stopping the .mount eradicated the issue. I believe this is due to the way the mount operates: rclone perhaps can only pull (no Google pushes), so it has to rescan all 15 GB for changes every now and then, right (which would also be an issue with bisync)?

And something else: consider that I am creating a file locally, inside the folder that is either VFS-mounted or bisync’ed.

  1. does either one of these methods have an advantage over the other regarding how fast my changes are replicated from the local file system to my Gdrive?

  2. does either one of these methods have an advantage over the other regarding how the network and CPU usage of my host is impacted?

1 Like


Made a test run, but it did not end well. I ended up with a couple of .lst-dry-new files, followed by .lst-new ones. When I removed the --resync option, rclone complained that no .lst files existed, which was true (under ~/.cache/rclone/bisync the existing files were either .lst-dry-new or .lst-new).

1 Like

That's hard to answer, because which flags you use will depend on the specifics of what you want. But in general, that command looks like a good place to start, and you can tweak it as necessary if there's something you want to adjust. --dry-run should be removed when you're done testing and ready to run the command for real. And --resync should be removed after the first "real" run.
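To make the sequence concrete, here is a sketch of the lifecycle, using the remote and path from your edit (adjust the flags to taste):

```shell
# First real run: --resync establishes the baseline listings
rclone bisync gdrive: /home/user/gdrive --create-empty-src-dirs \
  --compare size,modtime,checksum --resilient -MvP --drive-skip-gdocs --fix-case --resync

# Every run after that: identical command, minus --resync
rclone bisync gdrive: /home/user/gdrive --create-empty-src-dirs \
  --compare size,modtime,checksum --resilient -MvP --drive-skip-gdocs --fix-case
```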

Maybe you're thinking of this?

Google Drive is one of the backends that supports ChangeNotify, so I think mount should be able to listen for changes. bisync does need to relist at the beginning of each run, but there are some adjustments you can make if it's taking too long -- for example, you can omit checksum (compare only size and modtime) which should speed things up on the local side (where hashes have to be calculated manually). On the drive side, the number of files will affect the speed more than their size. I find that --drive-pacer-min-sleep=10ms also helps significantly with speed.
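For example, a speed-tuned variant might look like this (a sketch; dropping checksum is only appropriate if comparing size and modtime is acceptable for your data):

```shell
rclone bisync gdrive: /home/user/gdrive \
  --compare size,modtime \
  --drive-pacer-min-sleep=10ms \
  --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
```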

As for what specifically caused the behavior you saw in mount, that is too hard to say without more info. Could be a lot of things.

The presence of .lst-dry-new files suggests to me that the problem is the --dry-run flag. It should be removed when you're ready to proceed with a real run. A real run will not be able to see a listing file you generated during a dry run.

Were you able to successfully complete a --resync without --dry-run? Make sure you've done one successful --resync before you proceed to removing --resync.

If you can successfully bisync with --resync but not without, it would be helpful to see a log to better diagnose what's going on.

In general, mount will probably start syncing them faster, but it somewhat depends on your settings (--vfs-cache-poll-interval and --vfs-write-back in mount, and your cron interval in bisync).

mount will very likely require more resources than bisync. That's partly due to differences in design. A mount is continuously running and listening for changes. A bisync process only runs when you schedule it to.
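As for scheduling, a minimal systemd user timer pair might look like this (hypothetical unit names; adjust the interval and flags to your setup):

```ini
# ~/.config/systemd/user/gdrive-bisync.service
[Unit]
Description=bisync gdrive with /home/user/gdrive

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone bisync gdrive: /home/user/gdrive --create-empty-src-dirs --compare size,modtime --resilient --drive-skip-gdocs --fix-case
```

```ini
# ~/.config/systemd/user/gdrive-bisync.timer
[Unit]
Description=Run gdrive bisync periodically

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with: systemctl --user enable --now gdrive-bisync.timer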

2 Likes