Is it possible for multiple rclone mounts to share the same VFS cache?
Right now I have a separate mount for each service. I would like them all to use the same cache, but I'm wondering if this will cause strange issues.
Additionally, I have another question: what happens when a mount runs out of quota? Is there a simple command I can use in a bash script to restart the mount if it runs out of quota (e.g. with another user)?
Not really, as neither mount is aware of what the other is putting in the cache, so sharing one would just confuse them.
You get an error writing to the mount like you would on a full disk.
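Since it fails like a full disk, a restart script can detect the condition with a simple write probe. This is only a sketch; the mount path and probe filename are my own placeholders, not anything rclone-specific:

```shell
# Sketch: a quota-exhausted mount fails writes like a full disk, so a
# write probe can drive a restart script. The probe filename is arbitrary.
can_write() {
    # succeeds only if a small file can be created (and then removed) there
    touch "$1/.rclone_write_probe" 2>/dev/null && rm -f "$1/.rclone_write_probe"
}

if ! can_write /mnt/gmedia_1; then
    echo "mount looks full or out of quota"
fi
```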
For which backend? You can have another remote with another user, or stop the mount and restart it with a new user. There is work being done with Google Drive to re-authenticate a running mount. You could also look at the new union remote, as work is being done there too and you could combine a few remotes with it. It really depends on what you are trying to do.
Ah sorry I should have been more clear, I'm currently running mergerfs using essentially the same setup you have (I used your scripts as a template).
So local and a gdrive merged together.
For union, you're suggesting I mount two different drives together in a union and then have mergerfs merge the union with the local dir? That way if one gdrive fails I would at least have read-only access to the other drive in the union, right? (It seems that union only supports writing to one drive.)
Something like this?
local+union (mergerfs) -> gdrive1+gdrive2 (union) -> gdrive
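For reference, that layout would look something like this in rclone.conf. This is only a sketch: `gdrive1:` and `gdrive2:` are assumed to already exist, and `upstreams` is the new union syntax (older versions used `remotes`):

```ini
# Hypothetical union of two Google Drive remotes; with the current union
# remote, writes still land on only one upstream.
[gunion]
type = union
upstreams = gdrive1: gdrive2:
```

mergerfs would then merge the local directory with the mounted `gunion` directory into the final merged tree.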
Is there a particular feature request or GitHub issue I could follow regarding re-authenticating an existing mount?
I could write a shell script to do it, but I'm not quite sure how I would detect whether the mount is dead in order to kill it and re-authenticate (is there an RC command for this)?
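I don't know of a dedicated rc command for this, but one workable proxy is a timeout on a directory listing, since a dead FUSE mount tends to hang rather than return an error. A rough sketch, where the mount point and the fallback remote name `gmedia_alt:` are assumptions:

```shell
# Sketch: treat the mount as dead if a listing doesn't finish in 10s
# (a hung FUSE mount blocks forever, so checking the path alone isn't enough).
is_alive() {
    timeout 10 ls "$1" > /dev/null 2>&1
}

MOUNT_POINT=/mnt/gmedia_1          # assumption: your mount point
FALLBACK_REMOTE=gmedia_alt:        # assumption: same backend, different user

if [ -d "$MOUNT_POINT" ] && ! is_alive "$MOUNT_POINT"; then
    fusermount -uz "$MOUNT_POINT"
    rclone mount "$FALLBACK_REMOTE" "$MOUNT_POINT" --daemon
fi
```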
This post covers re-authentication for Google Drive:
That's not my personal use case as other folks have talked about that in other posts. There is a new union remote update coming soon that has mergerfs like policies and such so that might be something to check out. There are a few posts on that as well. I personally use an unlimited drive and don't generate enough daily volume to ever come close to the 10TB download or per file quotas.
It is possible to start mounts via the rc in the latest beta, so you could run more than one mount in the same rclone process. These would share VFS caches for the same remote names I think.
The feature is a bit embryonic at the moment as it doesn't allow you to pass the VFS options yet, but you could start one mount with the VFS options you want and then start the further ones via the rc.
mount/mount: Create a new mount point {#mount/mount}
rclone allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone's cloud storage systems as a file system with FUSE.
If no mountType is provided, the priority is given as follows: 1. mount, 2. cmount, 3. mount2
This takes the following parameters:
- fs: a remote path to be mounted (required)
- mountPoint: a valid path on the local machine (required)
- mountType: one of mount, cmount, mount2; specifies the mount implementation to use
That is quite interesting. To be able to start consolidating these...
I was playing with this, but it doesn't appear to share the same cache.
If I start a mount and then run "rclone rc vfs/refresh" against it, results return immediately afterward. But if I then do an "rclone rc mount/mount" using the same fs, it does a full read without the cache.
EDIT: Okay, I'm wrong. Running the rc mount/mount after the vfs/refresh seems to invalidate the cache. If we run things like this:
The VFSes are cached so it should pick them up if you reuse them. Should being the operative word! They may also be falling out of the cache after 5 minutes...
You need to replace Encrypt_TD1, Encrypt_TD2 with your remote names and the /mnt/gmedia, /mnt/gmedia2 mount points with whatever directories you want them mounted to.
I've just tried the rc mount command with: /usr/bin/rclone rc mount/mount fs=gmedia_enc2: mountPoint=/mnt/gmedia_2 mountType=mount
Unfortunately I've found a bug: because I'm starting the original mount with --rc --rc-addr :5572, when I start another mount it tries to start another rclone instance with rc port 5572. This sadly fails because the original mount already has the rc on port 5572.
Here's my systemd file for example:
ExecStart=/usr/bin/rclone mount gmedia_enc: /mnt/gmedia_1 \
  --allow-other \
  --dir-cache-time 1000h \
  --log-level INFO \
  --log-file /opt/rclone/logs/rclone.log \
  --poll-interval 15s \
  --umask 002 \
  --user-agent "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36" \
  --rc \
  --rc-no-auth \
  --rc-addr :5572 \
  --vfs-read-chunk-size 32M \
  --vfs-cache-mode full \
  --vfs-cache-max-age 336h \
  --cache-dir /mnt/rclone_cache
ExecStop=/bin/fusermount -uz /mnt/gmedia_1
ExecStop=/bin/fusermount -uz /mnt/gmedia_2
ExecStop=/bin/fusermount -uz /mnt/gmedia_3
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true
ExecStartPost=/usr/bin/rclone rc mount/mount fs=gmedia_enc2: mountPoint=/mnt/gmedia_2 mountType=mount --rc-addr 127.0.0.1:5572
ExecStartPost=/usr/bin/rclone rc mount/mount fs=gmedia_enc3: mountPoint=/mnt/gmedia_3 mountType=mount --rc-addr 127.0.0.1:5572
I get the following logs:
2020/06/13 14:10:38 Failed to start remote control: start server failed: listen tcp :5572: bind: address already in use
You have something running already I'd surmise as those commands work fine.
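One quick way to confirm that before starting the second instance: probe the port. This sketch uses bash's /dev/tcp redirection, so it needs no extra tools:

```shell
# Probe whether anything is already listening on the rc port; bash's
# /dev/tcp connect attempt succeeds only if something accepts the connection.
port_in_use() {
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

if port_in_use 5572; then
    echo "port 5572 is already taken - another rclone rc running?"
else
    echo "port 5572 is free"
fi
```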
rclone --rc --rc-serve --rc-no-auth mount GD: /home/felix/test -v --rc-addr :5573
2020/06/13 10:32:40 INFO : Using --user felix --pass XXXX as authenticated user
2020/06/13 10:32:40 NOTICE: Serving remote control on http://[::]:5573/