Distributed rclone instances on local network

I have a situation where I have one Linux server running rclone with a set of cloud remotes, cache remotes in front of them, a local backend, and union remotes combining them, which I use when I am checking for duplicates or writing to those remotes. rclone mount runs under systemd to expose the mounts to the OS and some management applications. But I also need to expose the Linux server's cache remotes and the local: union to another server (Windows) on my local network, where I am running a Plex server. I am trying to figure out how best to mount these caches remotely from the Windows server.

Example of the Linux remotes:

[uni_anime]
type = union
remotes = google_cache_anime:/anime gdrive2-cache:/anime local:/mnt/local/Media/anime

[google]
type = drive
client_id = .apps.googleusercontent.com
client_secret =
scope = drive
token =
root_folder_id =

[google_cache_anime]
type = cache
remote = google:/anime
plex_url = https://.plex.direct:32400
plex_username =
plex_password =
info_age = 48h
chunk_size = 128M
chunk_total_size = 4T
chunk_path = /mnt/cache/rclone/chunk
db_path = /home/user/.cache/rclone/cache-backend/goog_anime/
db_wait_time = 6
chunk_clean_interval = 1000h
tmp_upload_path = /mnt/cache/rclone/tmp
tmp_wait_time = 4w
read_retries = 2
writes = false
workers = 3

The ExecStart command from my systemd service file:

/usr/bin/rclone mount uni_anime: /mnt/unionfs/anime --rc --rc-addr=10.10.10.10:5400 --allow-non-empty --allow-other --dir-cache-time=2h --max-read-ahead=128k --no-checksum --use-mmap --buffer-size=2G --drive-skip-gdocs --poll-interval=15m --attr-timeout=5s --vfs-read-chunk-size=32M --vfs-read-chunk-size-limit=14T --vfs-cache-max-age=336h --vfs-cache-mode=minimal --config=/home/user/.config/rclone/rclone.conf --log-file=/home/user/logs/anime.log
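The rest of the unit is the usual rclone-mount boilerplate, roughly along these lines (reproduced from memory, so treat it as a sketch rather than an exact copy):

[Unit]
Description=rclone mount for uni_anime
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
User=user
# same ExecStart as above; most of the flags are omitted here for brevity
ExecStart=/usr/bin/rclone mount uni_anime: /mnt/unionfs/anime \
    --rc --rc-addr=10.10.10.10:5400 \
    --allow-other \
    --config=/home/user/.config/rclone/rclone.conf \
    --log-file=/home/user/logs/anime.log
ExecStop=/bin/fusermount -uz /mnt/unionfs/anime
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target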

I am looking for a solution where I can mount the cache:/path and local:/path remotes from the Windows machine using a remote rclone mount there.

I thought I could use --rc-serve in my systemd file, but I have no idea how to mount that on the Windows side - is it the same as adding it as a backend/remote there?

I guess I could also create systemd services running "rclone serve uni_anime:" and the like for the unions of my choice, and if I have multiple Plex servers I would need multiple caches (with workers configured to correspond to each cache).
But how do I configure rclone serve so that my chunk and cache performance are not negatively impacted? When a library scan is taking place, I don't want rclone to download the entire file each time. Looking at the documentation, most of the serve protocols allow file seeking, so I assume that once I add that remote on the Windows host I can also use the VFS options with the Windows rclone mount and it would work?
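To make that concrete, something like this is what I have in mind (untested - the WebDAV protocol, the port, and the Windows-side remote name are just placeholders I picked, not anything from my current setup):

# on the linux server, one serve per union (sketch only)
/usr/bin/rclone serve webdav uni_anime: --addr 10.10.10.10:8080 \
  --dir-cache-time 2h --vfs-cache-mode minimal \
  --config=/home/user/.config/rclone/rclone.conf

# windows-side rclone.conf would then get a remote pointing at it:
[anime_webdav]
type = webdav
url = http://10.10.10.10:8080
vendor = other

# and the mount on the windows box:
rclone mount anime_webdav: X: --dir-cache-time 2h --vfs-cache-mode minimal

As far as I can tell, the cache backend settings (chunk_size, workers, chunk_path and so on) would stay on the Linux side either way, since the serve runs on top of google_cache_anime via the union.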

Also, as a sanity check - is there anything wrong with my remote cache config or the rclone mount flags I am using? It works now, but I am not sure whether what I am using actually makes sense.

You could also serve the data over SMB/Samba, which might work OK.
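Something like this on the Linux side should be enough to get started (a minimal sketch - the share name and user are just examples):

# addition to /etc/samba/smb.conf
[unionfs]
   path = /mnt/unionfs
   read only = yes
   valid users = user

# then on the windows box, map it as a drive:
net use Z: \\10.10.10.10\unionfs /user:user

Since your rclone mounts already expose everything under /mnt/unionfs, Samba just re-exports the FUSE mounts and all the cache/chunk handling stays on the Linux side.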


Wow, after 2 minutes of configuring Samba, SMB access to the rclone mounts works very well.

Sorry for overthinking this..

