The problem is not directly related to rclone, but a proper rclone setup might offer the solution! Please allow me to briefly explain and ask whether there is something I can do to make this work. Thank you in advance for any help!
What I'm trying to achieve: Gluster replication (or rsync, or perhaps a different bidirectional sync solution) between a local ZFS folder on Server1 and an rclone pCloud VFS folder on Server2 (the tested, working mount command is pasted below). The end goal is to have some sort of HA storage with off-site fail-over. Is this feasible?
Problem: I didn't manage to mount the rclone VFS folder into Gluster on Server2. I even tried mount --bind with two separate folders, one for rclone and one for Gluster, and it almost worked! (I can elaborate on what I tried if that's useful.) I suspect it might be possible if I configured rclone properly. If Gluster is not the way to go, what else could I try to achieve this two-server HA solution?
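If Gluster turns out to be a dead end, one alternative I'm considering is a scheduled rclone bisync between Server1's ZFS folder and the pcloud-crypt remote directly, skipping the mount entirely. Something like this, as an untested sketch; the /SSD-ZFS/data path is hypothetical, and the flags are my guesses at sensible safety settings:

```shell
#!/bin/sh
# One-time initialization: establish the bisync baseline between the
# local ZFS folder on Server1 and the encrypted pCloud remote.
# rclone bisync refuses to run without an initial --resync pass.
rclone bisync /SSD-ZFS/data pcloud-crypt:data \
    --config /root/.config/rclone/rclone.conf \
    --resync

# Subsequent runs (e.g. from cron every 15 minutes): propagate changes
# in both directions. --check-access aborts unless RCLONE_TEST marker
# files exist on both sides, and --max-delete limits the damage if one
# side looks unexpectedly empty.
rclone bisync /SSD-ZFS/data pcloud-crypt:data \
    --config /root/.config/rclone/rclone.conf \
    --check-access --max-delete 50 --verbose
```

This wouldn't be real-time replication like Gluster, but it might be enough for my fail-over scenario.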
Many thanks for any help!
Run the command 'rclone version' and share the full output of the command.
os/version: debian 11.7 (64 bit)
os/kernel: 5.15.107-2-pve (x86_64)
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount pcloud-crypt: /SSD-ZFS/rclone-pcloud/mount --config /root/.config/rclone/rclone.conf --vfs-cache-mode full --vfs-cache-max-size 500G --cache-dir /SSD-ZFS/rclone-pcloud/cache --allow-other
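For completeness, if it matters: the mount could be kept alive across reboots on Server2 with a small systemd unit along these lines (a sketch built from the command above; the unit name and binary path are my assumptions):

```ini
# /etc/systemd/system/rclone-pcloud.service (hypothetical name)
[Unit]
Description=rclone VFS mount of pcloud-crypt
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount pcloud-crypt: /SSD-ZFS/rclone-pcloud/mount \
    --config /root/.config/rclone/rclone.conf \
    --vfs-cache-mode full \
    --vfs-cache-max-size 500G \
    --cache-dir /SSD-ZFS/rclone-pcloud/cache \
    --allow-other
ExecStop=/bin/fusermount -u /SSD-ZFS/rclone-pcloud/mount
Restart=on-failure

[Install]
WantedBy=multi-user.target
```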
Is it possible to somehow "tie" rclone to Ceph for off-site storage? If so, I'll certainly try this out!
What I ideally want is a large local store on Server1 (current hardware: 4 HDDs in a mirrored ZFS configuration, about 20 TB total) and remote rclone storage on a very small and light Server2 (about 500 GB of VFS cache on a 1 TB SSD, fronting 10 TB of off-site storage), everything bi-synced so that I could shut down either node (for maintenance, reduced power consumption, etc.) and everything would continue to work flawlessly!
I will give Ceph a second look to see if such a configuration, or similar, is possible!
Thank you, and please allow some follow-up questions if I can't figure out how to achieve this with Ceph.
At first glance, it seems I should use at least 3 nodes... I assume there are ways to work around these requirements, but before I dive into this completely new territory, would you mind sketching out a possible working Ceph scenario using the hardware I already have (Server1 with 4 identical HDDs; Server2 with a 1 TB SSD for caching to online storage, using rclone or something native to Ceph)?
Well, it seems to me that Ceph cannot do what is outlined in this thread. Furthermore, implementing it properly would mean buying a lot more new hardware, a route that, sadly, I cannot take. Anyway, thank you for your suggestion.