Is a 2-node HA fail-over storage configuration possible using Rclone?

What is the problem you are having with rclone?

The problem is not directly related to rclone, but a proper rclone setup might offer the solution! Please allow me to elaborate briefly and ask whether there is something I can do to make this work. Thank you in advance for any help!

What I'm trying to achieve: Gluster replication (or rsync, or maybe a different bi-sync solution) between a local ZFS folder on Server1 and an rclone-pCloud VFS folder on Server2 (the tested, working mount command is pasted below). The end goal is to have some sort of HA storage with off-site fail-over. Is this feasible?

Problem: I didn't manage to mount the rclone VFS folder into Gluster on Server2 (I even tried to bind-mount two separate folders, one for rclone and one for Gluster, and it almost worked! I can elaborate further on what I tried if it's useful), but I suspect it might be possible if I configured rclone properly; a sketch of what I attempted is below. If Gluster is not the way to go, what else could I try to achieve this 2-server HA solution?
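
For context, here is roughly what I attempted (a sketch only, not a working recipe; the volume name gv0 and the brick paths are hypothetical, and in my tests the brick on the rclone mount was rejected):

  # On Server2: mount the crypt remote via rclone (full command further below)
  rclone mount pcloud-crypt: /SSD-ZFS/rclone-pcloud/mount \
      --vfs-cache-mode full --allow-other --daemon

  # On Server1: try to create a 2-way replicated Gluster volume with one
  # brick on local ZFS and one brick on the rclone mount of Server2
  gluster volume create gv0 replica 2 \
      server1:/SSD-ZFS/brick1 \
      server2:/SSD-ZFS/rclone-pcloud/mount/brick1
  gluster volume start gv0

  # Clients would then mount the replicated volume from either node
  mount -t glusterfs server1:/gv0 /mnt/ha-storage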

Many thanks for any help!

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: debian 11.7 (64 bit)
  • os/kernel: 5.15.107-2-pve (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

pcloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount pcloud-crypt: /SSD-ZFS/rclone-pcloud/mount \
  --config /root/.config/rclone/rclone.conf \
  --vfs-cache-mode full \
  --vfs-cache-max-size 500G \
  --cache-dir /SSD-ZFS/rclone-pcloud/cache \
  --allow-other

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

Instead of DIY HA, why not use a proven open-source solution like Ceph?

Is it possible to somehow "tie" Rclone to Ceph for off-site storage? If so, I'll certainly try this out!

What I ideally want is to have large local storage on Server1 (current hardware: 4 HDDs in a mirrored ZFS configuration totaling about 20 TB) and remote rclone storage on a very small and light Server2 (using about 500 GB of a 1 TB SSD as VFS cache in front of 10 TB of off-site storage), everything bi-synced so that I could shut down either node (for maintenance, reduced power consumption, etc.) and everything would continue to work flawlessly!
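
For the bi-sync part, rclone's own bisync command might even cover it without Gluster (a minimal, untested sketch; the /SSD-ZFS/data path and the remote folder are hypothetical):

  # The first run must establish the baseline listings on both sides
  rclone bisync /SSD-ZFS/data pcloud-crypt:data --resync

  # Subsequent runs (e.g. from cron) propagate changes in both directions
  rclone bisync /SSD-ZFS/data pcloud-crypt:data --verbose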

I will give Ceph a second look to see if such a configuration, or something similar, is possible!
Thank you, and please allow some follow-up questions in case I can't figure out how to achieve this with Ceph.

Out of the box, you can tie Ceph to any S3 cloud.
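
For example, since Ceph's RADOS Gateway speaks the S3 protocol, rclone can talk to it through its S3 backend (a sketch only; the remote name, endpoint, and bucket are hypothetical, and 7480 is merely the default RGW port):

  # rclone.conf entry pointing at a Ceph RADOS Gateway
  [ceph]
  type = s3
  provider = Ceph
  endpoint = http://server1:7480
  access_key_id = XXX
  secret_access_key = XXX

  # Push a Ceph bucket to the encrypted pCloud remote for off-site copies
  rclone sync ceph:mybucket pcloud-crypt:offsite-backup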

At first glance, it looks like I should use at least 3 nodes... I assume there are ways to circumvent this requirement, but before I dive into this completely new territory, would you mind sketching out a possible working Ceph scenario using the hardware I already have (Server1 with 4 identical HDDs, Server2 with a 1 TB SSD for caching to online storage using rclone or something native to Ceph)?

You will find plenty of resources on the Internet, e.g.:

A good opportunity to do some research and learning :)

Well, it seems to me that Ceph cannot do what is outlined in this thread. Furthermore, implementing it properly would mean buying a lot of new hardware, a route that, sadly, I cannot take. Anyway, thank you for your suggestion.

Is the problem highlighted in this thread, namely creating a Gluster brick out of an rclone mount, related to this other thread about the lack of xattr support: https://forum.rclone.org/t/questions-regarding-extended-attributes/10553? If so, is there any news in this regard? Will something like this be possible in the future using rclone, or rather not?
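
For what it's worth, one can check whether a mount supports extended attributes at all with setfattr/getfattr from the attr package (the test path below is just an example):

  # Try to set and read back a user xattr on a file inside the rclone mount;
  # "Operation not supported" would indicate missing xattr support.
  # Gluster bricks additionally need trusted.* xattrs, which require root.
  touch /SSD-ZFS/rclone-pcloud/mount/testfile
  setfattr -n user.test -v hello /SSD-ZFS/rclone-pcloud/mount/testfile
  getfattr -n user.test /SSD-ZFS/rclone-pcloud/mount/testfile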

Thanks for any help!
