Rclone setup for a mixed local+cloud NAS storage system (sync/backup, access via Synology SMB)

What is the problem you are having with rclone?

Hey there! This post is not about a problem but rather looking for advice on using rclone. My 20-person company has ~250TB of capacity split across multiple RAID6 volumes (3 main ones) on Synology NAS units (for historical and convenience reasons), synced to Backblaze B2 cloud storage via Synology Cloud Sync.

We are looking to migrate to a mixed local+cloud system where we would have a gigantic All_Projects root folder on one local NAS (one of our existing Synology units or a new dedicated unit), in which:

  • the project folders on the NAS local storage would only be those currently being processed by our team, for which deliverables have not yet been produced or were produced less than 6 months ago.
  • some sub-project folders could be flagged as "to-be-archived", to be stored only on the cloud storage and removed from the local storage of the NAS
  • from time to time, some projects on the cloud could be flagged for download back to the local NAS for further re-processing.

I've posted this request on the Backblaze subreddit and someone suggested rclone could be used for this, hence this post. Ideally, we are looking for a system where:

  • we could access the NAS folders via SMB in Windows Explorer (both local folders and remote-only ones on B2). Is the union remote or mergerfs the way to do it?
  • anyone (after some training, not only IT people) could decide which folders to move to the cloud only, while all the others would remain synced. This could be really similar to the Mountain Duck UI on Windows/Mac, ExpanDrive server, or the classic Dropbox and Google Drive utilities. rclone-browser seems like the right tool for setting this up.
  • a UI could help monitor the sync state of all the folders, plus standard metrics like the used capacity of the local volumes and cloud buckets
  • although Synology Cloud Sync does no versioning (hence an attack vector for ransomware on our files), Backblaze does file versioning. Using rclone sync --backup-dir or a daily rclone copy to ensure versioning would probably be the way to go, as shown in this thread (see the sketch after this list).
  • bonus points if the system can be set up on the Synology itself - which we can manage via SSH anyway.
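For the versioning bullet, something like this daily job is roughly what I have in mind (just a sketch; the b2: remote name, bucket names and paths are placeholders):

```
# daily sync of the live projects to B2; anything deleted or overwritten on the NAS
# is moved into a dated folder of a separate versions bucket instead of being lost
rclone sync /volume1/All_Projects b2:all-projects-live \
  --backup-dir b2:all-projects-versions/$(date +%Y-%m-%d) \
  --fast-list --transfers 8 \
  --log-file /var/log/rclone-daily-sync.log
```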

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2 Cloud Storage
By the way, is there a more recent overview of preferred cloud storage providers among rclone users? Backblaze seems like the cheapest for storage (with some fees for downloads), but a lot of people on the rclone forums seem to rely on Google or Dropbox - what's the typical price point for those?

Thanks a lot for your time!

hello and welcome to the forum,

hi, that was me on that thread.

yes, a rclone mount running on a synbox can be shared over SMB.
over the local lan and over a vpn such as openvpn or tailscale.
I have done that.
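for example, something like this on the synbox, then share the mount point with the normal synology SMB share. just a sketch, the remote name, mount point and cache size are examples:

```
# mount the cloud remote; files are cached locally as they are read or written
rclone mount b2:all-projects-archive /volume1/cloud-archive \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --allow-other \
  --daemon
```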

rclone is a command line app, runs fine over ssh.
and if you want to use the rclone gui, that should also run over ssh.
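for example, could run the web gui on the synbox and tunnel it over ssh. just a sketch, the port, hostname and credentials are examples:

```
# on the synbox: serve the rclone web gui on localhost only
rclone rcd --rc-web-gui --rc-addr 127.0.0.1:5572 --rc-user admin --rc-pass changeme

# on the workstation: forward the port, then browse to http://localhost:5572
ssh -L 5572:127.0.0.1:5572 admin@synbox.local
```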

could also use a rclone mount to move data between local and cloud.
in that way, could use windows explorer.
rclonebrowser is also a good choice.
imho, not going to want to mix rclone and the utilities from the cloud provider.

every use case is different.
--- I keep recent data in wasabi, an s3 clone known for hot storage, $6.00/TB/month
no charge for api calls
no charge for downloads
has a mandatory retention period.
--- i keep archived data in aws s3 deep glacier, $1.00/TB/month
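roughly like this, just a sketch, remote names, buckets and paths are examples:

```
# recent/hot data goes to wasabi
rclone sync /volume1/current-projects wasabi:projects-hot --fast-list --progress

# old/cold data goes to aws s3 with the deep archive storage class
rclone move /volume1/archived-projects aws:projects-cold \
  --s3-storage-class DEEP_ARCHIVE --progress
```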

Hey asdffdsa, and thanks for the fast reply - plus the helpful insights on the other thread!

  1. Thanks for confirming rclone (+gui) can run on a synobox and that an rclone mount on the synbox can be shared over SMB
  2. The idea to have
  • one B2 bucket, unlimited in size, which would be the archive of all our project folders,
  • plus the local volume itself on the NAS (which would also be synced/backed up to another bucket) is interesting.

A few questions then:

  • when doing so and sharing the mount over SMB, if someone does a copy from A to B via Windows Explorer on another computer, would it be executed on the NAS directly? Or would the files be downloaded from the NAS to the Windows computer where the copy runs, and then uploaded from that computer to the cloud (or vice versa)?
  • Would rclone-browser let me manage the copy from the NAS volume to the mounted cloud volume itself?
  3. I've read on some forums that wasabi has lower bandwidth than Backblaze, did you experience that? The S3 Glacier Deep Archive backup via rclone is also really interesting. The docs state that rclone only speaks the S3 API and not the Glacier Vault API, so rclone cannot directly access Glacier Vaults, and you first have to restore a folder/file before accessing it, via rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]. Once the restore has been done and the retrieval period is over, how can you access the folder via rclone? Would that involve a new mount?

Finally, having a single cloud bucket that would be both the cloud-only archive and the sync target for the current project folders could be more straightforward (set-it-and-forget-it), less prone to errors from the team, and might help avoid some merge conflicts. Hence I'd like to dig a little deeper to see if I can do this without a separate bucket. Have you used the union remote for rclone? Do you know how folders are differentiated between one upstream and another with the union remote? Plus, when someone creates a new folder via Explorer, how is it chosen where it gets created (on which upstream)?
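From my reading of the union docs, I imagine the setup would be something like this (just a guess on my part; the remote names, paths and policies are examples), with create_policy being what decides which upstream a new folder lands on:

```
# rclone.conf: merge the local volume and the B2 archive into one tree
[allprojects]
type = union
upstreams = /volume1/All_Projects b2:all-projects-archive
create_policy = epmfs   # "existing path, most free space": decides where new files/folders are created
action_policy = epall
search_policy = ff
```

The merged allprojects: remote could then be mounted on the NAS and that mount point shared over SMB, if I understand correctly.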

And one last question: is there a GUI for rclone which is as user-friendly as Cyberduck/Mountain Duck or FileZilla Pro to handle mounts, options, sync tasks, etc.?

Thanks again for your time,

the rclone mount on the synbox is shared over SMB, so the behavior would be the same as if sharing a local folder on the synbox.
the difference is that
if the file is not already in the rclone local vfs file cache,
then rclone would have to download it into that cache and then the client computer could access it.
as a side note, can run tailscale vpn on the synbox, which is what i do.
that way, remote employees can access the SMB share.

well, you would use rclonebrowser to copy direct to cloud,
not sure of the logic of using it to copy to the local rclone mount.

well, in all my testing, on the whole, nothing is faster than wasabi.
no issue to saturate 1Gbps internet connection.
but the real advantage is with api calls, little to no throttling; nothing i have tested and nothing i have seen on the forum comes close.
for example
https://forum.rclone.org/t/fastest-way-to-check-for-changes-in-2-5-million-files/25957/11

the problem with backblaze s3 is that it lacks support for MFA.
for example,
if someone got hold of your rclone config file, or the s3 keys,
then they could access all the files in backblaze, could delete all the files, could ransomware them.
with MFA, that is not possible.
in my case, for every bucket, there is an IAM user and for that IAM user, MFA is enabled.
so even rclone, on its own, cannot read/delete the files.
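for example, one common way to get part of that protection is a bucket policy that denies deletes unless the request was MFA-authenticated. just a sketch, the bucket name is an example:

```
# deny object deletion for any request that was not MFA-authenticated
cat > deny-delete-without-mfa.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyDeleteWithoutMFA",
    "Effect": "Deny",
    "Principal": "*",
    "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
    "Resource": "arn:aws:s3:::projects-cold/*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
  }]
}
EOF
aws s3api put-bucket-policy --bucket projects-cold --policy file://deny-delete-without-mfa.json
```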

i have never used a vault, just regular buckets.
just need to set https://rclone.org/s3/#s3-storage-class in the command or in the remote config file
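for example, either per command or baked into the remote. sketch, remote and bucket names are examples:

```
# per transfer, on the command line
rclone copy /volume1/archived-projects/job-001 aws:projects-cold/job-001 \
  --s3-storage-class DEEP_ARCHIVE

# or once, in the remote's section of rclone.conf, so every upload goes to deep archive
#   [aws]
#   type = s3
#   provider = AWS
#   storage_class = DEEP_ARCHIVE
```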

that applies to any s3 provider or other providers such as google cloud storage
glacier storage only makes sense for long term archive.
I use it mostly to store old veeam backup images, and never plan to need them.

once the files are restored, they are in the same bucket as before.
can use any rclone command.
no new mount should be needed.
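for example, the flow looks roughly like this. sketch, the bucket, path, priority and lifetime are examples:

```
# ask aws to restore everything under the prefix, keep it readable for 7 days
rclone backend restore aws:projects-cold/job-001 -o priority=Standard -o lifetime=7

# later (standard retrieval from deep archive can take up to ~12 hours),
# pull it back to the nas with any normal command
rclone copy aws:projects-cold/job-001 /volume1/All_Projects/job-001 --progress
```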

never used it. tho many other rcloners do.

the only GUI for rclone that i know about is rclonebrowser.

the advantage of using s3 is that there are a huge number of tools.
i use the paid versions of cloudberry explorer and s3browser.
