the synbox share is shared over SMB, so the behavior would be the same as sharing a local folder on the synbox.
the difference is that
if the file is not already in the rclone local VFS file cache,
then rclone has to download it into that cache before the client computer can access it.
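a minimal sketch of that kind of mount, assuming a remote named `wasabi01` and placeholder paths (the cache size/age values are just examples, tune to taste):

```shell
# mount the remote with a full VFS file cache; on first access a file is
# downloaded into the local cache, then served locally to SMB clients.
# remote name "wasabi01" and all paths here are illustrative assumptions.
rclone mount wasabi01:bucket /mnt/wasabi \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 24h \
  --daemon
```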
as a side note, you can run tailscale vpn on the synbox, which is what i do.
that way, remote employees can access the SMB share.
well, you would use rclonebrowser to copy direct to the cloud,
not sure of the logic of using it to copy to the local rclone mount.
well, in all my testing, on the whole, nothing is faster than wasabi.
no issue saturating a 1Gbps internet connection.
but the real advantage is with api calls: little to no throttling. nothing i have tested, and nothing i have seen in the forum, comes close.
for example
https://forum.rclone.org/t/fastest-way-to-check-for-changes-in-2-5-million-files/25957/11
the problem with backblaze s3 is that it lacks support for MFA.
for example,
if someone got hold of your rclone config file, or the s3 keys,
then they could access all the files in backblaze, delete them all, or ransomware them.
with MFA, that is not possible.
in my case, for every bucket there is an IAM user, and for that IAM user, MFA is enabled.
so even rclone, on its own, cannot read/delete the files.
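as an illustration of that setup, on a provider with full IAM support you can attach a policy that denies deletes unless the request was made with MFA. this is a hedged sketch, not my exact policy; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDeleteWithoutMFA",
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}
```

with a policy like that in place, rclone holding only the access keys cannot delete objects, since it never presents MFA.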
i have never used a vault, just regular buckets.
just need to set https://rclone.org/s3/#s3-storage-class in the command or in the remote config file.
that applies to any s3 provider, and to other providers such as google cloud storage.
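both ways of setting it, sketched with a placeholder remote named `aws01`:

```shell
# per-command: upload straight to a colder tier with a flag
rclone copy /local/backups aws01:archive-bucket --s3-storage-class GLACIER

# or set it once in the remote config (rclone.conf):
# [aws01]
# type = s3
# provider = AWS
# storage_class = GLACIER
```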
glacier storage only makes sense for long-term archive.
I use it mostly to store old veeam backup images that i never plan to need.
once the files are restored, they are in the same bucket as before.
can use any rclone command.
no new mount should be needed.
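for reference, rclone can request the restore itself via the s3 backend command; remote and bucket names here are placeholders:

```shell
# request a temporary restore of glacier objects; once the restore
# completes, the files are readable in the same bucket as before.
# lifetime is in days, priority can be Standard, Expedited or Bulk.
rclone backend restore aws01:archive-bucket/veeam -o priority=Standard -o lifetime=3
```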
never used it, tho many other rcloners do.
the only GUI for rclone that i know of is rclonebrowser.
the advantage of using s3 is that there are a huge number of tools.
i use the paid versions of cloudberry explorer and s3browser.