Is rclone more efficient than an NFS mount?

So currently I use rclone to keep ~1.3k files open across 8 servers, and to scale up further I have been thinking of moving to local storage.

Assuming one server with a 10 Gbps link, serving as a backend to 8 or more directly connected servers, with a RAID of 10 HDDs (for enough read performance) — would that be more efficient or faster than rclone?

"Efficient" is a tough word there, as the two things you are comparing are quite different.

In general, local storage should always be faster since you are removing that distance from the cloud to you.

I have a great connection and peer well to the Drive API, and I still see about 8-10ms to the endpoint. With local storage you remove that entire part of every transaction, so it's going to be a whole different game in terms of speed.

We used to see that when you moved part of an application out of a data center to another data center.

NFS is built to be just that, and I would say it's definitely more efficient at sharing files. It has matured over the years, and really, sharing files is all it is supposed to do, so it does it very well. I can't say I've touched an NFS solution in years though, as most of the time it was replaced with a CIFS/SMB solution because Windows was involved.

Paying someone else for storage also means you shed the cost of power, cooling, redundancy, etc. — but if that cost is not a barrier, I'd pick local storage and share it over NFS.

Hmm, ok. As someone who has never used NFS or anything like it — just rclone...

Is it really crazy to try moving to a non-cloud solution? I really like the things rclone has, like chunked reads, bandwidth control, etc...

Lots of things I wouldn't have with NFS, to the point that I was considering using local storage with `rclone serve http`, and having the clients still connect using rclone...

Would that hurt performance much? Or should it still be an improvement over a cloud backend?
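A minimal sketch of that setup — local storage on the backend, `rclone serve http` in front of it, rclone on the clients. The paths, addresses, and remote name here are illustrative assumptions, not from this thread:

```shell
# On the storage server: serve a local directory over HTTP.
# The VFS flags keep rclone's chunked-read behavior on the serve side.
rclone serve http /mnt/raid/media \
  --addr :8080 \
  --read-only \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G

# On each client: define an http remote pointing at the server...
rclone config create storage http url http://10.0.0.1:8080

# ...then use it like any other remote, keeping per-client bandwidth control.
rclone mount storage: /mnt/storage --read-only --bwlimit 100M
```

The appeal of this layout is that the clients keep the same rclone features (chunked reads, `--bwlimit`, VFS caching) they'd use against a cloud backend — only the remote's URL changes.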

I moved to cloud as the cost for powering the home storage was becoming a bit cumbersome along with the maintenance behind it.

My gut tells me you'd be fine running `rclone serve http`, as it is probably close to NFS in efficiency; I don't think the difference in performance would matter much. If you can handle the latency of cloud storage for your needs now, either NFS or rclone on local storage would be leaps and bounds better.

Your scale is a bit different though so some validation / testing would make sense.

I'll run some benchmarks when I have enough disk speed to saturate a 10 Gbps uplink...

As far as I know, NFS doesn't have async reading, chunked reads, and all the stuff that really speeds things up, right?

My biggest fear with NFS is that there's nothing there to make sure each file is being read at sufficient speed. I can probably saturate the connection between the backend and the load-balancer server using NFS... but with 300+ files open I have no idea how NFS would handle it...
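For what it's worth, the Linux NFS client does expose some read-path tuning, though it's coarser than rclone's per-transfer controls. A hedged example of mount options — the server address, export path, and values are illustrative, not benchmarked:

```shell
# rsize/wsize raise the per-request transfer size (loosely analogous to a
# read chunk); nconnect (Linux kernel >= 5.3) opens multiple TCP
# connections to the server, which helps when many files are read in
# parallel instead of funneling everything through one connection.
mount -t nfs -o vers=4.2,rsize=1048576,wsize=1048576,nconnect=8 \
  10.0.0.1:/export/media /mnt/media
```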

If you were to use rclone with local storage and serve it via HTTP, what flags would you change, assuming API limits were no longer a concern?
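One hedged starting point for that question — the values below are illustrative assumptions, not recommendations from anyone in this thread:

```shell
# With a local backend there are no per-second API quotas, so flags that
# mostly exist to pace cloud APIs can be relaxed: start chunked reads
# large, let them grow unbounded, give each open file more readahead,
# and cache directory listings longer since re-listing local disk is cheap.
rclone serve http /mnt/raid/media --addr :8080 --read-only \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 64M \
  --dir-cache-time 12h
```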

My wife started complaining about the noise and power bill of all the servers at home which started me off on the rclone quest many years ago!


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.