I'm new to rclone and wondering if this is feasible: I have a local server with limited disk space (only about 1 TB) and a 500 Mbit connection to a remote server reachable over SSH/SFTP. The remote server has plenty of disk space (10 TB or more). I want to mount the remote server's storage on the local server over SSH/SFTP, then run various processes on the local server that write large amounts of data to the mounted remote. Is remote filesystem mounting with rclone efficient and reliable enough for this kind of workload? The speed at which the data is written to the remote filesystem doesn't really matter to me; reliability is more important.
Run the command 'rclone version' and share the full output of the command.
rclone v1.53.3-DEV
os/arch: linux/amd64
go version: go1.18
Which cloud storage system are you using? (eg Google Drive)
I use rclone mount to point to an SFTP server, a Hetzner storage box.
Though I don't know your exact use case, the odds are you'll need --vfs-cache-mode=full,
which downloads/caches some of the data from the SFTP server to your local machine,
so you might still hit the hard limit of free space on the local server.
The only way to know is for you to run some tests.
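A minimal sketch of such a test mount (the remote name `storagebox`, the paths, and the cache limits are all placeholders; the remote would first be set up with `rclone config`):

```shell
# Mount the SFTP remote with full VFS caching, but cap the local cache
# so it cannot exhaust the limited local disk (values are examples).
rclone mount storagebox:/data /mnt/storagebox \
    --vfs-cache-mode full \
    --vfs-cache-max-size 200G \
    --vfs-cache-max-age 1h \
    --daemon
```

The `--vfs-cache-max-size` and `--vfs-cache-max-age` flags bound how much data the cache holds locally before rclone evicts files that have already been uploaded.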
OK. I will update my version of rclone and run some tests. I was told the only connection type I'm allowed to use is SSH to the remote server, so that is really just a project/security constraint. That said, I'm open to other options that are better for mounting over SFTP; I'm not really aware of the alternatives. Can you recommend any other ways to mount SFTP on Linux?
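For reference, the most common non-rclone option on Linux is sshfs, which mounts over the same SFTP channel via FUSE. A minimal sketch, assuming sshfs is installed and `user@remote-server` and the paths are placeholders:

```shell
# Mount the remote directory over SFTP using sshfs (FUSE), with
# reconnect options so transient SSH drops don't hang the mount.
sshfs user@remote-server:/data /mnt/remote \
    -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3

# Unmount when finished.
fusermount -u /mnt/remote
```

Unlike rclone's VFS cache, sshfs writes through to the remote without staging files on local disk, which may matter given the 1 TB local limit.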