I'm backing up my system to a mounted drive: rclone mounts my Storj (S3-compatible) bucket to my desktop, and my backup program (EaseUS Todo Backup) writes to that folder so the backup goes into the cloud.
Writing to this folder works fine, and the backup appears in my cloud bucket. However, I am unable to read and open the large file without caching the entire thing, which is not viable for my operation and would defeat the purpose. I need to be able to both write to the cloud and recover the backup from the cloud through EaseUS. Currently, EaseUS gets stuck trying to open the backup for recovery, which I believe is because it is trying to download the whole file into the cache. EaseUS supports other cloud providers like Dropbox, Google Drive, and its own cloud server for backup and recovery, but the rclone mount is having trouble reading such large files.
The command I am currently running is as follows:
rclone mount remote1: "C:\Users...\Desktop\mount" --vfs-cache-mode off
So my question is: for large files like computer backups that need to be accessed (files can range from 15GB to 1TB+), is there an optimal way to configure rclone to read them? I have seen similar forum posts suggesting combinations of flags like --buffer-size, --transfers, and --vfs-read-ahead, but I am still new to rclone and need some more specific pointers so that I can read the backup file from the cloud and recover the computer with it, without downloading the entire file to the local disk cache.
Thanks, and let me know if there are any more specific details I need to provide.
For a start, change the caching mode to --vfs-cache-mode full; it will cache files for both reads and writes.
Reads are done in chunks as needed; you can tweak the chunk size with:
--vfs-read-chunk-size 128M (128M is the default)
and limit the disk space used by the cache with:
--vfs-cache-max-size 40G (the default is unlimited)
I would also change the cache max age (--vfs-cache-max-age, default 1h) to something much higher,
assuming that you are only using this one machine for backups to the same files in the cloud.
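Putting the suggestions above together, a mount command might look like this (a sketch: the remote name and mount path are taken from the original post, and the 336h max age is just an example value of roughly two weeks, not a recommendation from this thread):

```shell
rclone mount remote1: "C:\Users...\Desktop\mount" ^
  --vfs-cache-mode full ^
  --vfs-read-chunk-size 128M ^
  --vfs-cache-max-size 40G ^
  --vfs-cache-max-age 336h
```

The `^` characters are Windows cmd line continuations; on a single line, or in PowerShell, drop them.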
If you are just reading from the file then
--vfs-cache-mode off should be fine. If the app repeatedly reads part of the file then using
--vfs-cache-mode full will improve performance.
However, if the app writes to the file then it will need to be entirely cached on disk.
It depends exactly what it is doing here.
If it is only reading from the file then what it is probably doing is reading parts of the file throughout the file. These seeks can take a long time on cloud storage. A seek on SSD typically takes 0.1ms or less, on HDD about 10ms, and on cloud storage more like 500ms to 1s. For example, an app that does 1,000 seeks spends about 0.1s waiting on an SSD and 10s on an HDD, but roughly 8 to 17 minutes on cloud storage.
However, it might be reading the whole file to check a checksum (for example), in which case there is no way around the problem.
The rclone log with
-vv will tell you what is going on.
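To capture that log, the mount can be run with debug logging enabled; for example (the log file name here is just an illustration):

```shell
rclone mount remote1: "C:\Users...\Desktop\mount" --vfs-cache-mode full -vv --log-file rclone.log
```

The resulting log shows each read request the application makes against the mount, which reveals whether it is seeking around the file or reading it sequentially from start to finish.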
I do something similar with Veeam Backup & Replication and instant restore.
With --vfs-cache-mode=full, Veeam can read the backup file no problem, but it is super slow, not practical.
And a lot depends on your internet connection and round-trip latency.
You can verify that rclone is not downloading the whole file:
For file.ext, the total file size is 700MiB, but rclone has only downloaded 17MiB.
Windows Explorer, looking at the file on the mount point, shows the size on disk as 700MiB, which is not correct.
Here is the same info, but taken from the rclone VFS file cache; notice the Size on disk value there.
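One way to check this from the command line is to ask Windows about the sparse cache file directly (a sketch, with assumptions: the path below uses rclone's default Windows cache location under %LOCALAPPDATA%\rclone, and file.ext stands in for your real backup file name):

```shell
:: Is the cached copy a sparse file?
fsutil sparse queryflag "%LOCALAPPDATA%\rclone\vfs\remote1\file.ext"
:: Which byte ranges are actually allocated on disk?
fsutil sparse queryrange "%LOCALAPPDATA%\rclone\vfs\remote1\file.ext"
```

If only a small part of the file has been read, queryrange reports only those allocated ranges, even though the file's logical size matches the full size in the cloud.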
Thank you, everyone. I'm going to implement some of the suggestions from this thread and report back.