Trying to figure out if it's possible to set a bandwidth limit when using the "check" command. I am using "--bwlimit 112M", which works fine when transferring files with "sync"; however, it does not appear to work when running the "check" command.
What is your rclone version (output from rclone version)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 10 64 Bit
Which cloud storage system are you using? (eg Google Drive)
Local storage, LAN.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
For now, yes. I am running this on an ESXi server (homelab) and I don't want to fully saturate the network card and starve the other VMs running on the host. I am going to be making some networking changes soon that will allow for higher bandwidth and for rate limiting on the switch side. For now, however, a software solution for limiting the amount of bandwidth the "check" command uses would be great.
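Roughly what I'm doing; the paths and remote name here are placeholders:

```shell
# Throttling works as expected on the transfer:
rclone sync /tank/data lan:backup/data --bwlimit 112M

# But the same flag does not appear to limit anything during verification:
rclone check /tank/data lan:backup/data --bwlimit 112M
```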
Maybe. rclone works fine for local storage, at least for my needs, and allows for one-stop shopping since I can run my local and cloud backups with the same software. Plus, really, what's the difference between the "cloud" and backing up files to a local server over the network? Not much... I will implement a hardware limiter on the switch in the coming days and should be good. Thanks though!
If it were me, I'd still use restic with rclone, along with VSS on Windows (although I'm not a Windows guy).
If you want to use rclone on its own, one simple way to do this would be to run rclone serve (webdav/http/sftp/etc.) on each of those devices. Then you can use rclone from the central server to copy the data off to your destination using copy/sync (along with a --backup-dir).
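As a rough sketch (hostnames, paths, and credentials are made up, and `clientA:` is assumed to be an sftp remote configured on the central server):

```shell
# On each file server: expose its data over SFTP
rclone serve sftp /data --addr :2022 --user backup --pass secret

# On the central server: pull it down, diverting replaced/deleted files
# into a dated backup directory instead of discarding them
rclone sync clientA:/ /backups/clientA \
    --backup-dir /backups/clientA-archive/$(date +%F)
```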
restic will just manage those backups far better, though. You'd run rclone serve restic instead, I believe.
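Something along these lines, if I remember the restic REST integration right (remote name, port, and paths are placeholders):

```shell
# Expose a remote as a restic REST server
rclone serve restic remote:backups --addr localhost:8080

# Initialise the repository and take a backup through it
restic -r rest:http://localhost:8080/ init
restic -r rest:http://localhost:8080/ backup /data
```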
I do something similar.
I use Veeam backup software, Community Edition, to back up virtual servers and computers.
Then I use rclone to copy those backups to the cloud using --backup-dir.
For backups of files, including Veeam backup files, I have a python script that runs FastCopy and 7-Zip, and uses rclone to copy files to the cloud.
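The rclone leg of that might look like this (remote name and paths are just examples):

```shell
# Copy the Veeam backup files to the cloud; anything they replace on the
# remote is moved into a dated archive folder rather than overwritten
rclone copy /backups/veeam remote:veeam \
    --backup-dir remote:veeam-archive/$(date +%Y%m%d)
```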
As per my rclone wiki, I enable VSS.
As for the central server idea: on my backup server, I run the python script in a timed loop.
There is a shared folder that it watches.
If a local computer, using that same python script, wants something done, it copies a file to that shared folder.
If the python script finds a file in that shared folder, it looks up that filename in a .ini file and runs a script. This is a better solution than rclone serve, as I can run any script.
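A sketch of that watch-folder dispatcher in shell (the original is a python script; the paths and the one-script-per-line .ini format here are assumptions):

```shell
#!/bin/sh
# Watch-folder dispatcher: a request file dropped into the shared folder is
# looked up in a .ini-style map and the mapped script is run.
WATCH=${WATCH:-/srv/backup-requests}   # shared folder the loop watches
JOBS=${JOBS:-/srv/jobs.ini}            # lines like: request-name=/path/to/script.sh

dispatch_once() {
    for f in "$WATCH"/*; do
        [ -e "$f" ] || continue
        name=$(basename "$f")
        # look up the script mapped to this request filename
        script=$(sed -n "s|^${name}=||p" "$JOBS")
        if [ -n "$script" ]; then
            sh "$script"
        fi
        rm -f "$f"                     # consume the request
    done
}

# on the backup server this would run in a timed loop, e.g.:
# while true; do dispatch_once; sleep 60; done
```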
I am assuming the idea here is that rclone would run on, say, Server A and Server B, let's call them file servers, while rclone would also run on Server C, the backup location. The rclone instances running on Server A and Server B would monitor, process, and cache the files that need to be backed up to Server C. The main benefit being that this would save on bandwidth, for example, since Server C would not have to scan every file on Server A and Server B to perform a backup?
It would allow you to orchestrate from the central server. I use this approach on Android boxes, for example, so I can run things from the central server to access each of the 'clients'. I simply run an rclone serve sftp on each, and then from my server I can read and/or write to those clients. I do this because it's really a pain to log on to those Android boxes and access them in any way remotely. This makes it easy for me. I can add excludes/includes/etc. when syncing from the central server rather than visiting each client.
In your example, though, if you don't need the central server and you simply want to get backups to the cloud, then you can just run them directly on the client to the cloud or to another server. You can use @asdffdsa's idea as well.
rclone doesn't do good 'backups', though (differential-type things), so that's why I tend to recommend restic if you want true backups while still using rclone to connect to anything through the restic integration.
I see what you are saying. rclone serve just provides another method of accessing the files stored on a remote location, in my example a server. It does not really do anything to increase or decrease performance per se. In other words, I could configure all my endpoints to run rclone serve FTP and, from the central server, connect to them and copy the data.
The problem with FTP is there is no support for checksums.
Not a good choice for backups.
And even if you wanted to back up to an FTP server, there are many open-source projects dedicated just to FTP. https://rclone.org/ftp/#checksums
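If you did end up verifying against a hash-less backend like FTP, you'd be limited to comparing sizes or downloading the data; something like the following, with a placeholder remote name:

```shell
# Compare sizes only, since the ftp backend exposes no hashes:
rclone check /data ftpserver:data --size-only

# Or download and compare actual file content:
rclone check /data ftpserver:data --download
```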
I am not too worried about having true differential backups with versioning, given my use case. The biggest things I need are protection from general hardware failure, accidental deletion, and viruses, along with protection from catastrophic events such as house fire and water damage. It should be pretty easy for me to get that level of protection with rclone.
While the serve commands are neat, I don't think I will need them. They would just add another layer and something else for me to fix when it stops working, lol. That being said, thanks for the information!