Bandwidth limit for "check" command?

What is the problem you are having with rclone?

Trying to figure out if it's possible to set a bandwidth limit when using the "check" command. I am using "--bwlimit 112M", which works fine when transferring files with "sync"; however, it does not appear to have any effect when running the "check" command.

What is your rclone version (output from rclone version)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10 64 Bit

Which cloud storage system are you using? (eg Google Drive)

Local storage, LAN.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone.exe check "\\XXX.XXX.X.XX\Test" "X:\Test" --progress --transfers=16 --checkers=32 --log-file=log.txt -v --bwlimit 112M

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

112M is roughly full gigabit. Is that what you intend to use?
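
For reference, if I'm reading the docs right, the M suffix on --bwlimit means MiB/s, so the arithmetic is:

112 MiB/s = 112 × 1048576 B/s ≈ 117.4 MB/s ≈ 939 Mbit/s, i.e. about 94% of a 1000 Mbit/s link.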

For now, yes. I am running this on an ESXi server (homelab) and I don't want to fully saturate the network card and starve the other VMs running on the host. I am going to be making some networking changes soon that will allow for higher bandwidth and for rate limiting on the switch side. For now, however, a software solution for limiting the amount of bandwidth that the "check" command uses would be great.

I'd assume you are seeing this issue:

If you are dealing with just local storage and no cloud remotes, there are probably better tools than rclone for the job.

I use https://syncthing.net/ for my own local syncing, prior to leveraging rclone for cloud storage. I'm sure some Windows folks can suggest others.

Maybe. rclone works fine for local storage, at least for my needs, and allows for one-stop shopping since I can run my local and cloud backups with the same software. Plus, really, what's the difference between the "cloud" and backing up files to a local server over the network? Not much, really... I will implement a hardware limiter on the switch in the coming days and should be good. Thanks though!

for local on windows, fastcopy:

  • 15 years old
  • open source
  • checksum verification
  • no install needed, portable app
  • easy to script

You can use other solutions that work more at the block level to limit changes/bandwidth/etc. and are much better tuned for local storage, as rclone is really made for cloud-based solutions.

My general thought is always to use what works best for you as I can't make that choice for you. If what you are doing works best for you, keep on using it!

I just try to offer different options or solutions based on my experience and what works better for me in my use cases.

Might want to look at restic for backups. It works well and can use rclone as its backend.
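
As a rough sketch of what that looks like, assuming you already have an rclone remote (here named b2) configured; the bucket and paths are placeholders:

restic -r rclone:b2:my-bucket/backups init
restic -r rclone:b2:my-bucket/backups backup "X:\Test"
restic -r rclone:b2:my-bucket/backups check

restic check verifies the integrity of the repository itself, so you get the data validation pass as part of the same tool.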

Well how about this, I need the following...

  • Something that can access files shared via FreeNAS, Windows, and Raspberry Pi.
  • Need to then be able to copy the changed data from the systems above to another device on the network.
  • Need to be able to run data integrity checks on the copied data to confirm that the backed-up data is valid and has not corrupted over time.
  • Easy to script out with low overall maintenance
  • Would prefer that the files be stored in their original format (no containers)
  • Would also prefer that the same software be able to upload to offsite cloud locations such as B2

From my research, rclone ticks all of the boxes, but if you guys know of better solutions I am all ears.
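
For example, the kind of script I am picturing (the share paths and remote names here are just placeholders):

rclone sync "\\freenas\share" "X:\Backups\freenas" --backup-dir "X:\Backups\freenas-old" -v --log-file=sync.txt
rclone check "\\freenas\share" "X:\Backups\freenas" --log-file=check.txt
rclone sync "X:\Backups\freenas" b2:my-bucket/freenas -v --log-file=offsite.txt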

Thanks again!

If it were me, I'd still use restic with rclone, along with VSS on Windows (although I'm not a Windows guy).

If you want to use rclone, one simple way to do this would be to run rclone serve (webdav/http/sftp/etc.) on each one of those devices. Then you can use rclone from the central server to copy the data off to your destination using copy/sync (along with a --backup-dir).

restic will just manage those backups far better though. You'd still run a serve, I believe, just rclone serve restic instead.

1 Like

Hmm, interesting. What is rclone serve? Can you explain that a bit more for me? Thanks.

i do something similar.
i use veeam backup software, community edition, to back up virtual servers and computers.
then i use rclone to copy those backups to the cloud using --backup-dir.

for backups of files, including veeam backup files, i have a python script that runs fastcopy and 7zip, and uses rclone to copy files to the cloud.
as per my rclone wiki, i enable VSS

as for the central server idea, on my backup server, i run the python script in a timed loop.
there is a shared folder that it looks at.
if a local computer, using that same python script, wants something done, it will copy a file to that shared folder.
if the python script finds a file in that shared folder, it looks up that filename in a .ini file and runs the matching script. this is a better solution than rclone serve, as i can run any script.
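
the loop itself is roughly like this. just a sketch, the paths, the filename of the .ini and its layout are all made up:

# watcher.py - polls the shared drop folder and runs the command mapped in jobs.ini
import configparser
import subprocess
import time
from pathlib import Path

SHARED = Path(r"\\server\jobs")    # the shared drop folder clients copy files into

jobs = configparser.ConfigParser()
jobs.read("jobs.ini")              # [jobs] section maps trigger filename -> command

while True:
    for trigger in SHARED.iterdir():
        command = jobs["jobs"].get(trigger.name.lower())  # configparser keys are lowercase
        if command:                              # known trigger file, run its script
            subprocess.run(command, shell=True)
            trigger.unlink()                     # remove the trigger so it only runs once
    time.sleep(60)                               # check once a minute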

You can run rclone serve http, for example (there are more flags for authentication/etc.), and rclone will then serve the specified local path over HTTP on that port. Like this:

rclone serve http /some/data

Then on the central server side, you can create a remote to connect and sync from that http remote on the remote server.

rclone sync that-http-server: gdrive:
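
The remote itself can be created on the central server with rclone config, for example (the name and address here are placeholders, and this assumes the serve side was started with --addr :8080 so it listens on the LAN rather than just localhost):

rclone config create that-http-server http url http://192.168.1.10:8080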

https://rclone.org/commands/rclone_serve/

I am assuming that the idea here is that rclone would run on, say, Server A and Server B, let's call them file servers, while rclone would also run on Server C, the backup location. The rclone instances running on Server A and Server B would monitor, process, and cache the files that need to be backed up to Server C. With the main benefit being that this would save on bandwidth, for example? As Server C would not have to scan every file on Server A and Server B to perform a backup?

It would allow you to orchestrate from the central server. I use this approach on Android boxes, for example, so I can run things from the central server to access each of the 'clients'. I simply run rclone serve sftp on each, and then from my server I can either read and/or write to those clients. I do this because it's really a pain to log on to those Android boxes and access them in any way remotely. This makes it easy for me. I can add excludes/includes/etc. when syncing from the central server rather than visiting each client.
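
As a sketch, with the user, password, port, and remote names all made up, on each client:

rclone serve sftp /data --addr :2022 --user backup --pass secret

and then on the central server (depending on the rclone version, the password may need to be run through rclone obscure first):

rclone config create client1 sftp host 192.168.1.10 port 2022 user backup pass secret
rclone sync client1: gdrive:client1-backup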

In your example though, if you don't need the central server and you simply want to get backups to the cloud, then you can just run them directly on the client to the cloud or to another server. You can use @asdffdsa idea as well.

rclone doesn't do good 'backups' though (differential-type things), so that's why I tend to recommend restic if you want true backups, while still using rclone to connect to anything through the restic integration.

I see what you are saying. rclone serve just provides another method of accessing the files that are stored on a remote location, in my example a server. It does not really do anything to increase or decrease performance per se. In other words, I could configure all my endpoints to have rclone serve FTP running, and from the central server connect to them and copy the data.

the problem with ftp is there is no support for checksums.
not a good choice for backups.
and even if you wanted to back up to an ftp server, there are many open-source projects dedicated just to ftp.
https://rclone.org/ftp/#checksums

I'm not sure if any of the serves support checksums. It's a good point and wasn't needed for my use case.

I am not too worried about having true differential backups with versioning for files, given my use case. The biggest thing that I need is protection from general hardware failure, accidental deletion, and viruses, along with protection from catastrophic events such as house fire and water damage. Should be pretty easy for me to get that level of protection with rclone.

While the serves are neat, I don't think that I will need to use them. It would just add another layer and something else for me to have to fix when it stops working, lol. That being said, thanks for the information!
