#### What is the problem you are having with rclone?
When running rclone, the entire network slows to a crawl, including the device rclone is running on. The WiFi network and its connected clients are the most affected.
No other applications are performing uploads at the same time. The total available upload speed is 400 Mbps.
The current workaround has been to limit rclone's bandwidth consumption to roughly 6–7 MB/s, which is equivalent to about 50 Mbps.
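For reference, this kind of throttling is done with rclone's `--bwlimit` flag. A minimal sketch (the source path and remote name below are placeholders, not taken from the original report):

```shell
# Cap rclone at ~6 MByte/s (~50 Mbit/s); --bwlimit takes bytes/s by default
rclone copy /data OneDrive:backup --bwlimit 6M -P

# --bwlimit also accepts a daily timetable, e.g. throttle during the day
# and run unthrottled ("off") overnight:
rclone copy /data OneDrive:backup --bwlimit "08:00,6M 23:00,off" -P
```

The timetable form avoids having to stop and restart the transfer manually at night.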
#### What is your rclone version (output from rclone version)
rclone v1.53.1
#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
Microsoft Windows 10 Professional, version 1909, Build 18363.1110
#### Which cloud storage system are you using? (eg Google Drive)
Microsoft OneDrive
#### The command you were trying to run (eg rclone copy /tmp remote:tmp)
@Animosity022, yes, rclone's bandwidth is currently limited to 6 MB/s. However, at 10 MB/s, for example, it consumes the entire connection, even though the upload capacity is 400 Mbps.
@Animosity022, sure; if you can let me know which screenshots you need, that would be helpful. What log information is needed, given that the logs are limited to events such as multipart upload start, SHA validation, etc.?
There is no network-related information in them at this point, unless I am missing a command flag.
@Animosity022, with or without the `--bwlimit` flag? If without, it will have to be tested late at night, as the network is actively used during the daytime.
Thanks @Animosity022. When you say that `--bwlimit` only applies to actual file copy operations and such, do you mean copy, sync, or move (bidirectional) and not checksum?
If the checksums are calculated locally (source and destination), how are they compared? Is no data transferred, e.g. for listing all directories and files?
```
felix@guardian:~$ rclone copy /etc/hosts GD: -vv --checksum
2020/10/13 15:34:12 DEBUG : rclone: Version "v1.52.3" starting with parameters ["rclone" "copy" "/etc/hosts" "GD:" "-vv" "--checksum"]
2020/10/13 15:34:12 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2020/10/13 15:34:12 DEBUG : fs cache: adding new entry for parent of "/etc/hosts", "/etc"
2020/10/13 15:34:12 DEBUG : hosts: Need to transfer - File not found at Destination
2020/10/13 15:34:13 DEBUG : hosts: MD5 = 3e3007aa5490459a1658d5d31be3a594 OK
2020/10/13 15:34:13 INFO : hosts: Copied (new)
2020/10/13 15:34:13 INFO :
Transferred:       455 / 455 Bytes, 100%, 338 Bytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         1.3s
2020/10/13 15:34:13 DEBUG : 4 go routines active
```
What happens is this: the file's checksum is calculated locally, the file is copied, and the cloud provider calculates the checksum on its copy. Rclone then has both values, compares them, and they match. Checksums are not stored locally, so once the command is done, a repeat run recalculates the local sum and fetches the remote sum from the provider, where it is part of the object's metadata.
```
felix@guardian:~$ rclone copy /etc/hosts GD: -vv --checksum
2020/10/13 15:35:52 DEBUG : rclone: Version "v1.52.3" starting with parameters ["rclone" "copy" "/etc/hosts" "GD:" "-vv" "--checksum"]
2020/10/13 15:35:52 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2020/10/13 15:35:52 DEBUG : fs cache: adding new entry for parent of "/etc/hosts", "/etc"
2020/10/13 15:35:52 DEBUG : hosts: MD5 = 3e3007aa5490459a1658d5d31be3a594 OK
2020/10/13 15:35:52 DEBUG : hosts: Size and MD5 of src and dst objects identical
2020/10/13 15:35:52 DEBUG : hosts: Unchanged skipping
2020/10/13 15:35:52 INFO :
Transferred:         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:              1 / 1, 100%
Elapsed time:         0.0s
2020/10/13 15:35:52 DEBUG : 4 go routines active
```
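The two sides of that comparison can also be inspected by hand. A sketch assuming the same `GD:` remote and `/etc/hosts` file from the transcript above:

```shell
# Local side: compute the MD5 ourselves
md5sum /etc/hosts

# Remote side: rclone asks the provider for the hash it already stores
# as object metadata, so only a small listing is transferred, no file data
rclone md5sum GD:hosts
```

If the two hashes match, a `--checksum` copy reports "Unchanged skipping", as in the second run above.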