Box not saturating bandwidth

What is the problem you are having with rclone?

Upload to Box.com (via a crypt) not saturating local BW or CPU. See attached screenshot of the network traffic graph:

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0
- os/version: Microsoft Windows Server 2019 Datacenter 1809 (64 bit)
- os/kernel: 10.0.17763.2366 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.2
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Box via crypt

The command you were trying to run (eg rclone copy /tmp remote:tmp)

> .\rclone -v copy 'D:\Downloads\2022 01 05 Photos\' 'Box_crypt:2022 01 05 Photos\'

The rclone config contents with secrets removed.

[Box Business]
type = box
token = {"access_token":"x","token_type":"bearer","refresh_token":"x","expiry":"2022-01-16T00:49:48.1088065Z"}

[Box_crypt]
type = crypt
remote = Box Business:Photos
filename_encryption = off
directory_name_encryption = false
password = 
password2 = 

A log from the command with the -vv flag

hello and welcome to the forum.

rclone is not cpu intensive, so there is no reason for it to max out the cpu.

about bandwidth, what is the output of a speedtest?
try increasing

  • --transfers
  • --checkers
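for example, a sketch based on your command above (16/16 is just a starting point to tune, not a recommendation):

```
.\rclone -v copy 'D:\Downloads\2022 01 05 Photos\' 'Box_crypt:2022 01 05 Photos\' --transfers 16 --checkers 16
```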

i see that you are not using your own client id with box.
not sure about box specifically, but without your own client id, every rclone user in the world shares the same default client id, and so shares its rate limits.
that is how it works with gdrive, for example.
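if you do create your own box app, a sketch of the config section would look like this (placeholder values; afaik you would also need to re-authorize the remote, e.g. with rclone config reconnect, since the old token belongs to the old client):

```
[Box Business]
type = box
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
```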

and some providers put hard limits on upload bandwidth.
have you tested with another provider?
for example, with wasabi, an s3 clone, i can easily saturate my 100MB/s internet connection.

yet with onedrive, speeds vary depending on the time of day and microsoft's overall load:
during peak hours, i average about 15MB/s;
off-peak, 38MB/s.


Thanks for the quick reply, asdffdsa!

  1. Measurement Lab's Speed Test yields: 1830Mbps down, 2434Mbps up.
  2. With GDrive and Dropbox I'm able to saturate up to 750Mbps for days on end.
  3. I can try a client ID for Box. Should I add it to the config directly, or will it trigger a reauth with Box?
  4. I played with more transfers, but it doesn't help with the janky up-down behavior of the graph you see in the screenshot...

and you do not get that with gdrive and dropbox?

I get a straight line around 1Gbps with GDrive and 750Mbps with Dropbox.

I increased --transfers with --transfers 32 --checkers 8 and did get a sweet bump! But still the sine wave, albeit with higher peaks and valleys:

I think that should be good enough as I'll probably be done in a few hours (instead of days) now.

With file sizes like yours (10GB per, it seems) what would increase speeds the most is adding --drive-chunk-size 512M or greater. You have plenty of RAM, so go nuts! I use the Rclone defaults for both transfers and checkers (4/8), and I always max out my gig connection. The fewer large files you upload, the higher you want --drive-chunk-size to be (up to what your memory can handle).
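For example, a sketch for a Google Drive remote (the remote name gdrive: is a placeholder, and 512M assumes you have the RAM to spare, since memory use is roughly chunk size times transfers):

```
.\rclone copy 'D:\Downloads\2022 01 05 Photos\' 'gdrive:2022 01 05 Photos' --drive-chunk-size 512M
```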


Thanks for that! I looked into it a bit, but it doesn't seem usable with Box as the backend. Or is it a universal flag for all providers?

Ah, yes, it doesn't seem to apply to Box. Didn't think about that 🙂


good,
many rcloners increase --transfers without realizing the need to increase --checkers

just curious, on that win server, using hard raid or soft raid?

No raid. The machine is a temporary setup just to do some transfers from one cloud provider to another.

Yeah, I didn't increase --checkers beyond 8 in my command above because I could see I was transfer-throttled, not checker-throttled. And that proved true: the transfer completed in those couple of hours, while the checkers were already done in the first ~30 mins.

so in the end:

  • which settings worked best?
  • what was the average upload speed with those settings?

--transfers 32 --checkers 8 is the combination that worked best! I still got peaks and valleys in my traffic, as the screenshot above shows, but I averaged 1.2Gbps. I don't think that was saturation, but it was good enough for me to let it run and finish, after banging my head against the overall backup project for ~2 weeks 🙂

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.