rclone is not cpu intensive, no reason for it to saturate that.
about bandwidth, what is the output of a speedtest?
try increasing
--transfers
--checkers
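for example, a command bumping both above the defaults might look like this (remote and path names here are placeholders, not from the original post):

```shell
# hypothetical remote/path names; defaults are --transfers 4 --checkers 8
rclone copy /mnt/data box:backup \
  --transfers 16 \
  --checkers 32 \
  --progress
```

worth raising these gradually, since some providers throttle when too many parallel requests come in.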
i see that you are not using a client id with box.
not sure about box specifically, but without your own client id, all rclone users on planet earth share the same default client id, and its rate limits.
for example, that applies to gdrive
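as a sketch, a remote with its own client id in rclone.conf might look like this (the id/secret values are placeholders you get from the provider's developer console):

```shell
# rclone.conf fragment -- remote name and credentials are placeholders
[box]
type = box
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
```

after adding the id and secret, re-run `rclone config reconnect box:` to get a token tied to your own client id.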
and some providers put hard limits on upload bandwidth.
have you tested with another provider?
for example, with wasabi, an s3 clone, i can easily saturate my 100MB/s internet connection.
yet with onedrive, speeds vary depending on time of day and microsoft's overall load:
during peak hours, i average about 15MB/s.
off-peak, 38MB/s.
With file sizes like yours (10GB per, it seems) what would increase speeds the most is adding --drive-chunk-size 512M or greater. You have plenty of RAM, so go nuts! I use the Rclone defaults for both transfers and checkers (4/8), and I always max out my gig connection. The fewer large files you upload, the higher you want --drive-chunk-size to be (up to what your memory can handle).
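As a sketch, a transfer tuned this way might look like the following (the source path and remote name are placeholders, not from the original post):

```shell
# hypothetical paths; keeps the default 4 transfers / 8 checkers
# RAM use is roughly --drive-chunk-size x --transfers, so 512M x 4 = ~2GB here
rclone copy /mnt/data gdrive:backup \
  --drive-chunk-size 512M \
  --transfers 4 \
  --checkers 8 \
  --progress
```

Note that --drive-chunk-size must be a power of 2 and at least 256k, and it only applies to the Google Drive backend.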
No raid. The machine is a temporary setup just to do some transfers from one cloud provider to another.
Yeah, I didn't increase --checkers beyond 8 in my command above because I saw I was throttled on transfers, not checkers. And that turned out to be true: the transfer completed in those couple of hours, and the checkers were already done in the first ~30 minutes.
That's the flag that worked best! I got peaks and valleys in my traffic, as the screenshot above shows, but I averaged 1.2gbps. I don't think that was saturation, but it was good enough for me to let it run and finish after banging my head against the overall backup project for ~2 weeks.