Rclone SMB to Box

What is the problem you are having with rclone?

I have transferred a bit over a terabyte with rclone to Box, and while the speeds were not amazing, they were decent. In the last 2 days the speeds have dropped to the point that it is unusable. The speeds start around 1 MB/s and drop to almost 0 within the next few minutes. I also tried a sync from a local directory as a test to rule out the SMB source; I got the same transfer speeds. The connection is 100 Mb/s up and down.

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1

  • os/version: Microsoft Windows 10 Pro 21H1 (64 bit)
  • os/kernel: 10.0.19043.1766 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.19.3
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

The source storage is a Buffalo Terastation that I access via SMB.
The destination is Box.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -P Tera:hr Box:hr --transfers 32 --checkers 8

rclone sync -P Tera:hr Box:hr

The rclone config contents with secrets removed.

[Tera]
type = smb
host = x.x.x.x
user = admin
pass = **************

[Box]
type = box
token = {"access_token":"*******************","token_type":"bearer","refresh_token":"**************************","expiry":"2022-12-15T09:46:54.206422-08:00"}

A log from the command with the -vv flag

Transferred:        1.448 MiB / 1.076 GiB, 0%, 0 B/s, ETA -234y37w5d12h10m15s128ms
Checks:             39520 / 39520, 100%
Transferred:           88 / 10100, 1%
Elapsed time:     25m12.3s

Transferred:       58.754 MiB / 3.913 GiB, 1%, 18.306 KiB/s, ETA 2d13h20m38s
Transferred:          144 / 10156, 1%
Elapsed time:      3m40.8s

Hi Simplegreen21,

Perfect troubleshooting! Now it would help a lot to see a full debug log from something like this:

rclone sync --ignore-times C:/User/you/folder/with/4/large/files Box:testfolder --log-file=debuglog.txt  -P -vv

Normally I would be happy to include a log file, but the log contains the file names for all the transfers and given that this is a client of mine I do not want to upload that. Is there something specific from the log file that you are looking for?

Thank you!

I understand, but you can select any 4 large files you like, or use rclone to make them. I just need the log from the proposed command copying these.

Feel free to redact sensitive info like user name/path in log file - just mark so we can see you did it.

Interestingly, when I ran a sync with three 200 MB files they all transferred at around 10 MB/s. The source has thousands of smaller files. Even when I try to increase the transfers it hangs up.

2022/12/15 13:28:26 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "sync" "--ignore-times" "C:\\test\\testupload" "Box:test" "--log-file=debuglog.txt" "-P" "-vv"]
2022/12/15 13:28:26 DEBUG : Creating backend with remote "C:\\test\\testupload"
2022/12/15 13:28:26 DEBUG : Using config file from "C:\\Users\\********\\AppData\\Roaming\\rclone\\rclone.conf"
2022/12/15 13:28:26 DEBUG : fs cache: renaming cache item "C:\\test\\testupload" to be canonical "//?/C:/test/testupload"
2022/12/15 13:28:26 DEBUG : Creating backend with remote "Box:test"
2022/12/15 13:28:28 DEBUG : test2: Multipart upload session started for 13 parts of size 16Mi
2022/12/15 13:28:28 DEBUG : test2: Uploading part 1/13 offset 0/200Mi part size 16Mi
2022/12/15 13:28:28 DEBUG : test2: Uploading part 2/13 offset 16Mi/200Mi part size 16Mi
2022/12/15 13:28:28 DEBUG : test2: Uploading part 3/13 offset 32Mi/200Mi part size 16Mi
2022/12/15 13:28:28 DEBUG : test3: Multipart upload session started for 13 parts of size 16Mi
2022/12/15 13:28:28 DEBUG : test2: Uploading part 4/13 offset 48Mi/200Mi part size 16Mi
2022/12/15 13:28:28 DEBUG : test1: Multipart upload session started for 13 parts of size 16Mi
2022/12/15 13:28:34 DEBUG : test3: Uploading part 1/13 offset 0/200Mi part size 16Mi
2022/12/15 13:28:34 DEBUG : test2: Uploading part 5/13 offset 64Mi/200Mi part size 16Mi
2022/12/15 13:28:34 DEBUG : test1: Uploading part 1/13 offset 0/200Mi part size 16Mi
2022/12/15 13:28:35 DEBUG : test3: Uploading part 2/13 offset 16Mi/200Mi part size 16Mi
2022/12/15 13:28:36 DEBUG : box root 'test': Waiting for checks to finish
2022/12/15 13:28:36 DEBUG : box root 'test': Waiting for transfers to finish
2022/12/15 13:28:39 DEBUG : test2: Uploading part 6/13 offset 80Mi/200Mi part size 16Mi
2022/12/15 13:28:41 DEBUG : test1: Uploading part 2/13 offset 16Mi/200Mi part size 16Mi
2022/12/15 13:28:41 DEBUG : test3: Uploading part 3/13 offset 32Mi/200Mi part size 16Mi
2022/12/15 13:28:41 DEBUG : test2: Uploading part 7/13 offset 96Mi/200Mi part size 16Mi
2022/12/15 13:28:43 DEBUG : test1: Uploading part 3/13 offset 32Mi/200Mi part size 16Mi
2022/12/15 13:28:46 DEBUG : test3: Uploading part 4/13 offset 48Mi/200Mi part size 16Mi
2022/12/15 13:28:47 DEBUG : test2: Uploading part 8/13 offset 112Mi/200Mi part size 16Mi
2022/12/15 13:28:48 DEBUG : test1: Uploading part 4/13 offset 48Mi/200Mi part size 16Mi
2022/12/15 13:28:48 DEBUG : test3: Uploading part 5/13 offset 64Mi/200Mi part size 16Mi
2022/12/15 13:28:51 DEBUG : test2: Uploading part 9/13 offset 128Mi/200Mi part size 16Mi
2022/12/15 13:28:53 DEBUG : test1: Uploading part 5/13 offset 64Mi/200Mi part size 16Mi
2022/12/15 13:28:54 DEBUG : test3: Uploading part 6/13 offset 80Mi/200Mi part size 16Mi
2022/12/15 13:28:54 DEBUG : test2: Uploading part 10/13 offset 144Mi/200Mi part size 16Mi
2022/12/15 13:28:56 DEBUG : test1: Uploading part 6/13 offset 80Mi/200Mi part size 16Mi
2022/12/15 13:28:58 DEBUG : test3: Uploading part 7/13 offset 96Mi/200Mi part size 16Mi
2022/12/15 13:29:00 DEBUG : test2: Uploading part 11/13 offset 160Mi/200Mi part size 16Mi
2022/12/15 13:29:01 DEBUG : test1: Uploading part 7/13 offset 96Mi/200Mi part size 16Mi
2022/12/15 13:29:01 DEBUG : test3: Uploading part 8/13 offset 112Mi/200Mi part size 16Mi
2022/12/15 13:29:03 DEBUG : test2: Uploading part 12/13 offset 176Mi/200Mi part size 16Mi
2022/12/15 13:29:06 DEBUG : test1: Uploading part 8/13 offset 112Mi/200Mi part size 16Mi
2022/12/15 13:29:07 DEBUG : test3: Uploading part 9/13 offset 128Mi/200Mi part size 16Mi
2022/12/15 13:29:08 DEBUG : test2: Uploading part 13/13 offset 192Mi/200Mi part size 16Mi
2022/12/15 13:29:08 DEBUG : test1: Uploading part 9/13 offset 128Mi/200Mi part size 16Mi
2022/12/15 13:29:11 DEBUG : test3: Uploading part 10/13 offset 144Mi/200Mi part size 16Mi
2022/12/15 13:29:11 DEBUG : test1: Uploading part 10/13 offset 144Mi/200Mi part size 16Mi
2022/12/15 13:29:12 DEBUG : test3: Uploading part 11/13 offset 160Mi/200Mi part size 16Mi
2022/12/15 13:29:12 DEBUG : test2: sha1 = 49c9c898c203286c67ce2ef80f8bbb88f222fcfd OK
2022/12/15 13:29:12 INFO  : test2: Copied (new)
2022/12/15 13:29:16 DEBUG : test1: Uploading part 11/13 offset 160Mi/200Mi part size 16Mi
2022/12/15 13:29:17 DEBUG : test3: Uploading part 12/13 offset 176Mi/200Mi part size 16Mi
2022/12/15 13:29:17 DEBUG : test1: Uploading part 12/13 offset 176Mi/200Mi part size 16Mi
2022/12/15 13:29:18 DEBUG : test3: Uploading part 13/13 offset 192Mi/200Mi part size 16Mi
2022/12/15 13:29:19 DEBUG : test1: Uploading part 13/13 offset 192Mi/200Mi part size 16Mi
2022/12/15 13:29:24 DEBUG : test3: commit multipart upload failed 1/100 - trying again in 4 seconds (not ready yet)
2022/12/15 13:29:25 DEBUG : test1: commit multipart upload failed 1/100 - trying again in 4 seconds (not ready yet)
2022/12/15 13:29:29 DEBUG : test3: sha1 = 49c9c898c203286c67ce2ef80f8bbb88f222fcfd OK
2022/12/15 13:29:29 INFO  : test3: Copied (new)
2022/12/15 13:29:30 DEBUG : test1: sha1 = 49c9c898c203286c67ce2ef80f8bbb88f222fcfd OK
2022/12/15 13:29:30 INFO  : test1: Copied (new)

This indicates you have hit the Box rate limit on creation of new files. Box has a rate limit of 240 new files per minute per user, that is 4 files per second.

Here is the list of Box rate limits:
https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/
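As a side note (my suggestion, not something tested against Box here), rclone's generic --tpslimit flag can cap the rate of HTTP transactions and thereby keep the job under the new-file limit:

```shell
# --tpslimit caps HTTP transactions per second across the whole job
# (all API calls, not just file creations), so a low value keeps
# rclone under Box's 4-new-files-per-second limit at the cost of
# slower checks. The remote names are the ones from this thread.
rclone sync Tera:hr Box:hr --tpslimit 4 -P -v
```

This trades peak throughput for steadier progress, since rclone never bursts past the limit and then stalls on throttled retries.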

I suggest you take a look at all my responses in this thread where I describe the same phenomenon in Google Drive. Just note that the Google Drive specific tuning parameters (--drive-*) will not help on Box.

The best overall advice is to have lots of patience and avoid restarting the job too often.

You may see better results by

  • using the default --transfers of 4 to avoid activating additional Box throttling
  • increasing --checkers to 16 to speed up the checks when restarting
  • mixing the transfer of small and large files by adding --order-by="size,mixed,75"
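Putting those suggestions together, a combined command could look something like this (a sketch using the remote names from this thread; --transfers is left at its default of 4):

```shell
# --checkers 16 speeds up the checking phase when restarting;
# --order-by "size,mixed,75" gives 75% of transfer slots to the
# largest remaining files and 25% to the smallest, so small files
# don't monopolize the per-minute file-creation quota.
rclone sync Tera:hr Box:hr --checkers 16 --order-by "size,mixed,75" --log-file=infolog.txt -P -v
```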

Thank you so much! I was looking for limits, but did not come across the files-per-minute limit. I tested the switches you mentioned, but was still getting questionable speeds. So far the command with the best speed is below. I'm not sure why that one works well; maybe the logging is slowing it down a little so it doesn't hit the limit as often. Thank you again.

rclone sync --ignore-times Tera:purchasing Box:purchasing --log-file=debuglog.txt  -P -vv

That is a great page. Maybe we should link it from the box docs?

Perhaps you are seeing better upload speeds because --ignore-times forces an unconditional transfer even if the file is already present at the target. It isn't a tuning parameter, I just added it to make it possible to rerun the test command with different parameters if needed.

So it may make the upload speed look great, but increase your overall job time if some of the files already exist at the target, e.g. after a restart. I therefore suggest this more robust and less verbose command for your next transfer:

rclone sync Tera:xxxx Box:xxxx --log-file=infolog.txt -P -v

Good idea, PR ready for review:


Merged just in time for 1.61 - thank you 🙂

