B2 Remote Large File >5GB Server-Side Copy Hangs

What is the problem you are having with rclone?

When attempting to run rclone sync from one B2 bucket to another, the sync hangs on a file larger than 5GB (which requires a multipart copy per the B2 docs).
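
For reference, rclone's B2 backend switches to a chunked server-side copy once a file exceeds the copy cutoff (--b2-copy-cutoff, which I believe defaults to 4 GiB; that matches the 4,294,967,296-byte chunk 1 in the log below). A single-file reproduction would look something like this, using the path from my log as an example:

rclone copy b2:bucket1/Projects/datafiles/photos.tar b2:bucket2/Projects/datafiles -vv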

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: darwin 12.2 (64 bit)
  • os/kernel: 21.3.0 (x86_64)
  • os/type: darwin
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --fast-list --copy-links --progress --transfers 32 sync b2:bucket1 b2:bucket2 --checkers 32 -vv

A log from the command with the -vv flag

2022-02-04 21:15:37 DEBUG : B2 bucket (redacted): Waiting for transfers to finish
2022-02-04 21:16:59 DEBUG : Projects/datafiles/photos.tar: Done copying chunk 2
2022-02-04 21:18:54 DEBUG : Projects/datafiles/photos.tar: Error copying chunk 1 (retry=true): Post "https://api002.backblazeb2.com/b2api/v1/b2_copy_part": read tcp 192.168.1.233:60621->206.190.215.15:443: i/o timeout: &url.Error{Op:"Post", URL:"https://api002.backblazeb2.com/b2api/v1/b2_copy_part", Err:(*net.OpError)(0xc0007a2050)}
2022-02-04 21:18:54 DEBUG : pacer: low level retry 1/10 (error Post "https://api002.backblazeb2.com/b2api/v1/b2_copy_part": read tcp 192.168.1.233:60621->206.190.215.15:443: i/o timeout)
2022-02-04 21:18:54 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2022-02-04 21:18:54 DEBUG : Projects/datafiles/photos.tar: Copying chunk 1 length 4294967296
Transferred:   	          0 B / 5.915 GiB, 0%, 0 B/s, ETA -
Checks:             59322 / 59322, 100%
Transferred:            0 / 1, 0%
Elapsed time:      7m44.0s
Transferring:
 *             Projects/datafiles/photos.tar:  0% /5.915Gi, 0/s, -

I let this go and it kept retrying for about 20 minutes with no success. Is this an issue with server-side copy handling?

Hello and welcome to the forum,

Can you copy a single file and post the full debug log?
rclone sync b2:bucket1/file b2:bucket2 --progress -vv
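
If sync is awkward with a single-file source, copyto is the standard way to address one object directly; something like this should produce an equivalent debug log (bucket and file names are placeholders):

rclone copyto b2:bucket1/file b2:bucket2/file --progress -vv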

Strangely, with a single large file (6GB), it did seem to work this morning. I was sure it was stalled yesterday though - maybe I had too many transfers going?

Completed in 2 chunks

2022-02-05 10:27:01 DEBUG : photos.tar: Need to transfer - File not found at Destination
2022-02-05 10:27:02 DEBUG : photos.tar: Starting copy of large file in 2 chunks (id "redacted")
2022-02-05 10:27:02 DEBUG : photos.tar: Copying chunk 2 length 2056101579
2022-02-05 10:27:02 DEBUG : photos.tar: Copying chunk 1 length 4294967296
2022-02-05 10:29:03 DEBUG : photos.tar: Done copying chunk 2
2022-02-05 10:30:55 DEBUG : photos.tar: Done copying chunk 1
2022-02-05 10:30:55 DEBUG : photos.tar: Finishing large file copy with 2 parts
2022-02-05 10:30:56 DEBUG : photos.tar: sha1 = 1365a27a4bc4f1e5a66c4d6e6fddd47edfe2f79e OK
2022-02-05 10:30:56 INFO  : photos.tar: Copied (server-side copy)
Transferred:   	    5.915 GiB / 5.915 GiB, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:      3m55.7s

Have you run that same command many times, and was yesterday the first time you got those errors?
If so, then perhaps it was a one-time glitch with network access.
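
If it happens again, a few standard rclone knobs may help with those read timeouts (the values here are illustrative guesses, not tested): raise --timeout (the IO idle timeout, default 5m), raise --low-level-retries (default 10, which is what the pacer log was counting against), lower --b2-copy-cutoff so each b2_copy_part request moves less data, and drop --transfers from 32 to reduce contention:

rclone sync b2:bucket1 b2:bucket2 --timeout 10m --low-level-retries 20 --b2-copy-cutoff 1G --transfers 8 -vv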

This should work...

Remember that it is actually copying >5 GB of data about the place, so it can take some time, and the time will vary according to how loaded the Backblaze host is.
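
As a rough worked example from the successful log above: 4,294,967,296 bytes + 2,056,101,579 bytes = 6,351,068,875 bytes, which is the 5.915 GiB reported. The two b2_copy_part calls ran in parallel, the ~2 GiB chunk finishing in about 2 minutes and the 4 GiB chunk in just under 4, which accounts for the 3m55.7s elapsed time.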
