B2 bucket-to-bucket copy seems to use lots of bandwidth, even though B2 claims the API integration uses server-side copy

What is the problem you are having with rclone?

When I use rclone copy to transfer between two buckets on B2 cloud storage, the copy consumes massive amounts of our bandwidth. Stopping my copy script correlates with me being able to browse the web smoothly again, and one of our IT specialists complained about not being able to download an Ubuntu image (those complaints went away when I stopped copying). This article (B2 Copy File: Enabling Synthetic Backup and Bucket to Bucket Copies) lists rclone as one of the third-party integrations that B2's team says it worked with on the server-side copy API. I had to set the --bwlimit option to prevent our data center from slowing to a crawl.

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.0
- os/version: ubuntu 24.04 (64 bit)
- os/kernel: 6.8.0-51-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy "ManagedFile:ManagedFile/ABC" "ManagedFileS3:ManagedFileS3/ABC" \
 --config /path/to/rclone.conf \
 --transfers 128 \
 --bwlimit 15M \
 --progress

Where "ABC" is an account-based directory that changes with each account. The copying itself works fine, it just eats all of our bandwidth.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[ManagedFile]
type = b2
account = XXX
key = XXX

[ManagedFileS3]
type = b2
account = XXX
key = XXX

A log from the command that you were trying to run with the -vv flag

2025/01/25 16:48:15 INFO  : Starting bandwidth limiter at 15Mi Byte/s
2025/01/25 16:48:15 DEBUG : rclone: Version "v1.69.0" starting with parameters ["rclone" "copy" "ManagedFile:ManagedFile/ABC" "ManagedFileS3:ManagedFileS3/ABC" "--config" "/path/to/rclone.conf" "--transfers" "128" "--bwlimit" "15M" "--progress" "-vv"]
2025/01/25 16:48:15 DEBUG : Creating backend with remote "ManagedFile:ManagedFile/ABC"
2025/01/25 16:48:15 DEBUG : Using config file from "/path/to/rclone.conf"
2025/01/25 16:48:15 DEBUG : Creating backend with remote "ManagedFileS3:ManagedFileS3/ABC"

I can see from the log here that it is using b2_upload_file instead of the correct API, b2_copy_file:

2025/01/25 16:50:24 DEBUG : pacer: low level retry 1/1 (error Post "https://pod-020-3012-06.backblaze.com/b2api/v1/b2_upload_file/d847642d7cc4969a92470a1a/c002_v0203012_t0047"
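
(A quick way to confirm which B2 calls a run is making is to filter the full debug log for the API names; the log file name here is just a placeholder.)

grep -oE 'b2_(copy_file|upload_file|copy_part|upload_part)' rclone-debug.log | sort | uniq -c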

welcome to the forum,

might try --server-side-across-configs


fwiw, for testing, copy a single file, instead of a directory.
and post a full, complete debug log.


for a deeper look at the api calls, --dump=headers --retries=1
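
for example, something like this, using a single test file and writing the debug output to a log file (the file name test.bin and the log path are just placeholders):

rclone copy "ManagedFile:ManagedFile/ABC/test.bin" "ManagedFileS3:ManagedFileS3/ABC" \
 --config /path/to/rclone.conf \
 --server-side-across-configs \
 --dump=headers --retries=1 \
 -vv --log-file /tmp/rclone-b2-test.log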

Good idea; the verbosity of my mass copy operation is why I didn't post a full log to begin with. We are in disaster recovery mode at the moment. Long story short, we can't really access the old bucket with our existing tools/apps because there are corrupt metadata files in a folder somewhere, so we are moving our users' files to a new bucket, one client folder at a time.
without-server-side.log (20.1 KB)

I actually tried this flag before posting, but it results in a 401 unauthorized on every attempt to move files, which is why I assumed it does not work with B2 as a backend. That said, when I add --dump=headers, the log confirms it is indeed calling the correct API, b2_copy_file. All of the b2_authorize_account calls succeed.
server-side-across-buckets.log (638.8 KB)

OK, so I had a bit of inspiration and thought, "Perhaps the people who normally use B2 aren't using separate application keys for every bucket like we are." Sure enough, that seems to be the cause of the issue. If I use our B2 account's master application key to copy between the buckets (i.e. rclone copy Master:ManagedFile/ABC Master:ManagedFileS3/ABC --server-side-across-configs ...), server-side copy works. But if I use the individual keys for the two buckets, rclone downloads and re-uploads. We can get past our issue using the master key for now, but I think this may be a bug (assuming it is not a shortcoming of the B2 API, which I could easily believe).
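
For anyone hitting the same issue, here is a rough sketch of the working setup, reusing the flags from my original command (credentials redacted; as far as I understand, for the master application key the account value is just the B2 account ID):

[Master]
type = b2
account = XXX
key = XXX

rclone copy "Master:ManagedFile/ABC" "Master:ManagedFileS3/ABC" \
 --config /path/to/rclone.conf \
 --server-side-across-configs \
 --transfers 128 \
 --progress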

After thinking about it some more, I realized that yes, it would seem to be impossible to copy across buckets with separate application keys using the server-side API. The auth token is derived from the application key, and only one auth token can be supplied to a call to b2_copy_file, so I imagine this can't be implemented. I will mark this as solved.
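
To make that concrete, here is a rough sketch of what a raw b2_copy_file call looks like (all values are placeholders; the API host and the auth token would come from b2_authorize_account). There is only one Authorization header, so the single key behind that token has to be allowed to both read the source file and write to the destination bucket:

curl -s "https://apiNNN.backblazeb2.com/b2api/v2/b2_copy_file" \
 -H "Authorization: <accountAuthorizationToken>" \
 -d '{
   "sourceFileId": "<fileId from the source bucket>",
   "destinationBucketId": "<bucketId of the destination bucket>",
   "fileName": "ABC/example.bin"
 }'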

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.