Rclone stuck on 0 bytes

What is the problem you are having with rclone?

I'm trying to transfer files from my small Linux box (Proxmox) up to Backblaze B2 through rclone. The directory is around 3TB, with each file averaging between 50-100GB.

I have managed to get 3 of the smallest (<10 GB) files up to the cloud with relative ease, but after that I'm finding it impossible to upload further files.

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.2
- os/version: debian 12.8 (64 bit)
- os/kernel: 6.8.12-2-pve (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.3
- go/linking: static
- go/tags: none

I can verify that this is the latest version of rclone.

Which cloud storage system are you using? (eg Google Drive)

Backblaze b2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copyto {source_file_name} b2:{my_bucket}/{dest_file_name} --no-check-dest --no-traverse -P -vv --dump headers

I have tried with and without the --no-check-dest and --no-traverse flags, but from what I can tell they don't make any difference.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[b2]
type = b2
account = XXX
key = XXX

A log from the command that you were trying to run with the -vv flag

# rclone copyto {source_file_name} b2:{my_bucket}/{dest_file_name} --no-check-dest --no-traverse -P -vv --dump headers
2024/11/28 14:15:02 DEBUG : Setting --fast-list "true" from environment variable RCLONE_FAST_LIST="1"
2024/11/28 14:15:02 NOTICE: Automatically setting -vv as --dump is enabled
2024/11/28 14:15:02 DEBUG : rclone: Version "v1.68.2" starting with parameters ["rclone" "copyto" "{source_file_name}" "b2:{bucket}/{dest_file_name}" "--no-check-dest" "--no-traverse" "-P" "-vv" "--dump" "headers"]
2024/11/28 14:15:02 DEBUG : Creating backend with remote "{dest_file_name}"
2024/11/28 14:15:02 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2024/11/28 14:15:02 DEBUG : fs cache: adding new entry for parent of "{source_file_name}", "/path/to/folder"
2024/11/28 14:15:02 DEBUG : Creating backend with remote "b2:{bucket}/"
2024/11/28 14:15:02 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2024/11/28 14:15:02 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/11/28 14:15:02 DEBUG : HTTP REQUEST (req 0xc0001383c0)
2024/11/28 14:15:02 DEBUG : GET /b2api/v1/b2_authorize_account HTTP/1.1
Host: api.backblazeb2.com
User-Agent: rclone/v1.68.2
Authorization: XXXX
Accept-Encoding: gzip

2024/11/28 14:15:02 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/11/28 14:15:03 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/28 14:15:03 DEBUG : HTTP RESPONSE (req 0xc0001383c0)
2024/11/28 14:15:03 DEBUG : HTTP/1.1 200
Content-Length: 883
Cache-Control: max-age=0, no-cache, no-store
Connection: keep-alive
Content-Type: application/json;charset=UTF-8
Date: Thu, 28 Nov 2024 14:15:03 GMT
Server: nginx
Strict-Transport-Security: max-age=63072000

2024/11/28 14:15:03 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/28 14:15:03 DEBUG : fs cache: renaming cache item "b2:{bucket}/" to be canonical "b2:{bucket}"
2024/11/28 14:15:03 DEBUG : {dest_file_name}: Need to transfer - File not found at Destination
2024/11/28 14:15:03 DEBUG : {dest_file_name}: multi-thread copy: disabling buffering because source is local disk

At the time of writing, the upload has been running for 15 minutes and has yet to pass a single byte down the wire. It feels like it's blocked on something.

While playing around with different values, I accidentally got rclone to create a copy of the file I'm trying to send on that same drive (which made me think it was working) - that started instantly. So I'm wondering if the program is waiting on a response from somewhere before it can kick off?

Welcome to the forum,

I know that with S3 remotes, upload is a two-step process. Maybe that applies to B2 remotes too?

  1. rclone calculates the checksum of the local file. The larger the source file, the longer that takes.
  2. rclone uploads the file.

rclone calculates the checksum of the local file. The larger the source file, the longer that takes.

Do you have a sense of how long that has taken in the past? I feel that 15 minutes is a rather long time to be calculating a checksum. For context, the "test" file (which is a genuine backup attempt) that I'm trying to upload is 71GB in size.

I'll try to leave this overnight and see whether it resolves itself, but I am skeptical.

Test it yourself. Check how long it takes to run sha1sum, for example. That will give you some baseline. Depending on CPU and disk, it can take some time to calculate the hash of a 100GB file.
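
For example, something like this against one of the files you're actually trying to upload (the path is just a placeholder) will give you a rough baseline:

time sha1sum /path/to/one/of/your/backup/files

That is roughly the same work rclone has to do before the upload can start.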

Depending on CPU and disk, it can take some time to calculate the hash of a 100GB file.

Yeah, I can see straight away that sha1sum is taking a good chunk of time to work its magic - I don't think the IO is the bottleneck, but it's certainly not fast.

Is there any way to do a blind copy to the remote through rclone? I know that I won't have a duplicate in the remote for any of the files that I want to upload. I thought that --no-check-dest and --no-traverse would sort that?

Maybe rclone isn't the right tool for the problem I'm trying to solve?

It all depends on how much you care about data integrity. If not so much:

--b2-disable-checksum

Disable checksums for large (> upload cutoff) files.

Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
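
So for your command above, that would be something like (keeping your placeholders):

rclone copyto {source_file_name} b2:{my_bucket}/{dest_file_name} --b2-disable-checksum -P -vv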

--b2-disable-checksum

Yep, that's the one - it started straight away.

I think I was initially hesitant to use any of the "advanced" features on B2 due to my inexperience.

I'll proceed without the checksum for now, and if there's an obvious problem with the upload I'll just swallow the additional time required to compute the checksum by running the whole thing overnight.

Based on the speeds I'm getting for the upload (and the disk usage I could see in htop when running the checksum), I have a feeling the checksum could take 2-4 hours to process the entire 71GB file.
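
I'm assuming that if I do want to verify an upload later without the stored SHA1, something like this would still work by re-downloading the data and comparing it (the local path is a placeholder), so I'll cross that bridge if I get there:

rclone check /path/to/local/folder b2:{my_bucket} --download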

Thanks so much for the very fast responses!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.