I cannot get rclone to work with a Backblaze B2 east-region account. Does it require a custom "endpoint" option in the advanced settings?
Run the command 'rclone version' and share the full output of the command.
rclone version
rclone v1.64.2
os/version: fedora 37 (64 bit)
os/kernel: 6.5.7-100.fc37.aarch64 (aarch64)
os/type: linux
os/arch: arm64 (ARMv8 compatible)
go/version: go1.21.3
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Backblaze B2 east region
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Failed test copy to a bucket on the Backblaze B2 east-region account; command and output below.
I validated that my ID and key are correct with WinSCP, which works.
# rclone -vvv copy testfile.txt b2-east:stretcher-reminted-murmurs/Backuppc
2023/10/25 14:31:58 DEBUG : rclone: Version "v1.64.2" starting with parameters ["rclone" "-vvv" "copy" "testfile.txt" "b2-east:stretcher-reminted-murmurs/Backuppc"]
2023/10/25 14:31:58 DEBUG : Creating backend with remote "testfile.txt"
2023/10/25 14:31:58 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/10/25 14:31:58 DEBUG : fs cache: adding new entry for parent of "testfile.txt", "/storage/backup/local/scripts"
2023/10/25 14:31:58 DEBUG : Creating backend with remote "b2-east:stretcher-reminted-murmurs/Backuppc"
2023/10/25 14:31:59 DEBUG : pacer: low level retry 1/10 (error incident id 144c2242ecf0-9b385a1991aeb798 (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/10/25 14:31:59 DEBUG : pacer: low level retry 2/10 (error incident id 2fab8da5242f-5028bcaba93b537e (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2023/10/25 14:31:59 DEBUG : pacer: low level retry 3/10 (error incident id 144c2242ecf0-1ca9a856453edc71 (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2023/10/25 14:31:59 DEBUG : pacer: low level retry 4/10 (error incident id 144c2242ecf0-2ab22038d686dd72 (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2023/10/25 14:31:59 DEBUG : pacer: low level retry 5/10 (error incident id 144c2242ecf0-92b8a0686bb4e45b (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2023/10/25 14:31:59 DEBUG : pacer: low level retry 6/10 (error incident id 144c2242ecf0-9f90a7099bff22e2 (500 internal_error))
2023/10/25 14:31:59 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2023/10/25 14:32:00 DEBUG : pacer: low level retry 7/10 (error incident id 144c2242ecf0-93c0fc612361bfc2 (500 internal_error))
2023/10/25 14:32:00 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2023/10/25 14:32:00 DEBUG : pacer: low level retry 8/10 (error incident id 144c2242ecf0-0062fdd9202b8ed2 (500 internal_error))
2023/10/25 14:32:00 DEBUG : pacer: Rate limited, increasing sleep to 2.56s
2023/10/25 14:32:02 DEBUG : pacer: low level retry 9/10 (error incident id 144c2242ecf0-6b53fd159081f298 (500 internal_error))
2023/10/25 14:32:02 DEBUG : pacer: Rate limited, increasing sleep to 5.12s
2023/10/25 14:32:04 DEBUG : pacer: low level retry 10/10 (error incident id 2fab8da5242f-327080fb42f975a4 (500 internal_error))
2023/10/25 14:32:04 DEBUG : pacer: Rate limited, increasing sleep to 10.24s
2023/10/25 14:32:04 Failed to create file system for "b2-east:stretcher-reminted-murmurs/Backuppc": failed to authorize account: failed to authenticate: incident id 2fab8da5242f-327080fb42f975a4 (500 internal_error)
Successful test copy to a bucket on the Backblaze B2 west-region account; command and output below.
# rclone -vvv copy testfile.txt b2-west:lkajdslitest/Backuppc
2023/10/24 21:58:40 DEBUG : rclone: Version "v1.64.2" starting with parameters ["rclone" "-vvv" "copy" "testfile.txt" "b2-west:lkajdslitest/Backuppc"]
2023/10/24 21:58:40 DEBUG : Creating backend with remote "testfile.txt"
2023/10/24 21:58:40 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/10/24 21:58:40 DEBUG : fs cache: adding new entry for parent of "testfile.txt", "/storage/backup/local/scripts"
2023/10/24 21:58:40 DEBUG : Creating backend with remote "b2-west:lkajdslitest/Backuppc"
2023/10/24 21:58:41 DEBUG : Couldn't decode error response: EOF
2023/10/24 21:58:41 DEBUG : Couldn't decode error response: EOF
2023/10/24 21:58:41 DEBUG : testfile.txt: Need to transfer - File not found at Destination
2023/10/24 21:58:43 DEBUG : testfile.txt: sha1 = 6476df3aac780622368173fe6e768a2edc3932c8 OK
2023/10/24 21:58:43 INFO : testfile.txt: Copied (new)
2023/10/24 21:58:43 INFO :
Transferred: 15 B / 15 B, 100%, 14 B/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 2.1s
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
I have tried multiple settings for the "endpoint" option, including leaving it at the default. Nothing works.
[root@exarkun scripts]# rclone config redacted
[archive-b2]
type = b2
account = XXX
key = XXX
hard_delete = true
[archive-b2-crypt]
type = crypt
remote = archive-b2:backuprI127JzAVoc3s3b
filename_encryption = off
directory_name_encryption = false
password = XXX
password2 = XXX
[b2-east]
type = b2
account = XXX
key = XXX
hard_delete = true
endpoint = https://s3.us-east-005.backblazeb2.com
[b2-west]
type = b2
account = XXX
key = XXX
hard_delete = true
[gdrive]
type = drive
client_id =
client_secret =
token = XXX
root_folder_id = XXX
### Double check the config for sensitive info before posting publicly
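Incidentally, the `endpoint` on the `[b2-east]` remote above may be a separate source of confusion: the native `b2` backend talks to Backblaze's own API (when `endpoint` is left empty it uses the default API endpoint), while `s3.us-east-005.backblazeb2.com` is the S3-compatible gateway, which would normally go on a `type = s3` remote instead. Roughly, as a sketch rather than a verified config (the `provider` value and key option names on the S3 variant are my assumption; check rclone's s3 backend docs):

```ini
; native B2 backend: no endpoint needed, regardless of region
[b2-east]
type = b2
account = XXX
key = XXX
hard_delete = true

; S3-compatible alternative (assumed layout; verify against rclone's s3 docs)
[b2-east-s3]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = https://s3.us-east-005.backblazeb2.com
```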
A log from the command that you were trying to run with the -vv flag
Yes, I do have a separate account for each region; that's reflected in the commands I ran. I also tested with WinSCP to validate that nothing I did was wrong.
Seeing this weird address, I did some more testing. For some reason DNS resolves the east endpoint to 192.168.1.1, which is nonsensical (that's a private address), especially since the west endpoint resolves to 206.190.208.254 just fine for me. Testing further, EVERY public DNS server I query responds with 192.168.1.1 for the east endpoint:
# while read line; do host s3.us-east-005.backblazeb2.com $line; done < publicdns
Using domain server:
Name: 8.8.8.8
Address: 8.8.8.8#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 8.8.4.4
Address: 8.8.4.4#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 76.76.2.0
Address: 76.76.2.0#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 76.76.10.0
Address: 76.76.10.0#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 9.9.9.9
Address: 9.9.9.9#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 149.112.112.112
Address: 149.112.112.112#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 1.1.1.1
Address: 1.1.1.1#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 1.0.0.1
Address: 1.0.0.1#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 208.67.222.222
Address: 208.67.222.222#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
Using domain server:
Name: 208.67.220.220
Address: 208.67.220.220#53
Aliases:
s3.us-east-005.backblazeb2.com has address 192.168.1.1
s3.us-east-005.backblazeb2.com has address 192.168.1.1
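Those answers can be sanity-checked mechanically: 192.168.1.1 sits in the RFC 1918 private range 192.168.0.0/16, which no public hostname should legitimately resolve to. A small sketch (the helper name `is_private_ip` is mine):

```shell
#!/bin/sh
# is_private_ip: succeeds (exit 0) if the dotted-quad argument falls in an
# RFC 1918 private range: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
is_private_ip() {
    case "$1" in
        10.*)                                   return 0 ;;
        192.168.*)                              return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
        *)                                      return 1 ;;
    esac
}

for ip in 192.168.1.1 206.190.208.254; do
    if is_private_ip "$ip"; then
        echo "$ip: PRIVATE (bogus answer for a public hostname)"
    else
        echo "$ip: public"
    fi
done
```

Running it prints `192.168.1.1: PRIVATE (bogus answer for a public hostname)` and `206.190.208.254: public`, matching what the queries above show: the east endpoint is being answered with a private address while the west endpoint resolves normally.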
I'm located in Florida and am using Frontier internet. I also logged into a remote work system, located in a Midwest city, and it has the same issue with east-endpoint DNS resolution.
I figured out it was the "advanced security" setting on my eero Pro that was causing this.
My work datacenter must have a similar security stack that flags this domain.
Looking closer, I found that this URL was flagged a few years ago as a malware and phishing site, which likely causes the whole domain to be blocked by some security stacks...