What is the problem you are having with rclone?
Unable to open ("read") files through rclone -> Cloudflare CDN worker -> B2
Run the command 'rclone version' and share the full output of the command.
$ rclone version
rclone v1.57.0-DEV - os/version: almalinux 9.4 (64 bit)
- os/kernel: 5.14.0-70.13.1.el9_0.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2 - go/linking: dynamic
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Backblaze B2, attempting to use Cloudflare's CDN worker because of the egress savings
The command you were trying to run (eg rclone copy /tmp remote:tmp)
$ rclone --b2-download-url https://<proxy>.<account>.workers.dev -vv --dump headers mount b2crypt: ./b2crypt/ --vfs-cache-mode writes --log-file rclone.log
$ less ./b2crypt/media_nomenclature.txt
<exit because of read errors>
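For context on what `--b2-download-url` changes: as far as I understand it, rclone only swaps the scheme and host of the download URL for the worker's, and still sends the native-API path `/file/<bucket>/<object>` to it — which matches the request in the log below. A rough sketch with placeholder names (host and bucket are made up for the example):

```shell
# rclone normally downloads from a B2 native download endpoint, e.g.:
default_url="https://f000.backblazeb2.com/file/mybucket/crypt/obj"
# With --b2-download-url, only the scheme+host part is replaced; the
# /file/<bucket>/<object> path is sent to the worker unchanged.
download_url="https://proxy.example.workers.dev"
path="${default_url#https://f000.backblazeb2.com}"
echo "${download_url}${path}"
```

So whatever the worker does, it has to accept exactly that `/file/<bucket>/<object>` path.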
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[b2]
type = b2
account = <key-id>
key = <key>
hard_delete = true
[b2crypt]
type = crypt
remote = b2:<bucket>/crypt/
password = <crypt-password>
A log from the command that you were trying to run with the -vv flag
$ cat rclone.log
2024/07/05 09:31:32 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/07/05 09:31:32 DEBUG : Couldn't decode error response: invalid character '<' looking for beginning of value
2024/07/05 09:31:32 DEBUG : &{media_nomenclature.txt (r)}: >Read: read=0, err=no such file or directory
2024/07/05 09:31:32 DEBUG : media_nomenclature.txt: Attr:
2024/07/05 09:31:32 DEBUG : media_nomenclature.txt: >Attr: a=valid=1s ino=0 size=726 mode=-rw-r--r--, err=<nil>
2024/07/05 09:31:32 DEBUG : &{media_nomenclature.txt (r)}: Read: len=4096, offset=0
2024/07/05 09:31:32 DEBUG : media_nomenclature.txt: ChunkedReader.openRange at 0 length 134217728
2024/07/05 09:31:32 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/07/05 09:31:32 DEBUG : HTTP REQUEST (req 0xc0004bd300)
2024/07/05 09:31:32 DEBUG : GET /file/<bucket>/crypt/laa49rudcp79g72adgqg7ddeq39kathvkh8035ig9a7553pned70 HTTP/1.1
Host: <proxy>.<account>.workers.dev
User-Agent: rclone/v1.57.0-DEV
Authorization: XXXX
Range: bytes=0-
2024/07/05 09:31:32 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/07/05 09:31:32 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/07/05 09:31:32 DEBUG : HTTP RESPONSE (req 0xc0004bd300)
2024/07/05 09:31:32 DEBUG : HTTP/2.0 404 Not Found
Content-Length: 137
Alt-Svc: h3=":443"; ma=86400
Cache-Control: max-age=0, no-cache, no-store
Cf-Cache-Status: DYNAMIC
Cf-Ray: 89e80f2bcd494bd8-BUF
Content-Type: application/xml
Date: Fri, 05 Jul 2024 14:31:32 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=qwqVy5dvC2Z3%2FQ5BoJKxVU2Jv0XDuxEO0Dqwhgq3Of%2BAcCFniCqeCAMcK2ZflQjan6ojwTaNZa8sUBPrOXiRdp66Z%2B1Iu%2BOGi4wWzy6n72ku%2FP0CY31Oqm41hPeVk%2FjNXZ3FS578pcIEJIMPTcjv%2BgVRPv7OZISHJeVf%2BGMOHXa3"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
Strict-Transport-Security: max-age=63072000
X-Amz-Id-2: aMT9h7mGNOCVklGNhM1A43zDlMYJkLWL1
X-Amz-Request-Id: a1dad0e5fca3663b
I do not know why I see these 404 errors, since media_nomenclature.txt definitely exists:
$ ls ./b2crypt/
<other-files>
media_nomenclature.txt
<other-files>
Notes:
This is my wrangler.toml for Cloudflare's wrangler:
[vars]
B2_APPLICATION_KEY_ID = <key-id>
B2_ENDPOINT = <b2-endpoint>
# Putting the bucket name directly seemed the easiest to me; I doubt this is a problem
BUCKET_NAME = <bucket-name>
ALLOW_LIST_BUCKET = "false"
I have uploaded the application key from the relevant B2 app-key as an encrypted secret using echo <key> | wrangler secret put B2_APPLICATION_KEY.
I am able to list files; however, I am unable to open them. I've been trying this for a few days, creating new keys and redoing the whole process, and I can't figure out what I'm doing wrong.
Help is much appreciated!
Thanks.
Edit:
The fix is in the link provided in the "solution": switch to the s3 backend for your B2 bucket, and Cloudflare stops having seizures on your files. The issue was also the same as in the link: Cloudflare prepended /file/ to the actual name of the file in the path, which prevented it from working, and switching to s3 fixed it.
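The X-Amz-* headers on the 404 response suggest the worker forwards to B2's S3-compatible API, so my guess at the failure mode is key construction: the worker has to strip the native-API `/file/<bucket>/` prefix from the request path to get the S3 object key, and if it doesn't, B2 looks up a key that doesn't exist. A hypothetical sketch with placeholder names (this is not the actual worker code):

```shell
# rclone's b2 backend requests the native-API path:
requested="/file/mybucket/crypt/obj"
# A worker forwarding to B2's S3 API must strip "/file/<bucket>/" to get the
# object key; forwarding the raw path produces a key that does not exist.
naive_key="${requested#/}"                  # wrong key (extra prefix)
correct_key="${requested#/file/mybucket/}"  # the real object key
echo "$naive_key"
echo "$correct_key"
```

With the s3 backend, rclone sends plain S3 object paths in the first place, so no `/file/<bucket>/` rewriting is needed at all — which would explain why the switch fixed it.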