"operation not permitted" when flushing after opening and closing a O_WRONLY|O_APPEND file without writing it -- Cloudflare's R2 only

What is the problem you are having with rclone?

Run the command 'rclone version' and share the full output of the command.

Hi!

I'm using rclone v1.69:

# rclone version
rclone v1.69.0
- os/version: debian 12.9 (64 bit)
- os/kernel: 6.1.0-28-cloud-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Cloudflare R2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I'm mounting an R2 bucket like this:

# rclone mount r2:some-bucket /some-bucket-r2/ -vv

...and I have a Python code snippet that just checks if a file on the bucket can be read and written, like this:

# python
Python 3.10.10 | packaged by conda-forge | (main, Mar 24 2023, 20:08:06) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> filename = '/some-bucket-r2/Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif'
>>> file_object = open(filename, "ab", 8)
>>> file_object.close()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
PermissionError: [Errno 1] Operation not permitted
>>> 

The rclone config contents with secrets removed.

Here it is. Most settings are tuned for our use case (we handle pretty big files in our bucket):

[r2]
type = s3
provider = Cloudflare
access_key_id = <access_key>
region = auto
endpoint = https://<endpoint>.r2.cloudflarestorage.com
secret_access_key = <secret_access_key>
profile = r2

# some-bucket rclone global params (https://rclone.org/flags/):
buffer-size = 64M
checksum = false  # checksum should = !s3-disable-checksum above/below
fast-list = true 
human-readable = true
log-level = INFO
stats-one-line-date = true
streaming-upload-cutoff = 64M # when a streaming write is switched to chunked
update = true
use-json-log = true
use-mmap = true
use-server-modtime = true
user-agent = rclone-agent/v1

# some-bucket rclone s3 params (https://rclone.org/s3/):
s3-chunk-size = 256M
s3-disable-checksum = true  # checksum should = !s3-disable-checksum above/below
s3-memory-pool-flush-time = 5m0s
s3-memory-pool-use-mmap = true
s3-upload-concurrency = 2
s3-upload-cutoff = 256M
s3-use-accelerate-endpoint = true
# s3-storage-class = INTELLIGENT_TIERING  # N/A for Cloudflare

# some-bucket rclone mount params (https://rclone.org/commands/rclone_mount/)
allow-other = true
attr-timeout = 4s  # how long kernel holds cache before ipc to rclone
cache-dir = /tmp/some-bucket-r2
dir-cache-time = 15m0s
poll-interval = 0  # s3 doesn't support polling
max-read-ahead = 256M
transfers = 32  # max number of write back uploads to S3 at once
umask = 2  # User: r/w/x, Group: r/w/x, Other: r/x
remotePath = some-bucket
vfs-cache-max-age = 4h
vfs-cache-mode = full
vfs-cache-min-free-size = 20G
vfs-cache-max-size = 120G 
vfs-read-ahead = 256M
vfs-read-chunk-size = 256M
vfs-read-chunk-size-limit = 1G
vfs-write-back = 30s  # how long to wait after file handle closed before writing back to s3
vfs-write-wait = 4s  # affects write-ahead, don't set too high probably
write-back-cache = true

A log from the command with the -vv flag

2025/01/24 00:30:24 DEBUG : /: Lookup: name="Lion 22077"
2025/01/24 00:30:24 DEBUG : /: >Lookup: node=Lion 22077/, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/: Attr:
2025/01/24 00:30:24 DEBUG : Lion 22077/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/: Lookup: name="registered"
2025/01/24 00:30:24 DEBUG : Lion 22077/: >Lookup: node=Lion 22077/registered/, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/: Attr:
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxr-xr-x, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/: Lookup: name="2025-01-20 Elephant R XYZ RGB.tif"
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/: >Lookup: node=Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: Attr:
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: >Attr: a=valid=1s ino=0 size=117262627 mode=-rw-r--r--, err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: Open: flags=OpenWriteOnly+OpenAppend
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: Open: flags=O_WRONLY|O_APPEND
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: >Open: fd=Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w), err=<nil>
2025/01/24 00:30:24 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: >Open: fh=&{Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w)}, err=<nil>
2025/01/24 00:30:25 DEBUG : &{Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w)}: Flush:
2025/01/24 00:30:25 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: WriteFileHandle.Flush unwritten handle, writing 0 bytes to avoid race conditions
2025/01/24 00:30:25 ERROR : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
2025/01/24 00:30:25 DEBUG : &{Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w)}: >Flush: err=operation not permitted
2025/01/24 00:30:25 DEBUG : &{Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w)}: Release:
2025/01/24 00:30:25 DEBUG : Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif: WriteFileHandle.Release closing
2025/01/24 00:30:25 DEBUG : &{Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif (w)}: >Release: err=<nil>

NOTE: This error doesn't happen when using AWS S3, given similar code, rclone version and configuration (we're testing switching from S3 to R2).

We're considering different workarounds in our code, but I think there may be a bug in rclone's Cloudflare implementation. I want to cache reads from my bucket; I don't want to truncate the open file, I just want to check whether I can read and write it using the typical read()/write() calls, not via filesystem permissions. Why doesn't this happen with S3?
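
One workaround we're weighing is to just probe for this and treat EPERM on close as "append not possible on this mount", since without --vfs-cache-mode >= writes an append can't succeed anyway. A rough sketch (the helper name is ours, nothing from rclone):

def can_append(path):
    # Hypothetical probe: open O_WRONLY|O_APPEND and close without writing.
    # On a mount without --vfs-cache-mode >= writes, the flush on close
    # raises PermissionError (EPERM), which we read as "append unsupported".
    try:
        open(path, "ab").close()
        return True
    except PermissionError:
        return False

A read probe with open(path, "rb") doesn't hit this, since reads work at any --vfs-cache-mode level.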

Thank you!

welcome to the forum,

that should apply to most/all storage providers, as it relates to the mount command.


i know from prior experience that the error will occur with s3.
to confirm, just now i did three quick tests using aws, wasabi and cloudflare.

in all cases, i get the same basic error.

ERROR : file.ext: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
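
for example, on a mount without --vfs-cache-mode writes, something as simple as extending a file in place triggers that error (mountpoint and filename here are hypothetical):

truncate --size=+1 /mnt/remote/file.ext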

so far, not seeing a rclone bug?

Hi @asdffdsa!

Thank you for answering this question so fast.

OK, here's where I tell you the complete story: we're not mounting the AWS S3 bucket with rclone mount but with this rclone k8s CSI driver instead:

To sum the whole thing up, I'd assumed it just runs rclone mount on the AWS S3 bucket...

...and I'm passing mostly the same parameters I described in the rclone.conf for Cloudflare, but via volumeAttributes in the PersistentVolume manifest for S3.

In any case, and despite my confusion between our AWS S3 and Cloudflare setups, if the user does this:

>>> filename = '/some-bucket-r2/Lion 22077/registered/2025-01-20 Elephant R XYZ RGB.tif'
>>> file_object = open(filename, "ab", 8)
>>> file_object.close()

Should this implicit underlying write exist at all? It's not something the user asked for...

WriteFileHandle.Flush unwritten handle, writing 0 bytes to avoid race conditions

I know what a race condition is, and I'm not questioning how it's managing the situation, but shouldn't there be a way of saying "I know what I'm doing, rclone; I'm managing file locking on my own, don't add spurious writes for me"?

Thank you very much!

global flags are not supported in the config file.

fast-list does nothing on a mount command

in the config file, that should be chunk_size = 256M
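
as a sketch, the backend options could live in the config file under their underscore names without the s3- prefix (keys mapped from the values you posted), with the vfs/mount flags moved to the rclone mount command line instead:

[r2]
type = s3
provider = Cloudflare
access_key_id = <access_key>
secret_access_key = <secret_access_key>
region = auto
endpoint = https://<endpoint>.r2.cloudflarestorage.com
chunk_size = 256M
upload_cutoff = 256M
upload_concurrency = 2
disable_checksum = true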

It is the append "a" that is causing the problem here.

Do you need that?

If you do need append, then add --vfs-cache-mode writes, which will fix the problem.
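
For example, taking the mount command from the top of the thread:

# rclone mount r2:some-bucket /some-bucket-r2/ --vfs-cache-mode writes -vv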