WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes


I'm trying to migrate from GDrive File Stream to rclone mount, and I'm still trying to figure out the best settings for my environment.
I'm very close but hit a roadblock:
WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

If I add --vfs-cache-mode writes to the mount command, I no longer see this error, but I get:
RWFileHandle.Release error: failed to transfer file from cache to remote: corrupted on transfer: MD5 crypted hash differ
And after that, the file disappears from the mount, which is pretty scary: when I get this up and running for good, I want to be sure that files I move from local to remote are uploaded as expected.
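For context, enabling the suggested flag is just a matter of adding it to the mount invocation. A minimal sketch (remote name, drive letter, and cache path taken from the setup described below; this is the variant that produced the "corrupted on transfer" error here):

```sh
rclone mount Crypt: Q: ^
  --config "C:\rclone\rclone.conf" ^
  --vfs-cache-mode writes ^
  --cache-dir "D:\rclone\cache\mountcache"
```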

My goal is to run a mixed environment, including streaming, and I'd like to keep a large share of the downloaded chunks and recently uploaded files in cache.

Here's my setup for Gdrive >> Cache >> Crypt >> Mount.

Windows Server 2016

rclone v1.50.2

  • os/arch: windows/amd64
  • go version: go1.13.4

GDrive remote config:

[GDrive]
type = drive
scope = drive

Cache remote config:

[GDriveCache]
type = cache
remote = GDrive:/Folder1
chunk_size = 32M
info_age = 1w
chunk_total_size = 25G
chunk_path = D:\rclone\cache\chunk
db_path = D:\rclone\cache\chunkdb
tmp_upload_path = D:\rclone\cache\cachetmpupload
writes = true
chunk_clean_interval = 6h

Crypt remote config:

[Crypt]
type = crypt
remote = GDriveCache:
filename_encryption = standard
directory_name_encryption = true

Mount command (automounted via Scheduled Task as SYSTEM):

rclone --config "C:\rclone\rclone.conf" mount Crypt: Q: ^
  --buffer-size 64M ^
  --dir-cache-time 72h ^
  --drive-chunk-size 16M ^
  --timeout 1h ^
  --drive-pacer-min-sleep 120ms ^
  -o uid=65792 ^
  --cache-dir "D:\rclone\cache\mountcache" ^
  --low-level-retries 3 ^
  --log-level INFO


That probably means the file was altered while it was being transferred...

Can you try v1.51.0 which was just released?

How exactly are you transferring files to the mount?


I'm seeing the same behavior as before.

A simple Copy/Paste of a large file (>2GB) or "Create a new Text Document". The new text document sometimes works, sometimes doesn't.

Is it possible that one of the things "breaking" this is that I see an event per minute saying Cleaned the cache: objects 6 (was 6), total size 7 (was 19)? I don't get why that happens, as I thought this was governed by chunk_clean_interval. Am I missing some other setting, perhaps?

Thanks as always for helping out :slight_smile:

Can you try this without the cache backend? I am suspicious that it might be causing the problem.

It seems like you're right. It looks like it's working now, though I still see the INFO events saying "Cleaned the cache", always with the same number of files and the same size.
What do you think is the cause at the cache level? I'd really like to try and use it.

Also, something weird: in my config, when I launch the mount command, the cache is set to D:\rclone\cache\mountcache, and I can see the structure and the file I just uploaded there. However, as soon as I delete it there and open the file through the mount, the file still seems to be served from a local cache somewhere, and I have no idea where. It's definitely not in RAM, and it's definitely not being downloaded from GDrive.


That is the VFS cache, which is needed to make --vfs-cache-mode writes work.

I don't know exactly. I don't have a cache backend maintainer at the moment to ask :frowning:

One thing you could try is disabling the tmp_upload_path - that has caused problems in the past.

It could be in the tmp_upload_path?
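A sketch of what the cache remote section would look like with tmp_upload_path removed, so uploads go straight through instead of being staged locally (section name and values copied from the config posted above):

```ini
[GDriveCache]
type = cache
remote = GDrive:/Folder1
chunk_size = 32M
info_age = 1w
chunk_total_size = 25G
chunk_path = D:\rclone\cache\chunk
db_path = D:\rclone\cache\chunkdb
writes = true
chunk_clean_interval = 6h
; tmp_upload_path removed - uploads are no longer staged in a local folder
```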


I didn't have time to test the rest, unfortunately, and I've kept it running without the cache for now. The only reason I wanted to keep the cache was to avoid downloading the same chunks over and over. Is there any way to achieve that without the cache backend and without --vfs-cache-mode full (which, based on the docs, downloads the whole file before using it)?

If I have time, I'll try again with the cache without the tmp_upload_path (if it works then, perhaps I can fool it with a symbolic link?).
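On the re-download question above: the VFS layer can already read from the remote in ranged chunks without the cache backend, via the --vfs-read-chunk-size flags. A hedged sketch (values illustrative, not recommendations); note this only chunks the HTTP requests within an open file, it does not persist chunks to disk between opens the way the cache backend does:

```sh
rclone mount Crypt: Q: ^
  --config "C:\rclone\rclone.conf" ^
  --vfs-read-chunk-size 32M ^
  --vfs-read-chunk-size-limit 1G ^
  --buffer-size 64M
```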


That is correct, for now!

Let me know what happens!

Hey - sorry, I've been super busy, but I started looking at this again today after a week of API bans (something on the Plex side must have happened that is killing Drive File Stream).

Anyway, I'm back on with the cache, without tmp_upload_path and it works as expected.

I have an issue, though: right before a file finishes copying to the mount, rclone starts uploading it; once that upload completes, the file is seen as modified again by the original copy, and this ends up in a double upload.
In the screenshot above, I copied 17.5GB worth of files from the GDrive remote over to the mount point (Q:...), and it ended up uploading all files twice.

Second quick question: Is there a way to keep the chunk files also after a re-mount? Right now they're cleared when I stop the process and start it again, but with Plex indexing most files, I think it would be way better if I could keep them always there, unless they expire?


Are you using 1.51.0? I fixed a very similar bug recently: https://github.com/rclone/rclone/commit/84191ac6dc600987686de342315b435bfdd45007

If you remove --cache-db-purge from your command line I think it will persist!

Yep, I am. But for now, I think I've sorted it by using a second crypt remote (one that doesn't use the cache remote and connects straight to the original GDrive remote), and I'm running a copy between these two remotes.
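The workaround described above amounts to a second crypt section pointing at the same encrypted folder directly, bypassing the cache layer. A sketch (the [CryptDirect] name is an assumption; it must use the same password and salt as the existing Crypt remote so both decrypt the same data):

```ini
[CryptDirect]
type = crypt
remote = GDrive:/Folder1
filename_encryption = standard
directory_name_encryption = true
```

Transfers then run remote-to-remote, e.g. rclone copy CryptDirect:path Crypt:path, instead of going through the mount.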

Yes, I'm an idiot :slight_smile: - I just tested it and it retained the chunks.

Great - that is probably what I'd recommend.
