I'm trying to migrate from GDrive File Stream to rclone mount, and I'm still trying to figure out the best settings for my environment.
I'm very close but hit a roadblock: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
If I add --vfs-cache-mode writes to the mount command, I no longer see this error, but I get: RWFileHandle.Release error: failed to transfer file from cache to remote: corrupted on transfer: MD5 crypted hash differ
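For context, the mount invocation looks roughly like this — a sketch only, with an assumed crypt remote name "gcrypt" and example flag values, not my literal command:

```bat
rem Sketch only: mounting the crypt remote with the VFS write cache enabled.
rem "gcrypt", the drive letter and the flag values are assumptions.
rclone mount gcrypt: Q: ^
  --vfs-cache-mode writes ^
  --log-level INFO
```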
And after that, the file disappears from the mount, which is pretty scary: before I put this up and running for good, I want to be sure that any file I move from local to remote is uploaded as expected.
My goal is to run a mixed environment, including streaming, and I would like to keep a large part of the downloaded chunks and recently uploaded files in cache.
Here's my setup for Gdrive >> Cache >> Crypt >> Mount.
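Roughly, the rclone.conf chain looks like this (remote names and values are illustrative placeholders, not my literal config):

```ini
[gdrive]
# Raw Google Drive remote
type = drive
client_id = ...
client_secret = ...

[gcache]
# Cache backend wrapping the drive remote
type = cache
remote = gdrive:media
chunk_size = 10M
chunk_total_size = 10G
chunk_clean_interval = 1m

[gcrypt]
# Crypt layer on top of the cache; this is what gets mounted
type = crypt
remote = gcache:
filename_encryption = standard
```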
Is it possible that one of the things "breaking" this is the fact that I see one event per minute saying Cleaned the cache: objects 6 (was 6), total size 7 (was 19)? I don't get why that happens, as I thought this was controlled by chunk_clean_interval. Am I missing some other setting, perhaps?
It seems like you're right. It looks like it's working now, though I still see the INFO events saying "Cleaned the cache", always with the same number of files and the same size. What do you think is causing that at the cache level? I'd really like to keep trying to use it.
Also, something weird: in my config, when I launch the mount command, the cache is set to D:\rclone\cache\mountcache, and I can see the structure and the file I just uploaded there. However, as soon as I delete it and open the file through the mount, the file still seems to be opened from some local cache, though I have no idea where from; it's definitely not in RAM, and it's definitely not being downloaded from Gdrive.
I didn't have time to test the rest, unfortunately, so I've kept it running without the cache for now. The only reason I wanted to keep the cache was to avoid downloading the same chunks over and over. Is there any way to achieve that without the cache and without --vfs-cache-mode full (which, based on the docs, will download the whole file before using it)?
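One thing I was considering trying, from my reading of the docs (untested by me, and the values here are guesses, not tuned):

```bat
rem Sketch: read files in growing chunks without the cache backend and
rem without --vfs-cache-mode full. Remote name and values are illustrative.
rclone mount gcrypt: Q: ^
  --vfs-read-chunk-size 32M ^
  --vfs-read-chunk-size-limit 2G ^
  --buffer-size 64M
```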
If I have time, I'll try again with the cache but without tmp_upload_path (if it works then, perhaps I can fool it with a symbolic link?).
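The symbolic-link idea would be something like this on Windows (admin prompt; both paths are examples only, not my real ones):

```bat
rem Sketch: redirect the configured tmp_upload_path to another location
rem via a directory symlink. Paths are examples only.
mklink /D D:\rclone\cache\tmp_upload E:\rclone_tmp_upload
```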
Hey - sorry, I've been super busy, but I started looking back at this today after a week of API bans (something Plex-wise must have happened that is killing Drive File Stream).
Anyway, I'm back on the cache, without tmp_upload_path, and it works as expected.
I have an issue though: as soon as a file is copied to the mounted remote, rclone starts uploading it; once that upload completes, the file is seen as modified again by the original copy operation, and this ends in a double upload:
In the screenshot above, I copied 17.5GB worth of files from the gdrive remote over to the mount point (Q:...), and it ended up uploading every file twice.
Second quick question: is there a way to keep the chunk files after a re-mount? Right now they're cleared when I stop the process and start it again, but with Plex indexing most files, I think it would be much better if I could keep them around permanently, unless they expire.
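In case it's relevant, these are the cache flags I'm aware of from the docs (values illustrative, and I may be misreading how they interact):

```bat
rem Sketch: the cache backend keeps chunks on disk under --cache-chunk-path,
rem and --cache-db-purge wipes the chunk database at startup, so leaving
rem that flag off should (I think) let chunks survive a remount.
rclone mount gcrypt: Q: ^
  --cache-chunk-path D:\rclone\cache\chunks ^
  --cache-chunk-total-size 10G
```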
Thanks
Yep, I am. But for now, I think I've sorted it by using a second crypt remote (one that doesn't use the cache remote and connects straight to the original gdrive remote), and I'm running a copy from/to these two remotes.
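Roughly, the second remote looks like this (remote names are illustrative; the crypt password/salt must match the original crypt remote so both see the same encrypted data):

```ini
[gcrypt]
# Existing crypt remote used by the mount: crypt -> cache -> gdrive
type = crypt
remote = gcache:

[gcrypt-direct]
# Same crypt keys, but pointing straight at the drive remote,
# bypassing the cache layer entirely
type = crypt
remote = gdrive:media
```

The transfer is then just an rclone copy between gcrypt-direct: and the destination, instead of copying through the mounted drive letter.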
Yes, I'm an idiot - I just tested it and it retained the chunks.