About --vfs-cache-mode full: if I watch an entire video file and the free space is less than the total size of the video file (let's say I have 20 GB of free space and watch a 40 GB video), what will it do, since there's only 1 sparse file in the cache dir that keeps growing? Does --vfs-read-chunk-size-limit help with limiting the total size of a sparse file?
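For reference, here is a sketch of a mount invocation with the flags involved (remote name, mount point, and values are placeholders). My understanding is that --vfs-read-chunk-size-limit only caps how large each HTTP range request to the backend can grow, while --vfs-cache-max-size is the flag that bounds the on-disk cache, on a best-effort basis:

```shell
# Placeholder remote and mount point; flag values are examples only.
rclone mount remote: /mnt/remote \
    --vfs-cache-mode full \
    --vfs-cache-max-size 20G \
    --vfs-cache-poll-interval 1m \
    --vfs-read-chunk-size 64M \
    --vfs-read-chunk-size-limit 512M
```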
I've been trying the latest beta out today in writes mode, as the drawbacks of full mode's caching (mainly not being able to keep chunks of a file, only the whole file or nothing) don't seem worth it to me at the moment.
Read operations all seem to be working great, faster than with a cache remote in the mix like I had before. I have run into an issue writing/uploading files, though. I dropped in 3 files in quick succession, and the first two uploaded correctly, but the third threw an error, and then the mount seemed to hang (or at least any process trying to access that file/directory was non-responsive). I aborted the FUSE connection, restarted the mount, and the file uploaded and everything seems happy now.
Unfortunately I didn't have debug logging on at the time, but here's what I got at the INFO level. This is a Google Drive remote with a crypt layer on top, which is exclusively accessed via this mount (hence the long timeouts/cache/checks) with the following mount options:
Of course, there are some other things in here possibly affecting things. I've got attr-timeout down to 0 after the fix in https://github.com/rclone/rclone/issues/4104, and I've also re-enabled http2 for Google Drive. Neither of these options seemed to cause any issues with the cache remote for the last few weeks.
Okay, finally, the one log message I got about this:
2020/06/14 17:57:58 ERROR : REDACTED.mp4: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails&supportsAllDrives=true&uploadType=resumable&upload_id=AAANsUn0YL2KoZR5_fU1GKqvfTBF7pD_Ck2B_YUAqp3kj78cO-dFOHpPAX05UOgTaApuHzqqbwCTjfKRZJuzuPXSrWA": context canceled
While the file is in use it isn't eligible for deletion, even if your cache is over quota. When it is closed it becomes eligible, so in this case the file would be deleted at the next cache-clearing run.
Potentially rclone could delete an in-use file, and it would immediately be recreated empty. That might be something to consider doing later. So if the cache went over quota it would drop the whole file.
I'm glad it uploaded on restart. That is a new feature!
Upload aborted because the context was cancelled is either a bug or fallout from a previous problem. Did you maybe modify the file, wait more than 5 seconds so it started uploading, then open it and modify it again? That would cause the upload to be aborted. It should have been retried, though.
If you can cause the problem again with debugging enabled, that would be great. If the mount locks up then kill -QUIT it to get a full trace - it will be apparent from that why it locked up.
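For anyone following along: sending SIGQUIT to a Go program such as rclone makes it dump the stacks of every goroutine before exiting, which is what shows where things are blocked. The pidof call below assumes a single rclone process is running:

```shell
# Dump goroutine stacks from a hung rclone mount; the trace goes to
# rclone's stderr (or its log file, if one is configured).
kill -QUIT "$(pidof rclone)"
```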
The file was post-processed, moved to the mount, and then Plex would have started scanning it immediately. I don't believe there would have been another write operation in there, but I can't be sure.
I actually did just cause this again by processing a single file, so there does seem to be something strange happening here with this config. I'm going to try this with a test mount with debug logging on, but won't be able to do it right now. I'll report back with full debug logs in a few hours hopefully.
Success! I replicated this by immediately attempting to read the file I had just copied into the mount (in this case by just reading the file with ffmpeg, similar to what Plex would do). That seems to trigger a change in the file modification time, which cancels the upload; it then seems to try to open the file read-only, but that's where things hang.
Yes, that's my understanding as well, but when a file is evicted from the cache, eviction is based on the oldest file, not the oldest chunk, and it removes the entire file, not just some chunks. So if a file is opened for read more than once, eviction time will be based on the initial open of the file, regardless of whether those earlier chunks are still being accessed. At least that's my understanding from @ncw's comments in this thread.
The issue (as I understand it) is that it's not storing individual chunks, but individual files. Therefore it has no per-chunk metadata (and it would be overly expensive to maintain that metadata), so the only evictable object is the file, and the only real metadata it has for deciding that is the file system's last access time.
It would have to store the files as individual chunk files to enable per-chunk eviction (personally I think that's the way it should go, but I don't have time to even look at how to do that ATM, so beggars can't be choosers).
The files are evicted based on last access time not creation time. So the first file to be evicted will be the one which hasn't been accessed for the longest time.
The whole file (all the chunks in the sparse file) will be evicted at that point.
We do have a record of exactly what the chunks are in the file so it would be possible to evict chunks of the file based on their access time. However that is a lot more complex and I don't want to go there unless absolutely necessary!
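To make the whole-file eviction concrete, here is a shell sketch of the policy as described above. This is not rclone's actual code: evict_to_quota, its arguments, and the quota handling are invented for illustration (rclone does this internally when --vfs-cache-max-size is set), and it relies on GNU find:

```shell
# Sketch only: evict whole files from a cache directory, least-recently-
# accessed first, until the directory fits under a quota (in KB).
evict_to_quota() {
    cache_dir=$1; quota_kb=$2
    while [ "$(du -sk "$cache_dir" | cut -f1)" -gt "$quota_kb" ]; do
        # Pick the file with the oldest access time (GNU find's -printf).
        victim=$(find "$cache_dir" -type f -printf '%A@ %p\n' \
                 | sort -n | head -n 1 | cut -d' ' -f2-)
        [ -n "$victim" ] || break
        rm -f -- "$victim"   # the whole sparse file goes, not just chunks
    done
}

# Example: trim a hypothetical VFS cache directory to 20 GB:
# evict_to_quota "$HOME/.cache/rclone/vfs" 20971520
```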
Sleeping for 8 seconds puts me at the point where the file is uploading; then the first invocation of file interrupts the upload, but it's still able to read the file. Waiting for a second and then attempting to read the file again is where there's a deadlock.
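The timing above can be sketched as a repro script. Paths are placeholders, I'm assuming reads via ffmpeg as in my earlier attempt, and the 5-second write-back delay before uploads start is why sleeping 8 seconds lands mid-upload:

```shell
# Placeholder paths; /mnt/remote is the rclone mount.
cp big.mp4 /mnt/remote/big.mp4
sleep 8                                   # past the ~5 s write-back delay: upload in flight
ffmpeg -i /mnt/remote/big.mp4 -f null -   # first read: interrupts the upload, still works
sleep 1
ffmpeg -i /mnt/remote/big.mp4 -f null -   # second read: this one deadlocks
```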