If I remove text from a text file outside of my mounted union (e.g. browse to the actual drive location instead of the virtual drive created by rclone), the file opens with extra spaces matching the number of removed characters. If I then close this file and reopen it, the error is corrected.
What is your rclone version (output from rclone version)
1.53.3 and 1.53.4 (tried both)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 10 x64
Which cloud storage system are you using? (eg Google Drive)
None. All local drives in a union.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
If I edit the text file using the normally mounted location in Windows (e.g. `i:\SnapRAID_Pool\rclone_Pool\Test1.txt`), but then open it from the location mounted by rclone (`z:\Test1.txt`), the file has extra spaces on the end until I open it once, close it, then reopen it.
I also suspect the caching has something to do with it. I added caching early in my testing because I was getting errors about caching in the console without it.
I tried just mounting with:
rclone mount I:\SnapRAID_Pool\rclone_Pool z:
If I then try to modify the file through Z: (to add some extra lines for testing), I get errors saying that caching of writes is needed for this. If I instead modify the file through I: directly and then open it through Z:, I get a bunch of errors about unexpected EOF and the file will not open in Notepad. So I have to add my extra lines before mounting, or the file cannot be opened. If I then mount, remove the lines through I:, and open through Z:, I get the same error.
2021/01/25 09:00:04 ERROR : Test1.txt: ReadFileHandle.Read error: unexpected EOF
2021/01/25 09:00:04 ERROR : IO error: unexpected EOF
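For reference, the full cached mount I had been running (the one from my original post) looks roughly like this. `R:\rclone-cache` stands in for my RAM-disk cache directory, so treat the exact paths as a sketch of my setup rather than something to copy:

```shell
rclone mount I:\SnapRAID_Pool\rclone_Pool Z: ^
  --vfs-cache-mode full ^
  --cache-dir R:\rclone-cache ^
  --vfs-read-chunk-size 1M ^
  --vfs-read-chunk-size-limit 10M ^
  --vfs-cache-max-age 1s ^
  --attr-timeout 1s ^
  --poll-interval 1s
```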
After some testing it does look like that fixed it. I wouldn't think it'd be related since it wasn't a directory issue but I guess I was mistaken.
While we're here can you comment on my other settings at all? I'm just using this as a file server combining all the storage drives into one. It's mostly the chunk size settings and the cache times I'm concerned about.
In general, editing files that a mount with --vfs-cache-mode full points at anywhere other than through the mount itself is asking for trouble until the item has fallen out of the directory cache.
I'll put comments on the flags
`--vfs-read-chunk-size 1M` - not needed for local transfers
`--vfs-read-chunk-size-limit 10M` - not needed for local transfers
`--vfs-cache-max-age 1s` - this is up to you, but setting this short makes me think you could use `--vfs-cache-mode writes` instead of `full`
`--attr-timeout 1s` - this is OK but you'll probably find you don't need it
`--poll-interval 1s` - this is not used for local backends
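So, putting those comments together, a trimmed-down mount could be as simple as this (a sketch using your paths - add the other flags back if you find you need them):

```shell
rclone mount I:\SnapRAID_Pool\rclone_Pool Z: --vfs-cache-mode writes
```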
I think I switched to full caching because I was getting errors on reads without it but maybe it was actually writes and I am remembering wrong. I will try it out.
Maybe I should mention that the cache dir is a RAM disk, in case that makes any difference. With the chunk size settings I was thinking rclone would read and write in and out of the cache in smaller chunks, preventing the cache from filling up as much (I only have 16GB of RAM in the system and assigned a 10GB maximum to the RAM disk; it allocates dynamically).
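If it helps, I was thinking I could cap the cache below the RAM disk's ceiling with something like this - just my guess from reading the docs, and `R:\rclone-cache` is where I point the cache:

```shell
rclone mount I:\SnapRAID_Pool\rclone_Pool Z: ^
  --vfs-cache-mode writes ^
  --cache-dir R:\rclone-cache ^
  --vfs-cache-max-size 8G
```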
Back with another question... maybe I should make a new topic?
I am wondering about the behaviour of the policies.
I was using EPLUS but ran into an issue where one directory was larger than the size of the drive: it completely used up all the space on the drive and then started giving a pile of out-of-space errors. I had assumed it would prioritize the existing-path part of the policy and then move to the next drive once the drive filled up, but I guess I was wrong there?
I have now moved to LUS, but now I'm wondering whether the used-space measurement is a percentage or the actual used space. If it is actual used space, I may end up in a situation where a smaller drive is completely full but still has less used space than a larger drive. Would it move on to the larger drive with space available, or would I get out-of-space errors again?
I am avoiding using MFS as this seemed to prioritize my larger drives and I'd prefer to use the smaller drives first but maybe I am mistaken there and it just seemed that way.
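For reference, my union remote in rclone.conf looks roughly like this. The remote name and drive paths are placeholders, and I've only set create_policy since that's what my question is about:

```
[pool]
type = union
upstreams = D:\data E:\data F:\data
create_policy = lus
```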