Union - File not ending where it should on 1st read (but does on 2nd)

What is the problem you are having with rclone?

If I remove text from a text file outside of my mounted union (e.g. browse to the actual drive location instead of the virtual drive created by rclone), the file opens with extra spaces matching the number of removed characters. If I then close this file and reopen it, the error is corrected.

What is your rclone version (output from rclone version)

1.53.3 and 1.53.4 (tried both)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10 x64

Which cloud storage system are you using? (eg Google Drive)

None. All local drives in a union.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

start /B rclone mount Data_Pool: z: --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 10M --cache-dir a:\rclone_Cache\ --vfs-cache-mode full --vfs-cache-max-age 1s --vfs-cache-max-size 5G --attr-timeout 1s --poll-interval 1s

I expect to be informed that these options are not great.

The rclone config contents with secrets removed.

[Data_Pool]
type = union
upstreams = d:\SnapRAID_Pool\rclone_Pool\ e:\SnapRAID_Pool\rclone_Pool\ f:\SnapRAID_Pool\rclone_Pool\ g:\SnapRAID_Pool\rclone_Pool\ h:\SnapRAID_Pool\rclone_Pool\ i:\SnapRAID_Pool\rclone_Pool\ j:\SnapRAID_Pool\rclone_Pool\ k:\SnapRAID_Pool\rclone_Pool\ l:\SnapRAID_Pool\rclone_Pool\ m:\SnapRAID_Pool\rclone_Pool\
create_policy = eplus
search_policy = newest
cache_time = 1

A log from the command with the -vv flag

Log File

See link.

This is something to do with either the metadata caching or the file caching (enabled with --vfs-cache-mode full).

I suspect the file caching since you have --vfs-cache-max-age 1s

Can you try replicating this without the union, i.e. just mount c:\adirectory, so we can rule some things out?

Sure, I will try this.

In the meantime I think I worded my issue poorly.

If I edit the text file via the normally mounted location in Windows (e.g. `i:\SnapRAID_Pool\rclone_Pool\Test1.txt`), but then open it from the location mounted by rclone (`z:\Test1.txt`), the file has extra spaces on the end until I open it once, close it, and reopen it.

I also suspect the caching has something to do with it. I added caching early in my testing because I was getting errors about caching in the console without it.

I tried just mounting with:
rclone mount I:\SnapRAID_Pool\rclone_Pool z:

If I then try to modify the file (to add some extra lines for testing) through Z:, I get errors about needing to cache writes to do this. If I instead modify the file through I: directly and then open it through Z:, I get a bunch of errors about an unexpected EOF and the file will not open in Notepad, so I have to add my extra lines before mounting or the file cannot be opened. If I then mount it, remove the lines through I:, and open it through Z:, I get the same error:
2021/01/25 09:00:04 ERROR : Test1.txt: ReadFileHandle.Read error: unexpected EOF
2021/01/25 09:00:04 ERROR : IO error: unexpected EOF

Can you try this with the other flags you were using initially? Then it is a like-for-like comparison.

Ok. I used this:

rclone mount I:\SnapRAID_Pool\rclone_Pool z: --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 10M --cache-dir a:\rclone_Cache\ --vfs-cache-mode full --vfs-cache-max-age 1s --vfs-cache-max-size 5G --attr-timeout 1s --poll-interval 1s -vv --log-file=testlog.txt

I got exactly the same results as with the union. I also see in the log file that a bunch of the above settings are not supported for local backends (several were added while trying to resolve the issue).

PasteBin log of this.

OK that is progress - so this doesn't involve the union.

I think what is happening is that the metadata for the object is still in the directory cache with the old length when it gets opened.

Try adding --dir-cache-time 10s and make sure you wait more than 10s after the external edit of the file before trying to edit it through the mount.

I think this should work. You can set the --dir-cache-time low to avoid this problem - directory caching isn't winning you a lot since all your disks are local.
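Putting that suggestion together with the command from the start of the thread, the adjusted mount might look something like this (just a sketch; the 10s value is only the example given above, and the flags already noted as unnecessary for local backends are dropped):

```
start /B rclone mount Data_Pool: z: --cache-dir a:\rclone_Cache\ --vfs-cache-mode full --vfs-cache-max-age 1s --vfs-cache-max-size 5G --dir-cache-time 10s
```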

After some testing it does look like that fixed it. I wouldn't think it'd be related since it wasn't a directory issue but I guess I was mistaken.

While we're here can you comment on my other settings at all? I'm just using this as a file server combining all the storage drives into one. It's mostly the chunk size settings and the cache times I'm concerned about.

Thank you so much for your help!


In general, editing things that a mount with --vfs-cache-mode full points to from outside the mount is asking for trouble until the item has fallen out of the directory cache.

I'll put comments on the flags:

--vfs-read-chunk-size 1M   - not needed for local transfers
--vfs-read-chunk-size-limit 10M - not needed for local transfers
--cache-dir a:\rclone_Cache\
--vfs-cache-mode full
--vfs-cache-max-age 1s - this is up to you but setting this short makes me think you could use `--vfs-cache-mode writes` instead of `full`
--vfs-cache-max-size 5G
--attr-timeout 1s - this is OK but you'll probably find you don't need it
--poll-interval 1s - this is not used for local backends
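Taking those comments together, a trimmed-down version of the original command might look something like this (a sketch only; whether `writes` mode is enough depends on the read errors mentioned earlier, and the --dir-cache-time value is the example from above):

```
start /B rclone mount Data_Pool: z: --cache-dir a:\rclone_Cache\ --vfs-cache-mode writes --vfs-cache-max-size 5G --dir-cache-time 10s
```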

I think I switched to full caching because I was getting errors on reads without it but maybe it was actually writes and I am remembering wrong. I will try it out.

Maybe I should mention that the cache dir is a RAM disk, if that makes any difference. With the chunk size settings, I was thinking it would read and write in and out of the cache in smaller chunks, preventing the cache from filling up as much (I only have 16GB of RAM in the system and assigned a 10GB maximum to the RAM disk, which allocates dynamically).

The vfs chunk sizes are those requested over http and don't bear any relation to those saved on disk.

Good to know. Thank you.

Are there any other remote types it doesn't apply to?

It applies to all remotes, but it is all about decreasing the transaction size, which will make very little difference to local remotes. It will affect all other remotes though.


Back with another question... maybe I should make a new topic?

I am wondering about the behaviour of the policies.

I was using EPLUS but ran into an issue where one directory was larger than the drive: it completely used up all the space on the drive and then started giving a pile of out-of-space errors. I had assumed it would prioritize the existing-path policy and then move to the next drive once the drive filled up, but I guess I was wrong there?

I have now moved to LUS. But now I'm wondering if the used space measurement is a percentage or the actual used space. If it is actual used space I may end up in a situation where a smaller drive is completely full but still has less used space than a larger drive. Would it move on to the larger drive with space available or would I get out of space errors again?

I am avoiding using MFS as this seemed to prioritize my larger drives and I'd prefer to use the smaller drives first but maybe I am mistaken there and it just seemed that way.
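For reference, the switch described above is just a one-line change to the create policy in the union config (assuming the same remote as earlier in the thread, with the upstream list elided):

```
[Data_Pool]
type = union
upstreams = d:\SnapRAID_Pool\rclone_Pool\ ...
create_policy = lus
search_policy = newest
```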

Unfortunately there isn't feedback from the disk actually getting full - rclone hasn't standardised the disk full messages across backends.

So if there is an existing file then it will keep writing it to the place where it was first found.

It is in actual bytes.

I think it should move on to the next disk.

MFS will store stuff on the disks with the most free space, which by default will be your larger disks.
