Help needed optimizing mount settings

Hi,

I first want to say that the excellent work on the vfs-write-back setting has solved my overall performance issue with the rclone mount setup.
It improved performance by almost 80% in my setup.

Currently I am still testing some things and am wondering whether the following settings make sense. I can probably drop some of them because they are duplicates or are simply ignored by the mount command.
It would also be good to hear if you have ideas for anything worth adding for potential performance or stability.

Enabled the following settings:
--cache-dir /mnt/resource/rclonetmp
--cache-chunk-total-size 200g
--cache-chunk-path /mnt/resource/rclonetmp
--track-renames
--fast-list
--dir-cache-time 24h
--cache-tmp-wait-time 5s
--attr-timeout 4m
--cache-tmp-upload-path /mnt/resource/rclonetmp
--cache-writes
--vfs-write-back 5s
--vfs-cache-mode full
--transfers=128
--azureblob-chunk-size 100m
--checksum
--update
--checkers=128
--multi-thread-cutoff=32m
--multi-thread-streams=32
--vfs-cache-max-age 168h
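
For reference, this is roughly how the flags are combined on the command line (the remote name azblob:, the container, and the mount point below are placeholders for my real ones):

rclone mount azblob:container /mnt/blob \
  --cache-dir /mnt/resource/rclonetmp \
  --vfs-cache-mode full \
  --vfs-write-back 5s \
  --vfs-cache-max-age 168h \
  --dir-cache-time 24h \
  --attr-timeout 4m \
  --transfers=128 \
  --azureblob-chunk-size 100m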

Thanks a lot.

hello,
when you posted, you should have been asked for more information.

Yes, sorry.

What is your rclone version (output from rclone version)

1.53.2

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux 64bit

Which cloud storage system are you using? (eg Google Drive)

Azure Blob

that is a lot of flags, be sure you need each and every one.
often the fewer flags the better.

these do nothing on a mount
--checkers=128
--fast-list
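
those two are meant for bulk transfer commands like sync or copy, not for mount. for example (the paths and remote name here are just placeholders):

rclone sync /srv/data azblob:container --fast-list --checkers=128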

i start with the simplest command and then if needed add flags.

you seem to be using the cache backend, which is not recommended.
https://rclone.org/cache/#status

I reduced it to these options, and I also think that I removed the cache backend so that I only rely on the VFS:
--daemon
--cache-dir /mnt/resource/rclonecache
--track-renames
--dir-cache-time 24h
--attr-timeout 4m
--vfs-write-back 5s
--vfs-cache-mode full
--transfers=32
--azureblob-chunk-size 100m
--checksum
--update
--multi-thread-cutoff=32m
--multi-thread-streams=32
--vfs-cache-max-age 168h
--vfs-cache-max-size 200g
--allow-other
--allow-non-empty

Perhaps you can give me some more tips.
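
By the way, to make sure the cache backend is really gone, inspecting the remote definition should work; a leftover cache remote would show up with type = cache (remote names below are placeholders):

rclone config show
# a leftover cache backend would look something like:
# [cached]
# type = cache
# remote = azblob:container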

that is a lot of flags, be sure you need each and every one.

--allow-non-empty almost always a bad idea.

not sure but i think these do nothing on a mount
--transfers=32
--checksum
--track-renames

are you sure you need?
--update

[quote="asdffdsa, post:6, topic:20065"]
--allow-non-empty almost always a bad idea
[/quote] It's needed for me and does more good than harm.

[quote="asdffdsa, post:6, topic:20065"]
not sure but i think these do nothing on a mount
--transfers=32
--checksum
--track-renames
[/quote] Good question

[quote="asdffdsa, post:6, topic:20065"]
are you sure you need?
--update
[/quote] Nope, but I thought it would be good in case there are time differences between the local system and the blob storage.

How so? If you over-mount something, it hides the things underneath, and you don't know what is writing where, which causes quite a number of issues. If you have a use case, I'd love to hear why you have files there, because once rclone mounts on top of them, you can't see them anymore.

The --transfers option on a mount controls the number of parallel uploads from the writeback cache - so it only affects uploads, and only when using cache mode writes or full.
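
A minimal sketch of the effect (remote and mount point are placeholders): with this command, up to 32 files from the VFS cache are uploaded to the remote at the same time, while reads are unaffected:

rclone mount azblob:container /mnt/blob --vfs-cache-mode full --vfs-write-back 5s --transfers 32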

If rclone stops, as it already has a few times due to memory issues or other errors, the application just keeps writing to the local storage and does not stop at all. So far so good.
It's a clustered application, and it just says: hey, where are my file copies? Then it replicates the files to the local disk once again. Again, good.

The local disk only has a few hundred GB and fills up in one or two days. So when the alerting tells me rclone has stopped, I try to find out why and then start rclone OVER the old local files. The application says: hey, where are my file copies on this node? OK, I found some old ones on the blob storage and some are missing. So it compares all the files and places new copies of the missing files on the blob.

If I do not use this flag, I have to remove the files first. When I remove the files, the application says: hey, I am missing my copies here, so let's replicate... I never get to the point quickly enough to clear the files and start rclone. So I would have to stop the application, which results in quite a long restart process, because the files then need to be redistributed to all the other nodes, not only the few that might be missing when one mount stops.

OK, so it seems to be better to remove the transfers setting so that I also get unlimited uploads.

I'm not following. Say you have a mount at /rclone and something writes to it before rclone is mounted; it writes to the local disk. If you then over-mount it (using non-empty), the files that process was writing are hidden, and no new process can see them on /rclone until you stop rclone, which leads to file inconsistency. Say for example you had a text file and you write something to it, you mount rclone over the top, and your file is gone; you write some new stuff to the same file and you lose your changes, etc.
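
A quick way to see the hiding effect for yourself (the mount point, remote, and file name here are just made up for illustration):

# something writes locally before the mount is up
echo "important" > /rclone/note.txt
# over-mount the non-empty directory
rclone mount azblob:container /rclone --allow-non-empty --daemon
ls /rclone          # note.txt is gone - it is hidden under the mount
fusermount -u /rclone
ls /rclone          # note.txt is back, with whatever state it had before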

If your mount is stopping due to an issue, you should fix the root cause rather than over-mounting and hiding things. I can't imagine the data inconsistency across your nodes with the current approach you've got going, so I say good luck!

That's right, and the application will just replicate new consistent copies there. All good.

As I wrote, I try to find out why it stopped, try to solve it, and start it again. If a cluster could not handle file inconsistencies, I would be worried :slight_smile: If a disk runs full and the files are corrupt and get replicated from the corrupt node to all the others, what kind of application cluster would that be? Trust me, it's working. Thank you for your help.

If you don't specify --transfers you will get the default value, which is 4. If you enable verbose output (-v or -vv) you should be able to see how many uploads are running at any time, if you want to verify.
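
For example, a sketch of running the mount with verbose logging so the uploads show up in the log (the remote, mount point, and log path are placeholders):

rclone mount azblob:container /mnt/blob --vfs-cache-mode full --vfs-write-back 5s -vv --log-file /var/log/rclone.log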

Until a hidden copy replicates, as these things happen. It isn't a game of if but when, so it's best to clean it up from the start and fix the root cause rather than band-aid over it. Like you said, it's your stuff and your data, so you can do as you see fit.

That won't happen. It would take some time to go into detail here. Thanks for your commitment.

If I had a dollar for every engineer I've spoken to over the years that uttered that :slight_smile:

