Combine Mount Options Query

Hello!

I'm looking for advice on whether options on a combine mount are applied globally or inherited by each child mount, and therefore which of the two following approaches would be the more 'optimal' (if there is such a thing!).

Setup
I have a half-dozen Dropbox drives configured, each with its own App ID. I have a separate SSD that I will use for VFS caching. Hosted server with a 10Gb link; content will be a mix of HD and 4K files, so a range of sizes being uploaded and accessed. Total size has recently breached 100TB across many thousands of individual files.

I could either:

  1. Run as a Combine mount
  • Only one systemd unit required, slightly easier to manage/maintain.
  • I can set --vfs-cache-max-size to 90% of the drive and call it a day, very simple and efficient utilisation of the space.
  • Mount options such as --tpslimit can only be set at the top level, so they apply globally across the whole combine mount rather than per child mount — am I therefore limiting max potential utilisation?
  2. Run as individual mounts
  • Multiple systemd units, minor difference in management.
  • Would have to calculate and set --vfs-cache-max-size for each mount individually, which makes it much harder to cache content for longer locally without the mounts stepping on each other's disk usage.
  • Can set mount options on each drive for best theoretical limits available. Could help during Plex scans and so on as library is very large.
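For context, option 1 could be sketched in the rclone config file roughly like this (remote names, client IDs, and the cache size are all hypothetical — a sketch, not my actual config):

```ini
# Six Dropbox remotes, each with its own App ID:
[dbx1]
type = dropbox
client_id = <app 1 id>
client_secret = <app 1 secret>

# ... dbx2 through dbx6 defined the same way ...

# A combine remote exposing each Dropbox remote as a subdirectory:
[media]
type = combine
upstreams = dbx1=dbx1: dbx2=dbx2: dbx3=dbx3: dbx4=dbx4: dbx5=dbx5: dbx6=dbx6:
```

`media:` could then be mounted with a single systemd unit, e.g. `rclone mount media: /mnt/media --vfs-cache-mode full --vfs-cache-max-size 900G` (the 900G figure is illustrative — whatever 90% of the SSD works out to).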

If my understanding is correct then they have their own advantages/drawbacks but I may be missing an aspect that swings it one way or the other, so looking for clarification or suggestions from the brain trust!

Thanks!

Your analysis looks correct to me.

--tpslimit is a global setting in rclone - it affects all the HTTP transactions that rclone makes.

However, the dropbox backend has an internal pacer minimum-sleep setting which isn't currently configurable.

The beta below makes it configurable with --dropbox-pacer-min-sleep or the config file setting pacer_min_sleep.

Setting it to 83ms should be the same as --tpslimit, more or less.

This setting will then be per backend. There is no burst available in this pacer at the moment (equivalent to --tpslimit-burst).
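The 83ms figure is just the inverse of a target transaction rate — a quick sanity check of the arithmetic (assuming the intended rate is around 12 transactions per second, which is what 83ms implies):

```shell
# A minimum sleep of 83 ms between transactions caps the request rate
# at roughly 1 / 0.083 s, i.e. about 12 transactions per second,
# which is why pacer_min_sleep = 83ms approximates --tpslimit 12.
awk 'BEGIN { printf "%.1f\n", 1 / 0.083 }'  # prints 12.0
```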

v1.63.0-beta.7036.7e69eb2f4.fix-dropbox-min-sleep on branch fix-dropbox-min-sleep (uploaded in 15-30 mins)

Thank you! I'll give that version a go this afternoon.

Am thinking that running with the combine mount and pacer_min_sleep specified in each child backend's section of the config file will get me in the right direction.
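Concretely (remote names hypothetical), I'm assuming that means adding the new setting to each Dropbox section of the config file:

```ini
# A sketch with hypothetical remote names; repeat for each child remote.
[dbx1]
type = dropbox
pacer_min_sleep = 83ms

[dbx2]
type = dropbox
pacer_min_sleep = 83ms

# ... and so on for the remaining Dropbox remotes ...
```

Each backend would then pace its own transactions independently, rather than sharing one global --tpslimit.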

Yes, that is the idea.

The pacer_min_sleep isn't quite the same logic as --tpslimit but it should be good enough I think.

For monitoring, could I double-check that if there were any issues they would likely pop up as a "too_many_requests" at the NOTICE log level, which I could grep for in the log file?

I usually run at --log-level INFO, so if that's correct I'd not need to adjust further; otherwise, if I do need to switch to DEBUG for a while, I'll ensure logrotate keeps an eye on it as it would grow a bit faster than usual.
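As a sketch of the monitoring approach (the log path and exact message text here are assumptions, not taken from a real log):

```shell
# Simulate a couple of log lines to show what the grep would match;
# in practice this would point at the real rclone log file.
cat > /tmp/rclone-sample.log <<'EOF'
2023/06/01 12:00:00 INFO  : film.mkv: Copied (new)
2023/06/01 12:00:01 NOTICE: too_many_requests/: Too many requests or write operations.
EOF
grep -c 'too_many_requests' /tmp/rclone-sample.log  # prints 1
```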

NOTICE log should be fine for monitoring, I think.

Perfect, thanks.

I've merged this to master now which means it will be in the latest beta in 15-30 minutes and released in v1.63

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.