Explicitly limit maximum size of remotes

I'm going to add KBFS (the Keybase filesystem) to a list of remotes.
The FS is mounted via FUSE at /keybase.
I have created teams to, ahem, upload files there, but df -h reports the wrong info:

$ df -h /keybase/team/redacted/
Filesystem      Size  Used Avail Use% Mounted on
/dev/fuse       250G  161M  250G   1% /run/user/1000/keybase/kbfs

$ rclone about /keybase/team/redacted/
Total:   250 GiB
Used:    160.332 MiB
Free:    249.843 GiB

According to their docs, a team can upload at most 100 GiB, while a user has a 250 GiB cap. Also, the team (although redacted) only holds ~60 MiB, while 160 MiB is reported, which is the usage of the user itself.

This is probably because the filesystem can't return this info at subdirectory level (unlike rclone about, which can). But if I add it to the union backend, it will surely ignore the team's quota, because it only reports the user's usage and quota.
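This is easy to reproduce outside rclone: statfs/statvfs answers at the filesystem level, not per directory, so any path under the same mount returns the same totals. A minimal Python check (assuming a POSIX system):

```python
import os
import tempfile

# statvfs reports filesystem-level totals, so a directory and its
# subdirectory on the same mount return identical numbers.
with tempfile.TemporaryDirectory() as top:
    sub = os.path.join(top, "sub")
    os.mkdir(sub)
    a = os.statvfs(top)
    b = os.statvfs(sub)
    print(a.f_blocks == b.f_blocks and a.f_frsize == b.f_frsize)
```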
The difference from Manually specifying maximum disk space · Issue #3270 · rclone/rclone · GitHub is that that topic is also about configuring the limit, but on the VFS rather than on the backends themselves.
It would be nice to be able to use the actual usage info from the filesystem and set some virtual total size, to prevent breaking backends that rely on it.

Why don't I just create a backend? Because their SDK looks complicated and docs are hard to find.

P.S. rclone size obviously works:

$ rclone size /keybase/team/redacted/
Total objects: 192 (192)
Total size: 53.673 MiB (56279825 Byte)
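(rclone size gets accurate numbers because it walks the whole tree and sums object sizes rather than asking the filesystem. Roughly this idea, sketched in Python with a made-up function name:)

```python
import os

def tree_size(path):
    """Count objects and sum their sizes by walking the tree --
    the same idea as `rclone size` (illustrative sketch only)."""
    count = total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            count += 1
            total += os.path.getsize(os.path.join(root, name))
    return count, total
```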

What should rclone do with this info? I'm not sure I'm following you.

I mean a way to override the result of About()
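For illustration, the override could be a thin wrapper around the backend's About() that clamps the reported total to a configured cap while keeping the real usage. A Python sketch with hypothetical class names (not rclone's actual API):

```python
from dataclasses import dataclass

GiB = 1 << 30
MiB = 1 << 20

@dataclass
class Usage:
    total: int
    used: int
    free: int

class MountAbout:
    """Reports mount-level numbers, like KBFS does via statfs (illustrative)."""
    def about(self):
        return Usage(total=250 * GiB, used=160 * MiB, free=250 * GiB - 160 * MiB)

class CappedAbout:
    """Wraps another backend and clamps the reported total to a configured
    cap, recomputing free from the real used figure (hypothetical helper)."""
    def __init__(self, inner, cap):
        self.inner, self.cap = inner, cap

    def about(self):
        u = self.inner.about()
        total = min(u.total, self.cap)
        return Usage(total=total, used=u.used, free=max(total - u.used, 0))
```

With a 100 GiB cap, the wrapper would report the team quota while the used figure stays whatever the filesystem actually reports.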

There is this

  --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size

Which I think is what you mean. However, it is very inefficient, as it effectively runs rclone size every time you do an About() call.
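If the figure doesn't need to be fresh on every call, one way to blunt that cost would be to cache the expensive result for a while. A sketch of the idea (hypothetical helper, not something rclone provides):

```python
import time

class CachedAbout:
    """Caches the result of an expensive compute() (e.g. one that walks
    the whole tree) for `ttl` seconds -- an illustrative sketch only."""
    def __init__(self, compute, ttl=60.0, clock=time.monotonic):
        self.compute, self.ttl, self.clock = compute, ttl, clock
        self._value, self._at = None, None

    def about(self):
        now = self.clock()
        if self._value is None or now - self._at >= self.ttl:
            # Recompute only when the cached value has expired.
            self._value, self._at = self.compute(), now
        return self._value
```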