Does --filter-from impact vfs/refresh performance?

What is the problem you are having with rclone?

Does excluding folder trees using --filter-from improve vfs/refresh performance when mounting? I tested vfs/refresh with and without filtering, and the refresh times were similar. Does filtering only affect the rendered folder tree, or does it also prevent unnecessary API requests and thereby reduce network load?
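
For context, the filter file I tested with looks roughly like this (folder names are made up; with only exclude rules, everything else is included by default):

- /Archive/**
- /Old Backups/**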

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 4.4.302+ (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.6
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone rc vfs/refresh --rc-addr 127.0.0.1:5574 --fast-list --timeout 300m dir=foo recursive=true

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

Paste config here

A log from the command that you were trying to run with the -vv flag

Paste log here

I'd think of it like this.

You have a filter.
You have a list.

To filter the list, you first have to fetch the full list, because only then can you see what matches the filter and whether something exists or not.
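
As a rough illustration (not rclone's actual internals; remote:foo and the Archive folder are placeholders), the effect is like listing first and filtering afterwards:

rclone lsf remote:foo -R --files-only | grep -v '^Archive/'

The exclusion happens only after the full recursive listing has already been paid for in API calls.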

You can remove --fast-list, as recursive=true already does that; see the trimmed command below.
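
So the refresh call can just be (same address and dir as before):

rclone rc vfs/refresh --rc-addr 127.0.0.1:5574 --timeout 300m dir=foo recursive=true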

I tested the impact of --filter-from on vfs/refresh again using --dump=headers -vv, and it turned out the API requests with and without --filter-from are identical.
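
To reproduce the comparison without the mount, a rough proxy is to count the dumped Drive list requests from a recursive listing (remote:foo and filters.txt are placeholders, and I'm assuming the Drive v3 files endpoint is what shows up in the dumped request lines):

rclone lsf remote:foo -R --dump headers -vv 2>&1 | grep -c 'GET /drive/v3/files'
rclone lsf remote:foo -R --filter-from filters.txt --dump headers -vv 2>&1 | grep -c 'GET /drive/v3/files'

In my test the request counts matched either way.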

I'll have to check the source to see whether a filtered-out folder tree can be skipped during a refresh.

What problem are you trying to solve? The Google API is basically unlimited on API hits.

Just trying to optimize vfs/refresh using filters. The Google Drive API may be unlimited, but many users in my group share the same API quota through a single service account file. Also, polling is not supported because our files are linked from multiple shared drives. So I thought we could save API usage on listing new content if the refresh obeyed the filters.

Are you seeing API issues with retries, though?

In most cases it can be managed by the pacer, but sometimes it slows things down for up to a couple hundred minutes.

Why not create more service accounts per user? Have them use their own client ID/secret? A couple hundred minutes sounds like you're really hitting the per-user cap if you're being exponentially throttled back that far.

Actually, we already have several more service accounts running, but that is still not enough to accommodate all the users in our group. The number of client IDs equals the number of service accounts. Anyway, more service accounts are not allowed, for reasons.

Client IDs/secrets and service accounts are different things, and you shouldn't be mixing them.

A service account has its own client ID/secret built in, so the remote's config should not contain both.
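
A minimal Drive remote using a service account would look something like this (the path and shared drive ID are placeholders):

[gdrive]
type = drive
scope = drive
service_account_file = /path/to/service-account.json
team_drive = 0ABCdefGHIjklMNoPQ

Note there are no client_id or client_secret lines; the credentials inside the JSON file are used instead.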

I have no idea what that means.