Performance Degradation between v1.56.2 and v1.57.0 when copying to Google Drive using "--max-age/--min-age" AND "--fast-list"

What is the problem you are having with rclone?

After upgrading from rclone v1.56.2 to v1.57.0, I noticed that the time rclone takes to gather the list of files to copy to Google Drive is significantly longer when using the "--max-age/--min-age" filters and the "--fast-list" flag.

I assume this is a result of the following change in the v1.57.0 release notes:
"Speed up directory listings by constraining the API listing using the current filters" (#5023).

What is your rclone version (output from rclone version)

rclone v1.56.2

  • os/version: ubuntu 16.04 (64 bit)
  • os/kernel: 4.4.0-210-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.8
  • go/linking: static
  • go/tags: none

AND

rclone v1.57.0

  • os/version: ubuntu 16.04 (64 bit)
  • os/kernel: 4.4.0-210-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive (via crypt)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 --user-agent myagent --drive-chunk-size 128M --fast-list copy /local/directory/path/ gcrypt:destdir/

The rclone config contents with secrets removed.

Not config related.

A log from the command with the -vv flag

bin$ ./rclone --version
rclone v1.56.2
- os/version: ubuntu 16.04 (64 bit)
- os/kernel: 4.4.0-210-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16.8
- go/linking: static
- go/tags: none

----- v1.56.2 WITH FAST-LIST -----
bin$ ./rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 --user-agent myagent --drive-chunk-size 128M --fast-list copy /local/directory/path/ gcrypt:destdir/
2021/11/17 16:14:00 NOTICE:
Transferred:   	          0 / 0 Byte, -, 0 Byte/s, ETA -
Elapsed time:         5.5s

----- v1.56.2 WITHOUT FAST-LIST -----
bin$ ./rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 --user-agent myagent --drive-chunk-size 128M copy /local/directory/path/ gcrypt:destdir/
2021/11/17 16:14:56 NOTICE:
Transferred:   	          0 / 0 Byte, -, 0 Byte/s, ETA -
Elapsed time:        35.4s


bin$ rclone --version
rclone v1.57.0
- os/version: ubuntu 16.04 (64 bit)
- os/kernel: 4.4.0-210-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none

----- v1.57.0 WITH FAST-LIST -----
bin$ rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 --user-agent myagent --drive-chunk-size 128M --fast-list copy /local/directory/path/ gcrypt:destdir/
2021/11/17 16:16:37 NOTICE:
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:       1m0.8s

2021/11/17 16:17:14 NOTICE:
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:      1m38.5s

----- v1.57.0 WITHOUT FAST-LIST -----
bin$ rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 --user-agent myagent --drive-chunk-size 128M copy /local/directory/path/ gcrypt:destdir/
2021/11/17 16:46:09 NOTICE:
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:        37.6s

In this particular use case, there are zero files that meet the criteria on the source side, so the total elapsed time is the time taken to get the directory listing and compare the files on source and destination.

As you can see, there has been a dramatic increase in time, from 5.5s (v1.56.2) to 1m38.5s (v1.57.0), for the same command when using the "--max-age/--min-age" filters and the "--fast-list" flag.

I am not sure if this is expected behavior or not.

Config would matter here.

Are you using a personal drive or a team drive?

Can you run the same with a debug log? That really has the info...

What the changes in 1.57.0 do is stop Google Drive from sending rclone the files to check against min/max age - Google Drive should be doing that filtering itself.

It would be interesting to see if that is actually working properly - can you try your commands with -vv --dump bodies and look at the actual listings returned.

What you should see is that 1.56 returns lots of stuff which is then filtered by rclone, but 1.57 returns nothing, as Google is doing the filtering.

I did a quick test myself, and I think with --fast-list you'll see lots of messages like this: Google drive root 'test': Re-enabling ListR as previous detection was in error. The directories with no files in them are triggering a workaround for a Google Drive bug in --fast-list.

OMG.
Optimization in rclone triggered retries in rclone.
I should have tested for that.

The fix is to disable the optimization when --fast-list is in use: call SetUseFilter(ctx, false) in listRRunner() to reset the optimization flag.

Temporary workaround until the fix is merged: don't use --fast-list together with --min-age/--max-age.
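Concretely, that means keeping the age filters but dropping --fast-list (the paths here are the placeholders from the original report); this takes the slower but correct client-side filtering path:

```shell
# Workaround: keep --min-age/--max-age but drop --fast-list so rclone
# filters client-side instead of triggering the ListR retry loop.
rclone --dry-run --min-age 2d --max-age 3d --transfers=4 --checkers=8 \
  --user-agent myagent --drive-chunk-size 128M \
  copy /local/directory/path/ gcrypt:destdir/
```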

[gdrive]
type = drive
client_id = redacted
client_secret = redacted
service_account_file =
token = {"access_token":"redacted","token_type":"Bearer","refresh_token":"redacted","expiry":"2021-11-19T14:10:09.181565757-06:00"}
root_folder_id = redacted

[gcrypt]
type = crypt
remote = gdrive:data
filename_encryption = standard
password = redacted
password2 = redacted

The Google Drive is a personal drive.

Submitted

Please check if the beta build below solves your issue:

https://beta.rclone.org/branch/fix-drive-filter-empty-dir/

Yes that would work.

We could also see if the bug in Google Drive is still there - maybe we can remove the workaround now.

--disable ListR might be needed too when doing recursive listings, as rclone will automatically use ListR in those cases.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.