Google Team Drive: union remote handling of teamDriveFileLimitExceeded errors

What is the problem you are having with rclone?

I'm using a union remote to work around the 400,000-object limit on Google Team Drives, but it is working very clumsily. Apparently it is not possible to use either the lno (least number of objects) or lus (least used space) creation policies to balance files across the different team drives in the union; only ff (first found) and its derivatives seem to work. As a consequence, I have to mark a drive manually with ::nc (no create) once it becomes full. Shouldn't this be automatic? The error is pretty distinct and hard to mistake for anything else. As things stand, everything stops, and I have to notice the failure, edit the config, and restart the process each time a drive runs out of objects.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: oracle 6.10 (64 bit)
  • os/kernel: 4.1.12-124.48.6.el6uek.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.9
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --check-first --fast-list --transfers=20 --checkers=40 --tpslimit 10 --tpslimit-burst 20 sync -P xxx/ archive:xxx

The rclone config contents with secrets removed.

[archive00]
type = drive
scope = drive
team_drive = redacted
use_trash = true
chunk_size = 128M
acknowledge_abuse = true
server_side_across_configs = true
stop_on_upload_limit = true
stop_on_download_limit = true

[archive01]
type = drive
scope = drive
team_drive = redacted
use_trash = true
chunk_size = 128M
acknowledge_abuse = true
server_side_across_configs = true
stop_on_upload_limit = true
stop_on_download_limit = true

... up to archive09

[archives]
type = union
upstreams = archive04: archive05: archive06: archive07: archive09: archive00::nc archive08::nc archive01::nc archive02::nc archive03::nc
action_policy = ff
create_policy = ff
search_policy = ff

[archive]
type = crypt
remote = archives:
password = redacted
password2 = redacted
server_side_across_configs = true
filename_encryption = off
directory_name_encryption = false



A log from the command with the -vv flag

2022-06-01 06:19:20 ERROR : Google drive root 'logs': Received Shared Drive file limit error: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded
2022-06-01 06:19:20 ERROR : Google drive root 'logs': Received Shared Drive file limit error: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded
2022-06-01 06:19:20 ERROR : 2009/dlh/archive/msg/03/16/DLH01.comms-event.dlhbe.log.20090316.bz2: Failed to copy: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded
2022-06-01 06:19:20 ERROR : 2009/dlh/archive/msg/03/16/DLH01.core.dlhbe.log.20090316.bz2: Failed to copy: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded
2022-06-01 06:19:20 ERROR : Cancelling sync due to fatal error: googleapi: Error 403: The file limit for this shared drive has been exceeded., teamDriveFileLimitExceeded

What would need to be done is for the drive backend to return a specific type of error (say a fatal error) and for the union backend to interpret it and put that upstream into read-only mode.

It isn't impossible, but it is a fair amount of work. Happy to give hints if you'd like to have a go.

ah, "scratch your own itch" reply :wink: I was hoping that this would be an easy feature, something like --drive-stop-on-upload-limit, but oh well, it is what it is :wink:

Not a terrible itch for me, since 400k is a lot of files and I only need to do this very seldom, but I've been planning on having a go at Go (heh), so this might be incentive enough, especially if other people are interested.

:wink:

Making --drive-stop-on-upload-limit respond to this error too might be a good first step.
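Roughly, the drive side would mean treating the teamDriveFileLimitExceeded reason the same way the other quota errors are treated, i.e. turning it into a fatal error when the flag is set. A minimal sketch of the idea is below - the helper name and where it plugs into the backend's retry logic are assumptions, but fserrors.FatalError is what produces the "Cancelling sync due to fatal error" behaviour:

```go
package drive

// Sketch only: checkFileLimitError and the stopOnUploadLimit parameter
// are illustrative names, not the backend's real ones.

import (
	"errors"

	"github.com/rclone/rclone/fs/fserrors"
	"google.golang.org/api/googleapi"
)

// checkFileLimitError inspects a Drive API error and, if it is the 403
// teamDriveFileLimitExceeded error and the stop-on-upload-limit flag is
// set, returns it wrapped as a fatal error so the whole transfer stops
// (and so wrapping backends like union can see it).
func checkFileLimitError(err error, stopOnUploadLimit bool) error {
	var gerr *googleapi.Error
	if !errors.As(err, &gerr) || gerr.Code != 403 {
		return err
	}
	for _, item := range gerr.Errors {
		if item.Reason == "teamDriveFileLimitExceeded" && stopOnUploadLimit {
			// The real backend also logs the
			// "Received Shared Drive file limit error" message here.
			return fserrors.FatalError(err)
		}
	}
	return err
}
```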

We'd then need to make the union backend check for fatal errors on upload and mark that upstream as read-only.
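On the union side it could amount to something like the sketch below. The names (upstreamState, recordPutResult, creatable) are made up for illustration - the real change would live in the union backend's upstream bookkeeping - but the idea is just: if an upload comes back with a fatal error, flag that upstream so the create policies skip it, exactly as if it had been configured with ::nc:

```go
package union

// Sketch only: types and methods here are invented for illustration.

import (
	"sync"

	"github.com/rclone/rclone/fs/fserrors"
)

// upstreamState is a hypothetical per-upstream flag tracking whether
// the remote has stopped accepting new objects.
type upstreamState struct {
	mu   sync.Mutex
	full bool
}

// recordPutResult would be called after every upload attempt. A fatal
// error from the upstream (e.g. the drive backend's
// teamDriveFileLimitExceeded handling) marks it as full.
func (u *upstreamState) recordPutResult(err error) {
	if err != nil && fserrors.IsFatalError(err) {
		u.mu.Lock()
		u.full = true
		u.mu.Unlock()
	}
}

// creatable is what a create policy such as ff would consult before
// choosing this upstream, treating a full drive like one marked ::nc.
func (u *upstreamState) creatable() bool {
	u.mu.Lock()
	defer u.mu.Unlock()
	return !u.full
}
```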

Both of those things shouldn't be too hard.

If you are willing to have a go, I'll help you through it.
