Rclone about GTeamDrive not working

What is the problem you are having with rclone?

rclone about GTeamDrive: does not work
I have tested this on v1.54.0 and v1.55.0-beta.5260.d260e3824.
Did a bunch of searching and found this forum post showing that the issue was resolved in v1.51.
My main interest is the union backend (lus, lno policies); the docs say these policies require upstream remotes where rclone about is supported.

Thanks

What is your rclone version (output from rclone version)

rclone v1.54.0

  • os/arch: linux/amd64
  • go version: go1.15.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

Google Drive (Team Drive)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone about GTeamDrive:

The rclone config contents with secrets removed.

[GTeamDrive]
type = drive
scope = drive
team_drive = XXXXXX
service_account_file = XXXX

A log from the command with the -vv flag

2021/03/12 23:43:51 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rclone" "about" "GTeamDrive:" "-vv"]
2021/03/12 23:43:51 DEBUG : Using config file from "/home/XXX/.config/rclone/rclone.conf"
2021/03/12 23:43:51 DEBUG : Creating backend with remote "GTeamDrive:"
2021/03/12 23:43:51 DEBUG : Google drive root '': read info from team drive "GTeamDrive"
2021/03/12 23:43:51 DEBUG : 6 go routines active

hello and welcome to the forum,

what happens with rclone ls GTeamDrive: -vv

Team/Shared drives don't support About. As far as I know there is no API to query the quota (bytes used, bytes free) on a Team/Shared drive.

@ncw Thank you for your response.
is rclone size pulling from a different API?
I know Team Drives are unlimited and have no quota.
Size returns data:

Total objects: 6060
Total size: 2.XX TBytes (2XX Bytes)

What brought this up is that I was attempting rclone union mounts with lno, which triggers an error.

Is there any way to use the size API call in about/union when the remote is a Team Drive?

Thanks!

rclone size lists every object and adds up the sizes, which is too slow for normal use.
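
As a side note, if you want to script around the size scan, rclone size can also print machine-readable output (a sketch, using the remote name from this thread):

  rclone size GTeamDrive: --json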

You can do this with the VFS layer

  --vfs-used-is-size rclone size           Use the rclone size algorithm for Used size.

But it will burn through your API allocation really quickly so I don't recommend it.

You'll need the latest beta for this.
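
For example, something like this (the mount point is just a placeholder):

  rclone mount GTeamDrive: /mnt/gteamdrive --vfs-used-is-size

With that, df -h on the mount point should report Used based on the rclone size scan rather than the (unavailable) Team Drive quota.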

Awesome!
I have the API capacity. Will this work with an rclone move?

rclone move /mnt/local/GTeamDrive M-GTeamDrive:

Config

[M-GTeamDrive]
cache_time = 120
type = union
upstreams = _GTeamDrive_0:/ _GTeamDrive_1:/ _GTeamDrive_2:/ _GTeamDrive_3:/ _GTeamDrive_4:/ _GTeamDrive_5:/ _GTeamDrive_6:/ _GTeamDrive_7:/ _GTeamDrive_8:/ _GTeamDrive_9:/
action_policy = lno
create_policy = lno
search_policy = ff

It should do yes. It might be hideously inefficient though so be warned!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.