Copying from Gdrive Shared Folder to Separate Gdrive

What is the problem you are having with rclone?

Copying from a "Gdrive Shared Folder" to a separate "Personal Gdrive" is taking a very long time and doesn't start with the first file. It looks like I'm hitting an API limit? The source folder is ~5TB with a fair number of folders. The destination is a single folder that is presently empty. If I specify the subdirectories until there are no more subdirs in the source path, the transfer starts... Do I need to use special flags to do this properly? Please note I changed the root ID for privacy, in case you see it differ between pastes.

rclone copy -P sharedfolder,root_folder_id=1Q-ExxxX2xxxxxxOhOxxxxxxxxxB: personalgdrive:"_SharedDISC" --transfers=2 --bwlimit=3M --exclude=.*jpg -vv

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1
- os/version: debian 10.10 (64 bit)
- os/kernel: 4.19.0-22-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

I'm using a normal free Google Drive account that has the "shared with me" folder. The destination is my personal Gdrive, which is a business account.

rclone copy -P sharedfolder,root_folder_id=1Q-ExxxX2xxxxxxOhOxxxxxxxxxB: personalgdrive:"_SharedDISC" --transfers=2 --bwlimit=3M --exclude=.*jpg -vv

The rclone config contents with secrets removed.


[sharedfolder]
type = drive
scope = drive
token = {"access_token":"secret"}

[personalgdrive]
type = drive
scope = drive
token = {"access_token":"secret"}

sharedfolder         drive
personalgdrive       drive

A log from the command with the -vv flag

2023/05/26 05:30:29 INFO  : Starting bandwidth limiter at 3.500Mi Byte/s
2023/05/26 05:30:29 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "-P" "sharedfolder,root_folder_id=1xxxxxxxxxxxxxxxxxPB:" "personalgdrive:_SharedDISC2" "--transfers=2" "--bwlimit=3.5M" "-vv"]
2023/05/26 05:30:29 DEBUG : Creating backend with remote "sharedfolder,root_folder_id=1xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxPB:"
2023/05/26 05:30:29 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
2023/05/26 05:30:29 DEBUG : sharedfolder: detected overridden config - adding "{0Htel}" suffix to name
2023/05/26 05:30:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:11819198189'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/11819198189",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quota#requesting_higher_quota"
      }
    ]
  }
]
, rateLimitExceeded)
2023/05/26 05:30:29 DEBUG : pacer: Rate limited, increasing sleep to 1.515245257s
2023/05/26 05:30:29 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:202264815644'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/202264815644",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quota#requesting_higher_quota"
      }
    ]
  }
]
, rateLimitExceeded)
2023/05/26 05:30:29 DEBUG : pacer: Rate limited, increasing sleep to 2.042589511s

It is quite normal to get rate-limit errors at the start of a large transfer, because transfers and checkers both use the same rate-limited Google Drive API, and it looks like you have a lot to check.

Google Drive supports the --fast-list flag to shorten listing time. I would try adding this flag.
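For example, taking the original command and adding --fast-list (the remote names and root_folder_id are the poster's own placeholders, copied as-is):

```shell
# Same copy as before, but with --fast-list so rclone lists the whole
# directory tree with far fewer API calls (at the cost of more memory).
rclone copy -P \
  sharedfolder,root_folder_id=1Q-ExxxX2xxxxxxOhOxxxxxxxxxB: \
  personalgdrive:"_SharedDISC" \
  --transfers=2 --bwlimit=3M --exclude=.*jpg --fast-list -vv
```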

hi,

as per the rclone docs, you need to create your own client ID.
It is strongly recommended to use your own client ID, as the default rclone client ID is heavily used and shares its quota with everyone else using it.
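In practice that means creating an OAuth client in the Google Cloud console and adding it to each remote in rclone.conf. A sketch of what the config would look like afterwards (the client_id and client_secret values below are placeholders, not real credentials):

```
[sharedfolder]
type = drive
scope = drive
client_id = 123456789-example.apps.googleusercontent.com
client_secret = your-client-secret
token = {"access_token":"secret"}
```

After adding the client ID you can re-authorise the remote with `rclone config reconnect sharedfolder:` so the token is issued against your own project.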

It is quite normal to get rate-limit errors at the start of a large transfer, because transfers and checkers both use the same rate-limited Google Drive API, and it looks like you have a lot to check.

Is there any way to slow down the checkers, or to limit how many requests they make? My destination folder is empty, so I'm not worried about dupes yet. Are you saying that if I just let it run, it will eventually start transferring once all of the checks are done? I'm assuming I will keep hitting the request limits until the checking finishes, at which point the transfer starts?
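For reference, rclone does have flags to throttle API requests; a hedged sketch based on the original command (the flag values here are illustrative, not tuned for this transfer):

```shell
# --tpslimit caps HTTP transactions per second for the whole run,
# and --checkers lowers the number of parallel checkers (default 8),
# both of which reduce pressure on the Drive API quota.
rclone copy -P \
  sharedfolder,root_folder_id=1Q-ExxxX2xxxxxxOhOxxxxxxxxxB: \
  personalgdrive:"_SharedDISC" \
  --transfers=2 --checkers=4 --tpslimit=5 \
  --bwlimit=3M --exclude=.*jpg -vv
```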

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.