Dest files not deleting during sync between GDrive and Backblaze

What is the problem you are having with rclone?

I'm running a pretty basic (I think) sync between a Shared Google Drive (src) and a Backblaze B2 instance (dest).

Using a test folder as control, I added files to Drive, ran sync, deleted those same files from Drive, ran the same command, and expected the files to be removed from dest, but they are not.

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-46-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Workspace Shared Drive (src)
Backblaze B2 (dest)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone sync GDRIVE:"Master ECL" b2:"eww-media" --transfers=10 --progress --retries=1 --ignore-checksum --delete-before

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[GDRIVE]
type = drive
token = XXX
team_drive = XXX
scope = drive
root_folder_id = 

[GDRIVE_PREMIUM]
type = drive
scope = drive
token = XXX
team_drive = XXX
root_folder_id = 

[Wasabi]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.us-west-1.wasabisys.com
location_constraint = 1
acl = private

[b2]
type = b2
account = XXX
key = XXX

[b2_PREMIUM]
type = b2
account = XXX
key = XXX

A log from the command that you were trying to run with the -vv flag

Getting a lot of these:

2024/05/05 22:06:34 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:202264815644'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/202264815644",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quota#requesting_higher_quota"
      }
    ]
  }
]
, rateLimitExceeded)

and a lot of these:

2024/05/05 22:06:46 DEBUG : pacer: Reducing sleep to 217.234608ms

welcome to the forum,

i am not a gdrive expert, but, as per the rclone docs, you should create your own client id + secret
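for reference, a minimal sketch of what that looks like in the config once you have created credentials in the Google Cloud console (the client_id/client_secret values below are placeholders, not real credentials; the other keys are copied from the config posted above):

```ini
[GDRIVE]
type = drive
# your own OAuth credentials, so you are not sharing rclone's default quota
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
team_drive = XXX
token = XXX
root_folder_id = 
```

after adding client_id/client_secret you need to re-run `rclone config reconnect GDRIVE:` to get a token issued against your own project.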


though not directly related to your issue, the config for Wasabi is not correct:

provider = Other should be
provider = Wasabi

ah that's unrelated. that remote isn't being used.

I'm wondering if Rclone treats trashed Drive files as still in their original place until they're permanently removed from Trash?

if you delete a file, gdrive moves it to trash.

I am deleting files in Gdrive and expect to see them removed from dest (b2), but that's not happening. The deleted files are still in "trash" in Gdrive, so I am wondering if rclone still sees them as "in their original location" until they're permanently removed from Gdrive trash.

no. when you delete a file in gdrive, it is moved from the original location to the trash folder.

According to this issue I just found, I am just an idiot:

https://github.com/rclone/rclone/issues/496

Apparently Backblaze marks "deleted" files with a deletion marker rather than removing them outright. I was not aware of this. I just thought it was some weird duplication thing and Backblaze was marking the duplicated files as duplicates (hidden).
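for anyone else who lands here: B2 "hides" deleted files as old versions by default, so the sync did work, the files are just not visible in a plain listing. A hedged sketch of the relevant commands (bucket and remote names taken from this thread; flags are from the rclone b2 backend docs):

```shell
# list the bucket including hidden/old versions, to confirm the
# "deleted" files are really just hidden behind a delete marker
rclone ls --b2-versions b2:eww-media

# permanently delete files on the B2 side instead of hiding them
rclone sync GDRIVE:"Master ECL" b2:"eww-media" --b2-hard-delete

# or clean up old/hidden versions already sitting in the bucket
rclone cleanup b2:eww-media
```

note that `--b2-hard-delete` and `rclone cleanup` remove data permanently, so the previous-version safety net B2 gives you by default is gone.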

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.