Google Drive to Shared Drives bulk move

You should see usage in your Google Cloud console if you are using your own client_id and client_secret.

One thing you could try is slowing rclone down slightly: set --tpslimit 10 and see if that reduces the 429/403 errors. Reduce the 10 to slow it down further.
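For example, a minimal sketch of what that could look like (the remote and folder names here are just placeholders for your own):

  rclone move gdrive:foldername shareddrive:foldername --tpslimit 10 --progress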

Yes, correct, I do see usage, so I'm using my own client_id and secret.

What is the default tps value? Is it correct to assume that the current API limit is 100 tps, since the API quota is set to 10,000 per 100 seconds?
If that is the case, what about using --tpslimit-burst 80? I don't want --tpslimit to impact the overall migration time. If I understand the documentation correctly, --tpslimit-burst would save up transactions during idle times so rclone can burst up to 80 whenever possible and hold that.

Actually I forgot, the drive backend has its own flags for this, so you can try experimenting with

  --drive-pacer-burst int            Number of API calls to allow without sleeping. (default 100)
  --drive-pacer-min-sleep Duration   Minimum time to sleep between API calls. (default 100ms)

Which also shows the defaults.
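As a rough sketch, you could halve the sustained rate to around 5 calls per second by doubling the minimum sleep and reducing the burst (remote and folder names are placeholders):

  rclone move gdrive:foldername shareddrive:foldername --drive-pacer-min-sleep 200ms --drive-pacer-burst 50 --progress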

Note that you can't necessarily get your 10,000 in 100 seconds - there are other rate limits in play which Google doesn't document, like a total queries-per-second limit.

So this issue of the file owner not being part of the shared drive is popping up again. All the owners are now in a group that has view access on the shared drives, but I'm still getting that 403 error :confused:
Do the owners need to be added separately?

I'm having a pretty big issue related to the above. I did a MOVE command to migrate files that are owned by domain users. Your suggestion was to do a COPY afterwards to copy the data that is owned by external users, which MOVE won't be able to migrate.

I assumed that the copied data would be placed in the same directory structure that the original MOVE command created, and that it would not create a whole new folder structure to put the copied files in. It looks like it did just that: I now have two folder structures, one containing the moved files and one with the copied data. Did that work as intended?

I would have expected it to put them in the same structure. Do you mean you've got two copies of each directory in the same location, so within a single parent directory there are two directories both called "subdir"? If so you can fix that with rclone dedupe.
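A minimal sketch of that, assuming the duplicates ended up on the shared drive remote (dedupe also merges identically named directories; run with --dry-run first to preview what it would do):

  rclone dedupe shareddrive:foldername --dry-run
  rclone dedupe shareddrive:foldername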

If you mean that the directory tree is elsewhere, then that probably means your command was wrong somehow.

Exactly that. I'll make sure to use dedupe in the future. Why does this happen in the first place?

It happens because the flags that enable you to see the source files mean that rclone can't see that the directories already exist in the destination.

So in your case --drive-impersonate account@domain.com applies to both "gdrive:foldername" and "shareddrive:foldername".

The way to fix this is either to put the impersonate option into the gdrive config, or use the latest beta (shortly to become 1.55), which has a new connection string syntax, so you could write something like:

  rclone copy "gdrive,impersonate='account@domain.com':foldername" "shareddrive:foldername" --drive-server-side-across-configs --progress --create-empty-src-dirs --log-level DEBUG --log-file sd_foldername_log.txt
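If you go the config route instead, a sketch of the drive remote section might look like this (assuming an existing [gdrive] remote; your other options such as client_id and token stay as they are, only the impersonate line is added):

  [gdrive]
  type = drive
  impersonate = account@domain.com

Then the original copy command works unchanged, without any --drive-impersonate flag, and the impersonation only applies to the gdrive: side.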
