Could reading, deleting, or moving a large number of Google Drive files cause the API call quota to be exceeded?

I have about half a million text files on Google Drive. Using rclone on Windows 10 x64, the drive is mounted as a virtual disk with the default settings from the documentation. I have obtained my own API key and configured it in rclone. My plan is Google One 2 TB. I work with the files through Windows Explorer, FAR Manager 3.0 (with "Use system copy routine" enabled), and Total Commander 10.

I'm wondering which of the following operations might cause Google to throttle me (rough rclone command equivalents are sketched right after the list, for reference):

  1. calculating the size of folders containing tens or hundreds of thousands of files (for example, when the "Show total copy progress indicator" option is enabled in FAR);
  2. searching for duplicates with the AllDup application (comparing by "File extension" and "File size" only);
  3. deleting tens of thousands of files in one pass with AllDup;
  4. searching for empty and nearly empty folders with the "Remove Empty Directories" application;
  5. mass-deleting tens of thousands of folders and files smaller than 1 KB with "Remove Empty Directories";
  6. bulk-moving thousands of small files in a single operation within the same cloud storage.
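For reference, if I understand the docs correctly, operations 1, 4, 5 and 6 would roughly correspond to the following rclone commands if run directly instead of through a file manager (the folder names are just examples, and --dry-run only previews what would happen):

rclone size Google_Drive:SomeFolder
rclone rmdirs Google_Drive:SomeFolder --dry-run
rclone delete Google_Drive:SomeFolder --max-size 1k --dry-run
rclone move "Google_Drive:Old Folder" "Google_Drive:New Folder" --dry-run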

rclone v1.55.1
os/type: windows
os/arch: amd64
go/version: go1.16.3
go/linking: dynamic
go/tags: cmount

rclone.exe mount Google_Drive: h:

PS: Two things I still do not know:
7. what information about files Google provides to local file managers via the API / rclone without the file having to be downloaded; and
8. whether it is true that when moving files within a single rclone-mounted virtual disk, only the file's location changes on the server, rather than the file being downloaded, deleted, and re-uploaded to the new location.

I apologize for the long list of questions, but they all relate to the single problem I have described here, for which I have not yet found a complete answer. Some of them may be more about the behaviour of specific user applications and should perhaps be put to their developers, but they could just as reasonably send me back to this forum.

It seems to me that a discussion of the optimal rclone settings for these tasks would be better placed in a separate topic?

1-5 are all just API calls against your quota, and without seeing any logs it's hard to tell whether you hit an issue or not. The normal quota limit is on the order of 1 billion calls per day, so it's unlikely you can ever hit that. You can check your quota page to see how you are doing and whether you are hitting quota-related API errors. Those happen from time to time and are normal; rclone backs off as it should.
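If you do want to be gentler on the API anyway, you can cap the request rate on the mount with the pacing flags, something like this (the values are only illustrative, not a recommendation):

rclone mount Google_Drive: h: --tpslimit 10 --drive-pacer-min-sleep 200ms -vv --log-file rclone.log

and then watch the log file for rate-limit errors to see whether you are actually being throttled.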

6 - I'm not sure what you mean by "one cloud storage". Moving things within Google Drive usually does not hit any quotas. There is a documented 750 GB daily upload quota per user and a bunch of undocumented download quotas, but moves are server-side and consume nothing other than API calls.
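As a quick illustration (folder names made up), a move inside the same remote, e.g.

rclone move "Google_Drive:Old Folder" "Google_Drive:New Folder" --dry-run -v

just changes where the files live on Google's side; nothing is downloaded or re-uploaded, so the only cost is the API calls. Drop --dry-run once you are happy with what it reports.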

  7. Not sure what you mean by this. If you have a mount, you can see the metadata of a file without downloading it (a quick way to check this is sketched after item 8 below). If you want to read any part of the file, you download just those parts.

  8. You mean an rclone mount? I'm not sure what a "virtual disk" means here. If you have a mount, it's back to 7: you only download what you actually request to be read.
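To see what metadata comes back without downloading anything, you can list a folder with something like (folder name made up):

rclone lsjson Google_Drive:SomeFolder

which returns the name, size, modification time, MIME type and a directory flag for each entry straight from the Drive API, without fetching any file content.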

Are you seeing any errors?

Yes, you read that correctly - I'm talking about moving objects within the Google Drive of a single Google account, as opposed to moving between mounted drives of different Google accounts.

And again, you understood me correctly and answered the question. It's just that the X-plore File Manager app for Android can spend several hours collecting data before it starts transferring thousands of files. So I assumed that the information provided to third-party applications via the API might be missing the size, or some other data, without which the files cannot start moving immediately after the user gives the command.

Yes.

You seem to have already answered this question earlier. If Google Drive is mounted in Windows as a logical drive, moving any number of files of any size within that drive happens only on the server side and should be almost instant, with almost no Internet traffic consumed - provided, once again, that both the original location and the destination are on the rclone-mounted disk, regardless of the file manager used for the transfer. Right?

Unfortunately, there is no time left at work for experiments right now. I had to download the entire archive and work with it the old-fashioned way, through shared network folders.

I wouldn't guess what certain apps may or may not do, so it depends on the app. I'm referring more to rclone commands like copy/move and things that normally do a copy or move on a mount. If a certain tool does something odd, I can't speak for every application out there.

As I shared above, generally yes, but every application can do things however it wants, so I wouldn't say "all".

So the best way to track the activity of applications using an rclone-mounted cloud drive is the "Google Cloud Platform" / "APIs & Services" monitoring console?

Yes - depending on how granular you want to get, you can create a client ID/secret for each app and track each set of credentials separately.
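For example (remote and client names made up), you could define two remotes in rclone.conf that point at the same Drive but authenticate with different OAuth clients; each set of credentials can then be tracked separately in the APIs & Services metrics:

[Drive_mount]
type = drive
client_id = mount-client.apps.googleusercontent.com
client_secret = <secret for the mount client>
scope = drive

[Drive_scripts]
type = drive
client_id = scripts-client.apps.googleusercontent.com
client_secret = <secret for the scripts client>
scope = drive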
