Rclone cannot see all files/folders in a public shared folder

What is the problem you are having with rclone?

I want to copy files to my Google Drive using the folder ID of a publicly shared folder, but I find that rclone can only see some of the files/folders. I use the folder ID by creating an entry in rclone.conf like this:

[src001]
type = drive
scope = drive
service_account_file = C:/AutoRclone/accounts/1.json
root_folder_id = 1OqnIlrj4BRgPcbFYRJTxIBSi7quzjax8
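
With this in place, the copy I eventually want to run would look something like the following (a sketch; dst: is a placeholder for my own Drive remote, which is not shown here):

rclone --config rclone.conf copy src001: dst:backup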

What is your rclone version (output from rclone version)

rclone v1.49.2

  • os/arch: windows/amd64
  • go version: go1.12.3

also
rclone v1.49.0-016-gf97a3e85-beta

  • os/arch: linux/amd64
  • go version: go1.12.9

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Both Windows 10, 64 bit and Ubuntu 16.04

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I have run rclone --config rclone.conf lsd src001: and it seems okay, because all the top-level folders are listed.
But when I run rclone --config rclone.conf size src001: three times, I get:

Total objects: 669
Total size: 19.793 GBytes (21252837599 Bytes)

Total objects: 659
Total size: 21.840 GBytes (23450518799 Bytes)

Total objects: 354
Total size: 22.185 GBytes (23820911775 Bytes)

It prints a different size on each run.

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

PS C:\AutoRclone> rclone -vv --config ./rclone.conf size src001:
2019/10/15 17:40:55 DEBUG : rclone: Version "v1.49.2" starting with parameters ["c:\path\rclone.exe" "-vv" "--config" "./rclone.conf" "size" "src001:"]
2019/10/15 17:40:55 DEBUG : Using config file from "C:\AutoRclone\rclone.conf"
Total objects: 500
Total size: 2.463 GBytes (2644766522 Bytes)
2019/10/15 17:41:13 DEBUG : 34 go routines active
2019/10/15 17:41:13 DEBUG : rclone: Version "v1.49.2" finishing with parameters ["c:\path\rclone.exe" "-vv" "--config" "./rclone.conf" "size" "src001:"]

That is wacky!

Can you do this a few times

rclone --config rclone.conf lsf src001: | sort > test1.log
rclone --config rclone.conf lsf src001: | sort > test2.log

then diff test1.log and test2.log and see if you can work out which files are missing?
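
For example, on Linux (on Windows, fc test1.log test2.log does much the same job):

diff test1.log test2.log

Any line that appears in only one of the two logs is an entry that one of the listings missed.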

Is the shared folder on a Team Drive (forgotten their new name again!)?

@ncw I have used rclone tree and it is randomly missing files every time.

And the shared folder ID (1OqnIlrj4BRgPcbFYRJTxIBSi7quzjax8) pasted above is real :slight_smile:.

Can we share a folder which is inside a Team Drive? I thought only files can be shared, with the proper settings from the domain admin.

Same problem here.

I added the public shared folder to my drive, then used rclone size multiple times (with my own account), and it shows a different result each time; the object counts also don't match.

Are you using --fast-list to do that?
Is the data more consistent without fast-list?

It almost sounds like some sort of permission issue.
@xyou365 : I think it would be worth investigating whether the same thing happens if you use your own direct account (using OAuth rather than a service account).

Not that I think that fixes the issue necessarily, but it might help us identify where the problem is.

Actually I used my OAuth account to check the shared folder size with rclone size, but it shows a different size each time. I also used rclone tree to output the folder structure and found many sub-folders missing and several empty folders listed, so I don't think it's a copy issue.
I then logged in to my web Drive page and checked that all the files are there and the folders are not empty. It's really weird, I think.

I have not used that flag, just rclone size and rclone lsd.

Thanks for the test from @ss4423 :slight_smile:. I am ready to test now.

And --fast-list?
I know there is a caching issue with fast-list that can fairly routinely cause recently added files and folders not to show. That usually fixes itself fairly quickly, though, so I don't think it is necessarily the problem we have here, but we should rule it out by doing a couple of runs of rclone size without --fast-list.

The unfortunate thing is it will take a lot of time without it, but do it for science! :smiley:
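
The comparison I mean is something like this, run a few times each (using the same config as above):

rclone --config rclone.conf size src001:
rclone --config rclone.conf size --fast-list src001: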

I had not used that flag, but I added it just now. The problem is still there, neither better nor worse:

  • Without it, rclone --config rclone1.conf size src001:
Total objects: 394
Total size: 19.094 GBytes (20501545124 Bytes)
  • With it, rclone --config rclone1.conf size --fast-list src001:
Total objects: 469
Total size: 22.218 GBytes (23856466833 Bytes)

OK, well it's quite obviously not the problem I am aware of then. If it were, the fast-list result would always be smaller than the regular list, and usually only for a short time. Your data just doesn't match that pattern.

Assuming your cloud service supports fast-list, though, it can be a very nice flag to use in general for large archives, since it makes lots of list requests at once rather than one folder at a time. For approximately 85K files, a full fast-list takes me 60 seconds, but a regular full list takes about 14-15 minutes.
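
For example, a full recursive listing with it enabled (remote: here stands in for any backend that supports fast-list, such as Google Drive or S3):

rclone ls --fast-list remote: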

Thanks for your sweet advice @thestigma.

I think the problem encountered here is easy to reproduce, because the folder ID is 1OqnIlrj4BRgPcbFYRJTxIBSi7quzjax8 :slight_smile:.

rclone size uses --fast-list automatically nowadays, so a test disabling it would look like:

rclone --disable ListR --config rclone1.conf size src001:

Can you try that?

Cool. It is normal now :slight_smile:.
rclone --disable ListR --config rclone1.conf size src001:

Total objects: 6493
Total size: 245.790 GBytes (263915257230 Bytes)

That is good. However, this means that --fast-list on the Google Drive backend has a bug :frowning:

Can you please make a new issue on GitHub about this, with a link to the forum, so we can investigate further?

Glad to make an issue (I have already done that) :slight_smile:

It would be helpful if you linked the issue here, and also linked this thread on the issue page.
Makes it so much easier to reference what we talked about later on.

Sorry, I am new to using the forum's features. Should I just paste the GitHub issue link here?

Yeah, nothing fancy required. It's just so we don't have to go hunting for the relevant thread if we want to refer back to what was said.
