I want to copy files to my Google Drive using the folder ID of a publicly shared folder, but I find that only part of the files/folders can be seen by rclone. The way I use the folder ID is by creating an entry in rclone.conf like this,
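For reference, a minimal sketch of such an entry - all values below are placeholders, not my real folder ID or token:

```
[src001]
type = drive
scope = drive
# folder ID taken from the shared folder's URL (placeholder)
root_folder_id = 0AbCdEfGhIjKlMnOpQrStUv
# token obtained via "rclone config" (placeholder)
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}
```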
What is your rclone version (output from rclone version)
rclone v1.49.2
os/arch: windows/amd64
go version: go1.12.3
also
rclone v1.49.0-016-gf97a3e85-beta
os/arch: linux/amd64
go version: go1.12.9
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Both Windows 10, 64 bit and Ubuntu 16.04
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
I have run rclone --config rclone.conf lsd src001:. It seems okay because all the folders are listed.
But when I ran rclone --config rclone.conf size src001: three times, I got:
Total objects: 669
Total size: 19.793 GBytes (21252837599 Bytes)
Total objects: 659
Total size: 21.840 GBytes (23450518799 Bytes)
Total objects: 354
Total size: 22.185 GBytes (23820911775 Bytes)
It prints a different size on each run.
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
PS C:\AutoRclone> rclone -vv --config ./rclone.conf size src001:
2019/10/15 17:40:55 DEBUG : rclone: Version "v1.49.2" starting with parameters ["c:\path\rclone.exe" "-vv" "--config" "./rclone.conf" "size" "src001:"]
2019/10/15 17:40:55 DEBUG : Using config file from "C:\AutoRclone\rclone.conf"
Total objects: 500
Total size: 2.463 GBytes (2644766522 Bytes)
2019/10/15 17:41:13 DEBUG : 34 go routines active
2019/10/15 17:41:13 DEBUG : rclone: Version "v1.49.2" finishing with parameters ["c:\path\rclone.exe" "-vv" "--config" "./rclone.conf" "size" "src001:"]
I added the publicly shared folder to my drive, then ran rclone size multiple times (with my own account). It shows a different result each time, and the object counts also do not match.
Are you using --fast-list to do that?
Is the data more consistent without fast-list?
It almost sounds like some sort of permission issue. @xyou365: I think it would be worth investigating whether the same thing happens if you use your own direct account (using OAuth rather than a service account).
Not that I think that fixes the issue necessarily, but it might help us identify where the problem is.
Actually I did use my OAuth account to check the shared folder size with rclone size, but it shows a different size each time. I also used rclone tree to output the folder structure, and found many subfolders missing and several empty folders listed - so I don't think it's a copy issue.
Then I logged in to the Drive web page and checked that all the files are there and the folders are not empty. It's really weird, I think.
And with --fast-list?
I know there is a caching issue with --fast-list that can fairly routinely cause recently added files and folders not to show. That usually fixes itself fairly quickly though, so I don't think it is necessarily the problem we have here - but we should rule it out by doing a couple of runs of rclone size without --fast-list.
The unfortunate thing is it will take a lot of time without it, but do it for science!
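Something like the following should do for the comparison (src001: being the remote name from the config above; run each a couple of times and compare the totals):

```
# with --fast-list: quick, but potentially affected by the caching issue
rclone --config rclone.conf size --fast-list src001:

# without --fast-list: slow, walks one folder at a time
rclone --config rclone.conf size src001:
```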
Ok, well it's quite obviously not the problem I am aware of then. If it were, then fast-list would always report smaller than a regular list, and usually only for a short time. Your data just doesn't match that pattern.
Assuming your cloud service supports --fast-list though, it can be a very nice flag to use in general for large archives, since it can make lots of list requests at once rather than one folder at a time. For approx 85K files a full fast-list takes me about 60 seconds, but a regular full list takes about 14-15 minutes.
It would be helpful if you linked the issue here, and also linked this thread on the issue page.
Makes it so much easier to reference what we talked about later on.