I have created a remote named Index whose actual size is 7.22 TB. Running rclone size Index: always gives incorrect results.
(To be honest, not just slightly incorrect: it is off by a huge margin and, most absurdly, gives a different result each time.)
I ran it 5 times and got results ranging from 80 GB to 127 GB.
Thankfully, rclone ncdu Index: works perfectly and gives me 7.22 TB every time I run it.
(Posting pics of both as proof.)
If this were just a matter of checking size, I would always use ncdu and not bother much (although ideally even size should give correct info, as not everyone is aware of ncdu).
But the bigger indirect issue is that rclone copy appears to copy the content that the size command detects. I wouldn't call copying itself bugged; it just copies whatever content the size command sees. If debugging rclone size turns out to be tricky, I reckon rclone copy could use the same listing algorithm as ncdu to detect the files to be copied, since ncdu has been 100% correct in every scenario so far.
As I mentioned in the question, this issue doesn't happen in every case. To reproduce it, while creating a new remote:
Explicitly enter the root_folder_id (i.e. don't skip it).
(It could be any folder your account has access to.)
Don't set this remote up as a team drive.
And of course don't use a service account.
Then check the size using rclone size remote: and rclone ncdu remote: respectively.
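As a minimal sketch of those steps using rclone's non-interactive config command (the remote name "Repro" and FOLDER_ID_HERE are placeholders I've picked for illustration, not values from my actual setup):

```shell
# Create a Drive remote with an explicit root_folder_id and
# WITHOUT setting it up as a team drive, matching the repro steps above.
rclone config create Repro drive root_folder_id FOLDER_ID_HERE

# Then compare the two readings; they should match, but for me they don't:
rclone size Repro:
rclone ncdu Repro:
```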
I tried this with some other folders too and got similar results
What is your rclone version (output from rclone version)
rclone v1.51.0
Which OS you are using and how many bits (eg Windows 7, 64 bit)
os/arch: android/arm64
go version: go1.13.7
Which cloud storage system are you using? (eg Google Drive)
Google Drive
Here are the logs for rclone size command
rclone size Index: -vv
2020/05/08 14:11:54 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "size" "Index:" "-vv"]
2020/05/08 14:11:54 DEBUG : Using config file from "/data/data/com.termux/files/home/.config/rclone/rclone.conf"
2020/05/08 14:11:54 DEBUG : Index: Loaded invalid token from config file - ignoring
2020/05/08 14:11:55 DEBUG : Index: Saved new token in config file
Total objects: 403
Total size: 96.457 GBytes (103570393644 Bytes)
2020/05/08 14:11:58 DEBUG : 21 go routines active
2020/05/08 14:11:58 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "size" "Index:" "-vv"]
================================================
Another test
Remote name - test2
Actual folder size - 167 GB
Yes, in the case of rclone size remote:, the object count changes like crazy (for reference, check the pic in the question where 5 rclone size runs are shown and the object count kept changing).
ncdu has never given a wrong result yet, be it a td folder or a shared one. 100% accurate in every test case.
size seems bugged if you enter a folder id and don't set the remote up as a team drive (for me at least; that's all I can conclude from my testing, because size too gives accurate results if I set the remote up as a team drive).
Yes, 100% no one is touching any file; no one except me has access to them.
Sure, just make sure you enter the root folder id manually and don't set it up as a team drive. And preferably test it on a folder with a decent amount of content in it (I mean, not just 1 big file).
Let me explain once more how to set up the remote to reproduce the issue.
Create a new remote, and
when it asks you to enter the root folder id, paste the folder id of any folder you have access to:
a - a folder from inside your team drive,
b - a shared folder,
c - any public folder on the internet.
The important thing is: don't skip this step.
When it asks whether to set up the remote as a team drive, select No.
Now test that remote's size using rclone size remote: && rclone ncdu remote:
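In config-file terms, the remote produced by those steps ends up looking roughly like this in rclone.conf (token redacted; the folder id line is the key part, and note there is no team_drive entry):

```ini
[remote]
type = drive
scope = drive
root_folder_id = <the folder id you pasted>
token = {"access_token":"...redacted..."}
```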
Yes, I understand that logically, if it's a folder from inside a td, one wouldn't set the remote up as non-td after entering the folder id.
Mainly it would be shared folders or public folders on the internet that recreate the issue;
I just included the td-folder-set-up-as-non-td case along with the other two to cover every possibility on paper.
When you experience the issue though, is it on a regular Google Drive or Shared (Team) Drive?
I've tried to recreate this on a Drive and I cannot, which is why I'm trying to understand if it's always on a Shared (Team) Drive, as they do some funky things with indexing and such compared to a regular Drive.
Thanks for trying, but I haven't tested it on a folder from My Drive yet, so I'm not certain about My Drive folders.
I can surely recreate it on a shared folder, a public folder, and a team drive folder set up as non-td, though.
Let me make this easy and share a public folder for you to try: 1uXCWl9rGsndFg_jpQ6VHq59OcQYb-Et1
Create a remote, enter this when asked for the folder id, and choose No for setting it up as a team drive.
Then check the size using rclone size and rclone ncdu.
Its actual size is 6.54 TB, which ncdu showed perfectly, as always.
rclone size, though, absurdly tells me it's just 100 GB to 150 GB (and, as I said originally, it's not only massively incorrect but also varies each run).
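To spell out the exact commands for that public folder (the remote name "Test" is just what I'd call it):

```shell
# Create a remote pointing at the shared public folder above,
# answering No to the team drive question (no team_drive key set).
rclone config create Test drive root_folder_id 1uXCWl9rGsndFg_jpQ6VHq59OcQYb-Et1

# These two should agree but don't: size under-reports,
# while ncdu shows the full 6.54 TB.
rclone size Test: -vv
rclone ncdu Test:
```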
Ah nice.
Finally it gave different results for you as well with rclone size and rclone ls.
And regarding using rclone about remote:
Please note that this command will not output the size of the public folder id you entered; about always outputs the storage status of the Google account you authenticated the config with...
Used: 115.366T
Trashed: 0
Other: 185.297M
This size is not that of the public folder whose remote you created, but the storage status of the Google account you logged in with at the end. Similarly, when I use rclone about, I get the storage status of my own Google Drive (the 15 GB basic plan) that I used to authenticate rclone, not the public folder. So there's obviously no reason for rclone about remote: to be inaccurate, as it's not even touching the troubled area.
rclone size and ls seem bugged for public and shared folders, while ncdu has never failed.
Also, I hope you tried rclone ncdu Test:
It should have shown you the correct output.
On another note, I also had some of my friends run through the test, and they reported similar results.
Sure, I will test this in the latest beta and let you know.
Also, just out of curiosity: --fast-list seems to be what caused this issue. I ran rclone size with --disable ListR and it gave accurate results.
So is it possible that rclone can skip files if someone uses --fast-list while copying (rclone copy ...)?
I had read --fast-list can improve speeds with rclone copy, but now I'm not so sure it's worth using.
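For reference, the workaround that gave me accurate numbers was disabling the recursive listing feature (ListR) that --fast-list relies on. This is a sketch of what I ran (the copy destination path below is hypothetical, just to illustrate the idea):

```shell
# Accurate result: --disable ListR forces rclone to walk the tree
# directory by directory instead of using the recursive ListR call
# that --fast-list relies on.
rclone size Index: --disable ListR -vv

# The same idea applied to a copy, in case --fast-list is indeed the
# culprit (hypothetical destination; I haven't verified copy behaviour):
rclone copy Index: /sdcard/backup --disable ListR -P
```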