How to not ignore duplicate folders? (Google Drive/GSuite)

I have a Google Drive file system full of duplicate folders. God only knows why. We may never know…

I am trying to migrate it down to a NAS using rclone. When rclone encounters a duplicate folder, it seems to simply ignore the contents.

I have already used rclone dedupe to clean up the massive number of duplicate files. So far so good.

How can I get rclone to merge the contents of the duplicate folders into a single folder structure on the NAS?

Here is what I see. Assume gdrive:test contains two distinct folders that are both named "Folder":

Folder/file1.txt (inside duplicate folder 1)
Folder/file2.txt (inside duplicate folder 2)

rclone copy gdrive:test /share/test --log-level INFO
2017/08/10 13:20:59 INFO  : Local file system at /share/test: Modify window is 1ms
2017/08/10 13:20:59 NOTICE: Folder: Duplicate directory found in source - ignoring
2017/08/10 13:20:59 INFO  : Local file system at /share/test: Waiting for checks to finish
2017/08/10 13:20:59 INFO  : Local file system at /share/test: Waiting for transfers to finish
2017/08/10 13:20:59 INFO  : file3.txt: Copied (new)
2017/08/10 13:20:59 INFO  : file4.txt: Copied (new)
2017/08/10 13:20:59 INFO  : Folder/file1.txt: Copied (new)
2017/08/10 13:20:59 INFO  : 
Transferred:     18 Bytes (10 Bytes/s)
Errors:                 0
Checks:                 0
Transferred:            3
Elapsed time:        1.7s

ls -R /share/test
Folder/    file3.txt  file4.txt

The net result is that the duplicate folder containing file2.txt was not copied at all.

How can I solve this problem?

I very recently made rclone dedupe fix duplicate directories. So download the latest beta and have a go with rclone dedupe.
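A sketch of the workflow, assuming a remote named gdrive: as in your log (adjust the path to taste):

```shell
# Merge duplicate directories (and duplicate files) interactively.
# Add --dry-run first if you want to preview what would change.
rclone dedupe gdrive:test

# Then re-run the copy; the merged directory should transfer in full.
rclone copy gdrive:test /share/test --log-level INFO
```

The default interactive mode prompts for each set of duplicates; other modes (e.g. --dedupe-mode newest) can run unattended.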

Works great on my test data set. Exactly the functionality expected!

Now to test with a large data set… seems to be taking a long time. I’m patient…

Thanks for creating this tool. Too bad Google makes it so necessary…


Yes I had several ideas to speed it up while writing it, but I decided to go for definitely correct for the first iteration!

🙂 and 🙁

Now if I could just get past the userRateLimitExceeded…
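In case it helps, rclone can be throttled to stay under Drive's rate limits. A sketch, assuming your rclone version has the --tpslimit flag (added around v1.37) — tune the numbers for your quota:

```shell
# Cap API transactions at ~5/s and reduce concurrency, which
# tends to avoid Drive's userRateLimitExceeded responses.
rclone dedupe gdrive:test --tpslimit 5 --checkers 4 --transfers 2
```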