Duplicate directory found in source - ignoring

This is a general question, so I did not fill in the form.

So Gdrive allows duplicate folders; if you view them in, say, Windows, it would look something like this:
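(The original screenshot is not shown here; a sketch based on the folder names used later in this thread would look like:)

```
FolderName
FolderName(1)
FolderName(2)
```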



I "thought" that if rclone sees one or more duplicate directories/files like this, it would still check inside each folder, see if there are different files and sizes, and merge them into a single folder, like this:

It moves FolderName and its contents; then, when it sees FolderName(1), it understands this is a duplicate folder but still checks whether the contents match FolderName and appends any differences from FolderName(1) into FolderName.

I am noticing, though, that I get this message from rclone:
Duplicate directory found in source - ignoring

So is it totally skipping FolderName(1) and FolderName(2)?

If it is actually skipping duplicate names is there a flag to tell rclone to merge the duplicate sources?

There are times when the 2 or more folders are different parts of the same upload and I was thinking the move command would auto merge.

I'm just trying to get some clarification on how rclone deals with duplicate folder/file names that differ only by a (1) suffix.

Duplicate folders and files break many things... you have to dedupe them.

Ah crap, I guess I have been deleting a lot of stuff I "thought" rclone was merging.

Guess that is what happens when you do not pay attention to the output:(

Is there not a flag that could be made to tell it to move/delete anyway?

By that I mean: when moving a folder from one drive to another where both drives have the same folders/files, it would check whether the source and destination are the same; if so, it would delete the source and only move the missing folders/files.

Alternatively, is there a way for me to get rclone to save to a file the stuff it ignored/skipped?
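For what it's worth, rclone's standard logging flags can capture the skipped items; a sketch (the remote and path names here are just placeholders):

```shell
# Write all of rclone's output to a log file; -v raises verbosity to INFO
# so per-file decisions are recorded alongside the duplicate notices.
rclone move remote1:/ remote2:/ -v --log-file=rclone-move.log

# Afterwards, pull out just the duplicate messages for review:
grep -i "duplicate" rclone-move.log
```

This doesn't change what rclone skips, but it leaves a record you can audit before deleting anything.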

Not in rclone as it doesn’t support duplicates.

We are looking at your other comment.

Check out

Yes, until the new change that Anymosity pointed out gets to the stable version, you'll have to keep an eye on the output and/or use the dedupe option of rclone.

I suggest you run it with the "--dedupe-mode list" flag and a dry run. Then manually check the duplicates (unless there are too many of them) and copy them manually.
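Concretely, that would look something like this (the remote name is a placeholder; "list" mode only reports duplicates, and --dry-run is added for extra safety):

```shell
# Report duplicate names without modifying anything on the remote.
rclone dedupe --dedupe-mode list --dry-run remote1:/
```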

It's also a good idea to do a copy, not a move, and delete afterwards once you are positive everything was copied.
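The copy-verify-delete sequence could be sketched like this (paths and remote names are illustrative):

```shell
# Copy first, leaving the source untouched.
rclone copy remote1:/some/folder remote2:/some/folder -v

# Verify that source and destination match (sizes/hashes).
rclone check remote1:/some/folder remote2:/some/folder

# Only delete the source once the check passes cleanly.
rclone purge remote1:/some/folder
```

This way nothing on the source is removed until the destination is confirmed complete.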

Hope it helps.

I am not completely sure what you guys are working on, but would like to make a suggestion.

rclone right now understands that FolderName and FolderName(1) are duplicate folder names.

So my idea would be this flag.


so if I ran the command like this

rclone move remote1:/ remote2:/ --check-dupes

rclone would then check the duplicate files and folders, merge them into FolderName if the files inside FolderName(1) have the same name and size as in the destination, and then delete the source.

Is something like this possible with rclone?

I understand we can run dedupe, but in my use case I am dealing with literally thousands of TDs spread across multiple Workspace domains.

Although I love rclone, I do find the dedupe far too slow compared to other options. To be fair, anything using a checksum check will not perform that well on very large collections.

For small collections I usually use an app called "Everything" from voidtools.com, which allows me to quickly see duplicate files, but again, even that has some drawbacks.

For me the above solution would actually be the easiest and fastest available for probably all OSes.

Since I am not a programmer I may be missing what is possible in coding!

I'd suggest checking the other thread and post there if you have comments on it.

I have just found out that it only detects duplicate file names.

It doesn't detect duplicate folder names like the ones in the example by @InfoR3aper.

It also doesn't detect when a file and folder have the same name (in the same directory) which is also possible on all the bucket/object based remotes (e.g. S3).


Good catch! Thanks for letting me know.

I guess that, with the new rclone change from a notice to an error when a duplicate is found, the risk of data loss is reduced, since the person running rclone will be forced to examine what is going on?

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.