GDrive: moving from multiple drives simultaneously, are collisions possible?

I did not post the usual support template here, as this is a general question.

Say you have a bunch of Drives with a common folder on each.

Now each drive "may" have folders/files which are exactly the same.

From my understanding, when you run the move command rclone first compares the source and destination, then starts its move process.

So in my example commands below, if I execute each command line in a different window, each will start its own move process. The question I have is whether this can cause file collisions, since I am running all the commands simultaneously rather than in sequence.

Here would be the example command lines run all in different windows

rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive1:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive2:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive3:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive4:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive5:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive6:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive7:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive8:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
rclone -vP --transfers=40 --include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} move Drive9:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false
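For reference, the nine near-identical command lines above can be generated with a small loop rather than maintained by hand. This is just a sketch using the remote names from the commands above; it echoes the commands instead of executing them:

```shell
#!/bin/sh
# Sketch: generate the nine move command lines with a loop instead of
# maintaining nine near-identical lines by hand. Echoed only, not executed.
cmds=$(for i in 1 2 3 4 5 6 7 8 9; do
  echo "rclone -vP --transfers=40" \
       "--include=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts}" \
       "move Drive${i}:/TV.SS/ TV-Master:/00.SS/ --drive-use-trash=false"
done)
printf '%s\n' "$cmds"
```

Removing the outer echo (or piping each line to a shell) would run the moves sequentially instead of in parallel windows.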

I would not try it with move... maybe it will work. But if not, you are done, as the files will be gone from the source.

If you have to use move what I would do:

rclone move src: dst: --include "[a-k]*" --include "[A-K]*"
rclone move src: dst: --include "[l-z]*" --include "[L-Z]*"

Every command deals with different files; they do not overlap.

and at the end one last

rclone move src: dst:

to pick up any odd remainders.
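The partition idea above can be scripted as one loop per disjoint prefix range. A sketch, with src: and dst: as placeholders, as in the commands above; it echoes the commands rather than running them:

```shell
#!/bin/sh
# Sketch: one rclone move per disjoint first-letter range, so no two
# commands ever compete for the same file. Echoed only, not executed.
out=$(
  for r in a-k l-z; do
    # Derive the upper-case range from the lower-case one (a-k -> A-K).
    R=$(printf '%s' "$r" | tr 'a-z' 'A-Z')
    echo "rclone move src: dst: --include \"[$r]*\" --include \"[$R]*\""
  done
  # Final sequential pass to pick up any remainders.
  echo "rclone move src: dst:"
)
printf '%s\n' "$out"
```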

Appreciate the input

Your example will not work for me, as it partitions by file names, which is of no use in my case.

My process is about moving only video files and leaving the JUNK in place to be purged.

I do move directories at times using your example, but in this use case it would not do what I wanted.

This is why I am asking specifically about running the commands simultaneously.

I know that if I run them in sequence it would do exactly what I wanted.

I am just trying to see if I can speed up the process by doing them at the same time without causing issues.

I am not sure if rclone also checks as it goes, or if the source/destination comparison is made only at the start and that is all it goes by.

Meaning: is the move command static or dynamic?


Whichever it is, your way will waste a lot of time trying to move the same files in different command instances. Even if in theory it should not cause issues, you are pushing the boundaries here. Who knows how it will behave when 10 moves compete for the same file.

You can try :) My way is to play it safe and fast: give every command a different job to do, and keep them from attempting the same work.

I am confused, as what you are suggesting is not really any different from my original commands.

In my example rclone still creates the folder if it does not exist; if it already exists, it should then be checking whether the filename from the source is already in the destination.

So I am not clear what benefit this provides.


My main question is not about running a single command or looping through the commands.

I am trying to clarify whether, given many sources where each source MIGHT contain a duplicate, running all 10 commands against 10 different sources at one time will cause an issue.

For example, maybe I have 10 sources, where each source might have only 1 unique file, but once everything is put into the destination, the folder holds 10 unique files.

I know if I run the command in a loop on 10 different sources it will do exactly what I want.

But what if I run all 9 or 10 or more sources against a single destination at the same time?

So you risk that all 10 commands will start moving the same source file to the same remote directory, since it does not yet exist on the destination.

What happens next depends on how your remote handles it. Does the fastest win? Will the file be created 10 times, each overwriting the faster version? Or will it create 10 duplicates? I cannot tell.

Ok I get what you are saying now

You are using gdrive, so it can create duplicates, as gdrive does not enforce unique file names. Of course you can dedupe later. But more importantly, your commands potentially perform the same overlapping transfers for no reason, wasting time and API calls.
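Since Google Drive permits duplicate names, any duplicates created by competing moves can be cleaned up afterwards with rclone's dedupe command. A sketch, echoed rather than executed; the destination path is the one from the commands above, and --dedupe-mode newest is just one possible policy:

```shell
#!/bin/sh
# Sketch: inspect duplicates first, then remove them with a chosen policy.
# Echoed only; drop the echoes (and --dry-run, once happy) to actually run.
dedupe_cmds=$(
  echo "rclone dedupe --dedupe-mode list TV-Master:/00.SS/"
  echo "rclone dedupe --dedupe-mode newest TV-Master:/00.SS/ --dry-run"
)
printf '%s\n' "$dedupe_cmds"
```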

Wouldn't it be easier to delete only the junk, using --exclude=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts}?

rclone lsf src: --files-only --format p --recursive --exclude=*.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts} > fileList
rclone delete src: --files-from fileList

Not in this use case.

I have hundreds of drives containing similar content, so the whole process is about moving the content from all the different drives into a single TD where all the folders go with only videos in them.

I am not exactly sure WHY, but a few days ago I ran commands similar to the above and rclone was able to move 900,000 files onto a single TD.

From that experience, it leads me to believe that although rclone says a file is moved, it only means that Google has started the move process and is handling it in the background.

Otherwise rclone should have been throwing over-quota errors like crazy.

I knew I could do this, but never realized that using the rclone command in the right batch setup would also do it.

Also, the source folders can have as much as 10 times as many files that are NOT videos, so doing it the way you describe above would literally cause 10 times the API calls for the same process.

Plus, I would still have the issue of merging the contents of all the drives into one.

So moving only the video files is literally the simplest and fastest approach, and requires the fewest API calls to complete the "merge" process, where the final content contains only the videos.

Your idea of moving via folder prefixes plus video includes does, however, allow me to run 36 different moves at the same time, knowing there would be no conflicts.

Basically, by changing the starting directories via 0-9 and a-z, and using Excel to vary where each command starts, it is very easy to generate the batch command lines for all 36 windows to run at one time.
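The 36 per-prefix command lines described above can also be generated with a short loop instead of a spreadsheet. A sketch only: the remote names and paths here are illustrative placeholders, the include pattern is one plausible way to combine a folder-prefix filter with the video extensions, and the commands are echoed rather than run:

```shell
#!/bin/sh
# Sketch: generate the 36 per-prefix command lines (0-9, a-z), each moving
# only video files from folders starting with that character. Echoed only;
# "Drive:" and "TV-Master:" are illustrative placeholder remotes.
n=0
for c in 0 1 2 3 4 5 6 7 8 9 a b c d e f g h i j k l m n o p q r s t u v w x y z; do
  echo "rclone -vP move Drive:/TV.SS/ TV-Master:/00.SS/" \
       "--include \"/${c}*/**.{mkv,mp4,m2ts,avi,m4v,mpeg,MKV,MP4,ts}\""
  n=$((n+1))
done
```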

Most things I do with rclone are in massive batches, and although GDrive allows dupes, rclone does not create them.

There are times I literally have 160 windows each running a different batch file for rclone to auto move/copy stuff around.


Yes, in some cases the Excel method beats overcomplicated scripts. I use it quite often myself to generate batch-processing scripts.

I am not a coder, so scripting is not something I have time to learn.

The thing I love about using Excel for everything I do with rclone is that I have a history of every command I have ever run, going back 4 years.

rclone and Excel are the perfect combination, especially since it lets me track everything I have used rclone for in the past.

Also, for the complicated moves I might forget what I did, so having it all in Excel makes it easy to find the oddball, complicated things I have done with rclone.

Also, I never use any of the rclone spin-off tools; I tested a great many and found that ALL of those tools can easily allow duplicate folders/files to be created.

rclone is the only tool I have ever found that prevents this!


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.