Broad question: Moving a large number of single files from multiple locations to multiple locations

I wasn't sure what to tag this question with; it's not strictly a help and support question per se.

I was just wondering whether there is a better way of copying/moving a large number of single files from multiple locations to multiple locations that doesn't involve a separate rclone copy or rclone move command for each individual file. That works, but it's obviously slow. I don't particularly mind the slowness; I was just wondering if there was a faster way.
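For illustration, the per-file approach I mean looks like this (the remote name and paths here are just made-up examples):

# one command per file, each with its own destination
rclone move gdrive:folderA/report.pdf gdrive:archive/2023/
rclone move gdrive:folderB/notes.txt gdrive:archive/misc/
# ...and so on, hundreds of times

Each command works fine on its own; it's just a lot of separate invocations.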

Also, I didn't want to use rclone rcd, purely because of what happened in a test run: I moved one file that happened to already exist at the destination (I wasn't sure if the hashes matched), and instead of creating a duplicate as an rclone move command does, it just deleted the source file. I would have preferred a duplicate to be created, so I could then do an interactive dedupe later.

@ncw I wish the Remote Control / API operations/movefile worked identically to rclone move, creating a duplicate if the file already exists in the destination rather than just deleting the source.

I am guessing you are asking about Google Drive? It's slow at creating small files, as you can only do 2-3 per second, and you can't tune around that.

rclone move and rclone movefile through rcd should behave the same way as plain rclone. If not, you'd have to share your version, a debug log, etc.

Yes, Google Drive. Ah OK, I will just leave it as multiple copy commands then.

The problem with that is there is no --dry-run for operations/movefile, so every time I do a test and a file with the same name exists, it deletes the file on the remote that was to be moved. If there were a --dry-run option, it would be easier to provide an example.

If you have a repeatable bug, please share the steps, version, debug log, etc.

When I need to back up a large number of small files to AWS Deep Glacier, I sometimes do the following, since running rclone sync on a schedule means all those API calls get very expensive over time.

--- run 7z on the source
--- rclone copy backup.7z remote:
--- if needed, decompress remote:backup.7z
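As a rough sketch, with made-up paths:

# pack the small files into a single archive
7z a backup.7z /path/to/source
# upload one large file instead of many small ones
rclone copy backup.7z remote:
# later, if needed, fetch and unpack it
rclone copy remote:backup.7z /tmp/restore
7z x /tmp/restore/backup.7z -o/tmp/restore/unpacked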

Thanks for the reply.

Sadly I'm not able to do that, as the files are in Google Drive, and I want to move several files from several locations (but not all the files in each location) to multiple different locations.

The way to do this efficiently is with rclone copy or rclone move, using the filtering system to include only the files you want to copy.

For example, you might have a list of filenames you could pass to --files-from, or a list of directories which you'd put in an include file like this:

/dir1/**
/sub/dir2/**
/sub/dir3/**

And then include that with --include-from /path/to/includefile
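For example (the remote and paths are placeholders):

# move only the listed directory trees, preserving their hierarchy
rclone move gdrive:src gdrive:dst --include-from /path/to/includefile
# or move an explicit list of files instead
rclone move gdrive:src gdrive:dst --files-from /path/to/files.txt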

I have done that in the past, but unfortunately that wouldn't work this time, as all the files in the --files-from list would be moved to the same folder (wouldn't they?), which isn't what I'm after.

On a side note, maybe it would be a good idea to have both a --files-from and a --files-to: both files would have the same number of lines (which rclone could check?), and it would do the copies/moves line by line.
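In the meantime, something like that pairing can be scripted around rclone moveto, which moves a single file to an exact destination path. A rough sketch, assuming one path per line, no tabs in the paths, and that the two (hypothetical) list files have matching line counts:

#!/bin/bash
# pair up line N of each list and move each file individually
paste files-from.txt files-to.txt | while IFS=$'\t' read -r src dst; do
    rclone moveto "gdrive:$src" "gdrive:$dst"
done

This still runs one rclone invocation per file, though, so it is no faster than issuing the commands by hand.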

No, they will be moved to the folder hierarchy that they are in at the moment. So if a file is in a subdirectory a, it will be moved to a subdirectory a in the destination.

You can do this fairly easily with the API: use rclone rcd and then call rclone rc operations/movefile with the params in the docs.

This is more efficient than running rclone lots of times, but less efficient than using --files-from or similar.
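A minimal sketch (the remote name and paths are hypothetical, and --rc-no-auth is just for local testing):

# start the remote control daemon in one terminal
rclone rcd --rc-no-auth
# then issue one move per file from another terminal
rclone rc operations/movefile srcFs=gdrive: srcRemote=dirA/file1.txt dstFs=gdrive: dstRemote=dest1/file1.txt

This avoids paying the startup cost of a new rclone process for every file.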

Oh, I didn't realise that. Still not what I was wanting, however, as I want to move a large number of small files to folders different from the ones they are currently in.

The problem with that is that if a file already exists with the same name and same checksum, the file to be moved isn't moved; it is just deleted instead.

I'd have preferred a duplicate file to be created in Google Drive, and then I could do a dedupe later to determine which ones I wanted to keep.
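For reference, the interactive dedupe I have in mind is just (remote name assumed):

rclone dedupe interactive gdrive: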
