I wasn't sure what to tag this question with; it's not strictly a help and support question per se.
I was just wondering whether there is a better way of copying/moving a large number of individual files from multiple locations to multiple locations, one that doesn't involve a separate rclone copy or rclone move command for each individual file. That works, but it's obviously slow. I don't particularly mind the slowness; I was just wondering if there was a better, faster way.
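At the moment I'm just running one command per file, along these lines (made-up example paths, and each invocation pays the full rclone start-up cost):

rclone move "drive:folderA/file1.ext" "drive:folderX/"
rclone move "drive:folderB/file2.ext" "drive:folderY/"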
Also, I didn't want to use rclone rcd, purely because when I did a test run moving one file that happened to already exist at the destination (though I wasn't sure whether the hashes matched), it just deleted the source file rather than creating a duplicate the way an rclone move command does. I would have preferred the duplicate, so I could do an interactive dedupe later.
@ncw I wish the Remote Control / API operations/movefile worked identically to rclone move, i.e. creating a duplicate if the file already exists in the destination rather than just deleting the source.
Yes, Google Drive. Ah OK, I will just leave it as multiple copy commands then.
The problem with that is there is no --dry-run for operations/movefile, so every time I do a test and a file with the same name already exists, it deletes the file on the remote that was supposed to be moved. If there were a --dry-run option it would be easier to provide an example.
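For now I'm testing with a guard like this so nothing gets deleted. It's only a rough workaround sketch: drive:, folderA/file.txt and folderB/file.txt are made-up example paths, and rclone lsjson (which exits non-zero when a path doesn't exist) stands in for the missing dry-run existence check.

# only move if the destination doesn't already exist
if rclone lsjson "drive:folderB/file.txt" >/dev/null 2>&1; then
    echo "destination exists - skipping so the source doesn't get deleted"
else
    rclone rc operations/movefile srcFs=drive: srcRemote=folderA/file.txt dstFs=drive: dstRemote=folderB/file.txt
fi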
When I need to back up a large number of small files to AWS Deep Glacier, I sometimes do the following, since running rclone sync on a schedule means all those API calls get very expensive over time.
--- run 7z on the source
--- rclone copy backup.7z remote:
--- if needed, decompress remote:backup.7z
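In concrete terms that looks something like this (archive name and paths are just examples):

# compress the source tree into a single archive
7z a backup.7z /path/to/source
# upload one big object instead of many small ones
rclone copy backup.7z remote:
# later, if a restore is needed, pull it back and unpack
rclone copy remote:backup.7z .
7z x backup.7z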
Sadly I'm not able to do that, as the files are in Google Drive, and I want to move several files from several locations (but not all the files in each location) to multiple different destinations.
I have done that in the past, but unfortunately it wouldn't work this time, as all the files in the --files-from list would be moved to the same folder (I think?), which isn't what I want.
On a side note, maybe it would be a great idea if there were both a --files-from and a --files-to: the two files would have the same number of lines (perhaps rclone could check this?), and the copies/moves would pair them up line by line, as in the sketch below.
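Something like this can fake it today. To be clear, --files-to doesn't exist; files-from.txt and files-to.txt are hypothetical paired lists with one path per line, and each pair is handled by rclone moveto:

# pair line N of files-from.txt with line N of files-to.txt and move each
# source path to its matching destination path (paste joins the lines with a tab)
paste files-from.txt files-to.txt | while IFS=$'\t' read -r src dst; do
    rclone moveto "drive:$src" "drive:$dst"
done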
No, they will be moved keeping the folder hierarchy they are in at the moment, so if a file is in a subdirectory a it will be moved to a subdirectory a.
You can do this fairly easily with the API: use rclone rcd and then call rclone rc operations/movefile with the params in the docs.
This is more efficient than running rclone lots of times, but less efficient than using --files-from or similar.
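A minimal sketch of that (remote name and paths are placeholders; the daemon listens on localhost:5572 by default):

# start the remote control daemon once (auth disabled here for local testing only)
rclone rcd --rc-no-auth &
# then issue one lightweight API call per file instead of a full rclone start-up each time
rclone rc operations/movefile srcFs=drive: srcRemote=folderA/file.txt dstFs=drive: dstRemote=folderB/file.txt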
Oh, I didn't realise that. It's still not what I was after though, as I want to move a large number of small files into different folders from the ones they are currently in.
The problem with that is that if a file already exists with the same name and the same checksum, the file to be moved isn't moved; it is just deleted instead.
I'd have preferred a duplicate file to be created in Google Drive, so that I could do a dedupe later to determine which ones I wanted to keep.