Move files from inside subfolders without creating the folder in destination?

So here's my problem

I have several files inside a set of subfolders.
Whenever I try to do

rclone move remote:folder1 remote:folder2 --include=*.jpg -P

it recreates all the subfolders inside folder1 again in folder2.
Basically I want to take all the files inside the subfolders and put them in the parent folder.

Is it possible with rclone move?

if i understand what you want, then i know of two ways

rclone lsf -R --files-only  --include=*.jpg remote:folder1  > thedirs.txt
for /f %%u in (thedirs.txt) do rclone copy remote:folder1/%%u remote:folder2

you could do a rclone mount remote:
then use file manager to select all files in a flat view of folder1
use file manager to move those files to folder2
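If you want to prototype the list-then-copy idea first, here is a rough local sketch of it in Python (hypothetical local paths only; rclone itself is not involved) that flattens a tree into one directory:

```python
import shutil
from pathlib import Path

def flatten(src: str, dst: str, pattern: str = "*.jpg") -> list:
    """Copy every file matching `pattern` anywhere under src into the
    top level of dst, dropping the subfolder structure."""
    dst_dir = Path(dst)
    dst_dir.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in Path(src).rglob(pattern):
        if f.is_file():
            # using only f.name discards the subfolder part of the path
            shutil.copy2(f, dst_dir / f.name)
            copied.append(f.name)
    return copied
```

Note that two files with the same name in different subfolders would overwrite each other here, which is exactly the duplicate problem discussed further down the thread.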

That's what I'm doing, but it's not reliable.

I'm using Windows Explorer to show the files, Ctrl+C and Ctrl+V.
Slow, no checks... a bad solution.

This is all on an SMB share from an rclone mount on my Unraid server.

The cmd command you made might be the best solution.

sure, if you are using windows explorer, that is not reliable. i have not used it in many years.
i use double commander and that works well.

but the cmd is the way to go

cmd is working

scratch all that.

For folders that have spaces in the name, it only uses the first word, even when using "remote:folder1/%%u".

EDIT: Solved it.
I list the files into a txt, open it with Notepad++ and add a " at the end of every line (easily done with a regex replace-all of $ with ").
Run it again like this (the closing quote comes from the " appended to each line):

for /f "usebackq delims=" %%u in (thedirs.txt) do rclone copy "remote:folder1/%%u remote:folder2 -P
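If scripting is an option, an alternative that sidesteps shell quoting entirely is to build each rclone invocation as a list of arguments, so paths with spaces never pass through the shell. A minimal Python sketch, assuming the same remote and listing file names as above:

```python
import subprocess

def copy_flat(listing_file: str, src_root: str, dst: str) -> list:
    """Build one `rclone copy` command per line of the listing file.
    Paths with spaces need no quoting because each path is passed to
    rclone as its own argument, never interpolated into a shell string."""
    cmds = []
    with open(listing_file, encoding="utf-8") as fh:
        for line in fh:
            rel = line.rstrip("\n")
            if rel:
                cmds.append(["rclone", "copy", f"{src_root}/{rel}", dst, "-P"])
    return cmds

# To actually run the commands:
# for cmd in copy_flat("thedirs.txt", "remote:folder1", "remote:folder2"):
#     subprocess.run(cmd, check=True)
```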

Not currently. Rclone would need a --flatten flag for this.

I've had about 10 requests for this recently so I'll put it on the TODO list!

can you share what you have in mind?

--flatten would mean copy all the files to the root of the destination rather than in their hierarchy. It would be surprisingly useful at times, but a bit of thought is needed as to how to handle files with the same name.

sure, i have used the concept many times.
most file managers have that.

duplicate files would be a problem.
guess a new set of flags would be needed.

--flatten-rename perhaps with --suffix
--flatten-allow-duplicates - for storage systems like gdrive

and how to handle that in the logs

I'd probably do a --flatten-mode flag with the default of do-not-overwrite
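For what it's worth, the name-collision handling could be sketched like this (the mode names are hypothetical, since the flag doesn't exist yet); given a candidate filename and the set of names already used in the destination, pick what to write, or skip:

```python
from pathlib import PurePosixPath

def flatten_name(name, taken, mode="do-not-overwrite", suffix="-1"):
    """Choose a destination name for `name` given names already `taken`.
    do-not-overwrite: skip the file on a clash (return None).
    rename: append `suffix` before the extension until the name is free."""
    if name not in taken:
        taken.add(name)
        return name
    if mode == "do-not-overwrite":
        return None
    if mode == "rename":
        candidate = name
        while candidate in taken:
            p = PurePosixPath(candidate)
            candidate = p.stem + suffix + p.suffix
        taken.add(candidate)
        return candidate
    raise ValueError(f"unknown mode: {mode}")
```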

that's good, if you do a beta, i will test it.

It would be incredibly useful.

Depending on the software, it takes 30+ minutes to scan 2000 folders within gdrive.
Windows Explorer with an rclone mount can take up to 10 seconds going into a folder and back to the previous one, depending on how many files you have.

If you just drop everything into a single folder this is way faster, with fewer API calls to Google.

This folder with 2000 files can be opened in less than 10 seconds.

So, this is really useful.

Any news on this?
This would be really handy right now.

You can follow this issue:

Nice thank you Animosity, I'll keep an eye on that.

It is on the list - all my spare cycles are going into a VFS refresh at the moment though!

thanks, the merging of the cache remote into vfs, that is much needed by many.
let us know about any beta we can test?

the flatten can be done with a few lines of script for now.

Sure will! The end of the tunnel is in sight now I think (or is it the lights of an oncoming train...?).
