How do I move files from the root of a Google Drive into a subdirectory?

Okay, so I've got my googledrive: remote.
I've made a subfolder, googledrive:crypt.
I've allowed Google Drive to do one of its silly file transfers from a closed account, and it's now vomiting all the files, without any directories, into googledrive:
So I made a new subfolder, googledrive:subfolder.

But if I try rclone move --max-depth 1 --exclude "*crypt/**" googledrive: googledrive:subfolder
it gives an error, of course it does: can't move files on overlapping remotes.

I then made an alias called subfolderalias: and tried to do the same thing, same error.

So what do I do? I can’t make googledrive stop vomiting files rudely into root, and I can only manually copy 100 at a time, and there’s gonna be like 100,000.

My current plan is to use copy, then check, then delete, but this is a bit clumsy; ideally I'd like a command that will delete only after performing a check, just like rclone move does. Right now the server-side copies are going fine, but I'm not sure when this vomiting behavior will stop, if it ever will. There will never be a time when I can copy, check, and delete without worrying that a new file will appear between checking all 100,000 files and starting to delete all 100,000 files. Is there some sort of command flag for rclone check that will delete the source file after the check is finished?

Something like rclone copy --checksum --delete-after <what I'm doing here>
would be really nice, except that flag seems to a) only delete from the destination, not the source, and b) be worded like it only works with rclone sync, not rclone copy?

In theory I could run rclone check --exclude "*crypt/**" --delete-after googledrive:subfolder googledrive:
but… I have a feeling this would delete the files regardless of whether the check is successful? Or it might just not work at all?
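Written out, the clumsy plan would be something like this (same remote, subfolder and exclude as above; as far as I know there's no flag that deletes the source only after a successful check, so the last step stays manual and I'd only run it if the check comes back clean):

rclone copy --checksum --max-depth 1 --exclude "*crypt/**" googledrive: googledrive:subfolder
rclone check --max-depth 1 --exclude "*crypt/**" googledrive: googledrive:subfolder
# only if the check reports no differences; rclone delete removes files, not
# directories, and --max-depth 1 keeps it to the loose files in the root
rclone delete --max-depth 1 --exclude "*crypt/**" googledrive: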

This is a bit unfortunate!

You can use rclone moveto to move the individual files, but if you’ve got 1000s that will take ages.

You might find it easiest to sort out with rclone mount and a bit of command line stuff. What you need to do is rename the files, which rclone mount supports very well.
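Roughly like this (assuming a Linux-style shell and an empty directory to mount on; the ~/gdrive path and the file name are just examples):

rclone mount googledrive: ~/gdrive
# then, from another terminal: a rename inside the mount stays on the same
# remote, so moving a loose file into the subfolder should happen server-side
mv ~/gdrive/example-loose-file.bin ~/gdrive/subfolder/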

So is this the correct syntax? You're right, this should work, just doing each file one at a time; I'm just so bad at nesting my find arguments like this.

find . -maxdepth 1 -type f -exec rclone moveto --checksum {} googledrive:subfolder \;
or does it have to be?
find . -maxdepth 1 -type f -exec rclone moveto --checksum "{}" "googledrive:subfolder" \;
or am I totally getting the syntax wrong?

maybe even
find . -maxdepth 1 -type f -exec rclone moveto --checksum --exclude "*crypt/**" {} googledrive:subfolder \;
although the maxdepth 1 should render that redundant…

edit: note the \; at the end of each -exec; I originally put a backslash before the ; but the forum kept editing it out…
Oh, also, using find on googledrive would of course require mounting it first, so maybe that's not feasible; I could just wait a week and see if the vomit stops.
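If I did mount it, I think the command would end up looking something like this (assuming a mountpoint of ~/gdrive, which is made up, and using plain mv on the mount instead of rclone moveto, since a rename within the mount stays on the same remote):

cd ~/gdrive
# move every file sitting directly in the root into the subfolder;
# with -exec, {} doesn't need extra quoting and the terminator is a literal \;
find . -maxdepth 1 -type f -exec mv {} subfolder/ \;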

Personally, I would just use the GD Web Interface and drag and drop the folders in there if I had to move that many files.

If you have encrypted dirs, you can use the --crypt-show-mapping flag to figure out what's what.
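Something along these lines (assuming a crypt remote called googledrivecrypt: layered over googledrive:crypt; the name is just an example):

rclone ls googledrivecrypt: --crypt-show-mapping -v
# each file that gets listed also gets an INFO log line showing the decrypted
# name alongside the encrypted name it maps to on the underlying remote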

The GD web interface returns an error if I select more than 100-200 files to move at a time (the little blue number of selected files changes to red, and then nothing happens at all when I drag and drop; do you know some way to cure this?). That would mean I'd have to click literally thousands of times to accomplish the task. In theory the correctly formatted find command will do it automatically. I'd have to mount it too, though, and I'm not sure I can figure out both mount and find that well right now without help.

What you're missing is that there are no folders to drag and drop, just loose files. It's literally dumping all the files into the googledrive root, even if they were originally inside 4 or 5 subfolders; on top of that, it's sometimes making duplicates of files too.

edit:
I tried letting it move 1,000 files at a time, even though the number turned red. Chrome itself froze, said the page was unresponsive, and I clicked Wait repeatedly to try to let it finish. If you've ever successfully moved 1,000 loose files at once with the Google Drive web interface, please write back and I'll try it again. This was one way in which the ACD web interface may actually have been better than GD's!

edit2: somehow my Google Drive's total space used has gone down by several hundred GB during this process.
I'll explain what I told the GD web interface to do. I told it to restore the already-deleted-forever files to my secondary gsuites user account. It seems to be restoring them from nowhere into my secondary gsuites user account; the actions are being logged as caused by the primary gsuites account, but for some reason it's also vomiting all these files, loose, into the primary account's root directory. These files are in theory already backed up into primarygsuitesaccount:crypt:subfolders, but I wanted to test this undelete and run another check. I hope it's not dismantling my entire directory structure, which is the only explanation I could think of for an undelete to LOWER disk space usage!

edit3: googledrive doesn't have any automatic deduplication, does it? I'm really getting scared that somehow I've given googledrive a command to undelete folder1 on user2, and it's somehow dismantling the copy of folder1 on user1 into loose files with no structure :frowning: and at a rate of about 10,000-20,000 files a day. Why on earth are the web interfaces for all these services so unreliable compared to rclone, and why have I ever been so lazy as to click a button on one of them :frowning:

edit4: when selecting these files, it turns out they have two locations: their loose vomit location, and the name of the real directory they should be in, but if I click that directory I don't have access to it, presumably because it doesn't exist anymore. Similarly, if I look at user1's activity log it will list a couple dozen files as having been moved by me to the trash. If I click the magnifying glass to go to the folder the file is in, it tells me the location doesn't exist, yet it will let me select the file from the recent activity panel. Weirder still, logged in as user2 it shows the same recent activity, that user1 has moved something to the trash. This evening, though, user1's drive disk space used has gone from 23.5 to 23.4TB, so I'm worried that somehow going to user2 and clicking "undelete all permanently deleted files" is just deciding to go to user1 and delete them there, presumably with the goal of making them appear in user2's trash folder, yet also somehow randomly appearing loose and unsorted in user1's root directory, with dual linked directories pointing to the directory they should be in, only that link leads nowhere and the directory doesn't exist… I should note that before I deleted all 8TB of files from user2 I used rclone to download and move (not server-side move) those files from user2:crypt to user1:crypt… so it seems insane and impossible that Google has identified those files and is working to undo the move, considering the files were all downloaded and re-uploaded separately.
I've now got 29,000 loose unsorted files, totalling 250GB, in user1's googledrive root location, because of course I do. Yet somehow 250GB of new files appearing has made the overall Google Drive disk space used go DOWN by about 250GB (I'd have assumed up); then again, maybe that disk space usage is counting against user2, not user1? (Keeping in mind the actual backup is only around 21TB, and the other disk usage was likely in the trash can from deleted partial transfers and such.) This is such a nightmare of paranoia that I never want to use googledrive's website ever again for anything, and the worst part is, I didn't even need to perform the undelete. An rclone check command verified 100% of everything was 100% fine; I just wanted to see if it would work. I had assumed it would only affect user2, and not user1 at all.

This is important to others only because it means that rclone move, when it detects that a server-side move is possible, will create the same folder-linking nonsense I've encountered here, and I'm not sure rclone should even allow a user to perform a Google Drive server-side move, so unreliable is Google Drive's own server-side move.

edit5: in a randomly selected directory I confirmed 57090 out of 57090 files still exist, so that's ever so slightly reassuring. Maybe moving new files into the trash is just removing old files from the trash (which happened to be larger files), and I absolutely don't care about my trash at all, since, again, before I did anything silly on GD's website I confirmed with rclone check that user1 had a complete copy of all my files.

edit6: over the course of the past 20 hours, the loose nonsense files have gone from 43,000 to 45,000 files counted. My new theory is that in order to undelete things back into user2's trash bin, the files are being loosely dumped into user1's root directory and then copied directly into user2's trash, but it's also doing a bugged moveto where it actually just creates a folder link. However, Google Drive trash bins don't support a proper folder structure, so essentially Google Drive is creating hundreds of thousands of symbolic-link-style entries from unsorted loose files in one root into non-existent directories in another user's trash bin… IMO this is a horrific bug, it'll be continuing for weeks if not months, and it'll probably never do anything successful or useful at all. As far as I can tell, though, user1:crypt is totally undamaged by the massive number of new pointless files in user1's root directory :frowning: Also, of course, there's no progress bar and no way to cancel this operation. I could delete user2's existence entirely, which would save me $8, but I'm worried that would make this bug even worse, even though it might also fix it. The weirdest part is that user2's disk space usage isn't changing and never has (not when the files were deleted, and not now that they're being undeleted).

However, the fact that the disk usage on user1's account has been going up and down supports my theories above. It's gone from 23.5TB to 23.4TB to 23.8TB to 23.7TB, which suggests that user1 is being given new files (not by me) and then losing those new files (presumably towards user2). So I think it's time to cover my eyes with my hands, ignore this entirely, leave Google Drive's website alone for a month or so, and pray.

edit7: so, my core crypt folder seems to be fine, unharmed, but these loose files are still appearing… and they're appearing VERY slowly, 1,000-2,000 files a day, with a goal somewhere around 100,000-500,000. So, yeah, I think I'm just going to try deleting user2 and praying that doesn't generate an infinite loop, because I don't want this pointless process to keep running for months on end, forcing me to pay for 2 users when really I only need 1 (unless Google forces me to upgrade to 5).

It's a weird estimate, though, because for the first day or so it went through 10,000+ files in that fashion. Now that it's slowed down I could probably move them manually, but I've begun to suspect that moving them manually was breaking whatever process it thought it was doing. I guess I'll wait a couple of days and then try deleting the user2 account; I'm definitely afraid that doing so could make this process an infinite loop, though.

So I’m still having trouble with this. The automatic process seems to have stopped a few days ago. So I’m trying to get rid of these files now.

I’m using this command.
D:\rclone-v1.41>rclone -vv --fast-list copy --max-depth 1 --exclude "*crypt/**" --exclude "*crypt3/**" --exclude "failed.gd3.copy" --exclude "*failed.gd3.copy/**" "googledrive:" "failedgd3copy:"

However, I get endless "user rate limit exceeded" messages. My reasoning is that the excludes aren't doing enough work to avoid TPS requests: /crypt3/ has 2 million files in it, and even though it's excluded I think rclone is going over each individual file. I'm not sure this command will ever work. I might have to move or delete the 50,000 loose files in my root directory manually, 50-200 files at a time, using the web interface.

Now I know users are warned not to ever place files in the root directory of their remote, but I didn’t put these files here, googledrive did it automatically for me. It would be nice if rclone had a solution for the problem of moving or deleting files out of the root of a remote server instead of simply giving up on the issue.
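Another shape I could try is to whitelist just the root-level files instead of excluding every big directory by name; a sketch (I'm not sure it actually changes the amount of API traffic, but an anchored include like this should only match files sitting directly in the root):

rclone copy -vv --max-depth 1 --include "/*" "googledrive:" "failedgd3copy:"
# a leading "/" anchors the pattern to the root of the remote and "*" never
# matches "/", so only the loose root files are included; when an --include
# rule is present everything else is implicitly excluded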

I tried --transfers 2 and it didn’t help, I guess I’ll try --tpslimit 1 next

In case it isn't clear, the goal with the above command is to copy the files, then check them, then delete them. I'm using copy instead of move to avoid the server-side move issues where server-side moves create a symlink-style link.
edit: yeah, --tpslimit didn't help at all; I seem to be banned from making server-side copies using this command, despite the fact that normal rclone crypt uploads still work fine…

There is some kind of daily quota on server side copies I think - I expect you’ve hit that. Maybe 100 GB?

Did you try my suggestion of using rclone mount to move the files?

Actually, you're 100% right: it's 100GB, there's only 500GB total, and I'm now at 380GB or so of progress. It seemed random and made no sense, but after the past two days of bans I've run size commands and the progress works out to almost exactly 100GB a day.

I don't know how dumping 500GB of random loose files into user1's root helped restore user2's 8.8TB of files, but the process has at least stopped (perhaps it crashed). So in two days I should have server-side copied all the files, and then I can hopefully run a check, delete the loose files, and then delete user2 before it starts this nonsense again.
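For anyone else watching the same quota, the size checks were just something like this (the flags are roughly the ones I've been using elsewhere in this thread):

rclone size --max-depth 1 --exclude "*crypt/**" "googledrive:"
# total size of the loose files still sitting in the root
rclone size "failedgd3copy:"
# total size that has made it into the destination so far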

Thank heavens it is nearly done :smile: - it has been a voyage by the sound of it!