What is the problem you are having with rclone?
I have a few folders I’m mirroring to a Google Drive remote.
These folders are getting pretty big and hard to deal with just because of the sheer number of files in them. My plan is to write a script that reorganizes the file structure, creating folders and subfolders within them.
There are over 350 topics, and it dumps every hour. So... it grows...
These are a bunch of small files. The initial upload to Google Drive took a long time because of the upload limits associated with Google Drive.
I’m wondering if there’s something I can do that would prevent me from having to upload all the files again: something that would detect that the files were moved to a folder deeper in the structure and issue the equivalent of a mv, or even a server-side move, on the remote.
I can do a mount, I can use a cache remote, and I can fit all the data locally. I could pull the structure down locally, build a list of move commands, and run them, but that seems like it would be slower than reorganizing locally and mirroring again (connections have to be established, plus OS waiting).
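For the move-list approach, one sketch is to generate the rclone commands first and only execute them after review. This assumes a hypothetical remote named "gdrive:" and a hypothetical tab-separated map file of old-path/new-path pairs; within the same Google Drive remote, rclone moveto is performed server side, so files are renamed in place rather than re-uploaded.

```shell
# Sketch: turn a tab-separated mapping (old-path<TAB>new-path) into
# "rclone moveto" commands. "gdrive:" and the map file are assumptions,
# not rclone defaults.
print_moves() {
  # Print the commands instead of executing them, so the batch can be
  # reviewed (and later piped to sh) rather than run blind.
  IFS=$(printf '\t')
  while IFS="$IFS" read -r src dst; do
    printf 'rclone moveto "gdrive:%s" "gdrive:%s"\n' "$src" "$dst"
  done < "$1"
}
```

Usage would be something like `print_moves moves.tsv` to inspect the batch, then `print_moves moves.tsv | sh` once it looks right; adding `--dry-run` to each command is another safety net.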
I’m just looking for ideas and insight as to what would be the most performant way of doing it.
What is your rclone version (output from rclone version)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
MacOS. Have Linux available as well.
Which cloud storage system are you using? (eg Google Drive)