I’m attempting to back up a large set of small files and directories (800,000+ files in 3000+ directories) to an rclone crypt remote on Google Drive on a regular basis. Only a few new files are added each day, so my thought was that I could run the backup daily (or ideally hourly) with the `--max-age` option to transfer only the files modified since the last run.
That works, but my expectation was that with, say, `--max-age 1d`, rclone would only compare directories containing files modified within the last day, and would therefore finish very quickly. However, rclone seems to read every directory on the remote before doing the comparison, even when the `--no-traverse` flag is used. I realize `--no-traverse` doesn’t apply when syncing, but I’ve tried `copy` as well and it still walks every directory on the remote. This takes a very long time when only a couple of files are actually new and need to be transferred. Ideally it should take only a few seconds.
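For reference, this is the kind of scheduled invocation I have in mind (the paths, remote name, and schedule here are just stand-ins for my actual setup):

```shell
# Hypothetical crontab entry: daily copy of files changed in the last day.
# Even with both flags, rclone still lists the whole remote tree first.
0 2 * * * rclone copy --max-age 1d --no-traverse /data/backup-source gdrive-crypt:backup
```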
Is there a more efficient way to do this? I’ve thought about using `find` to copy the new files into a temporary directory and then using `rclone move` to upload them, but that seems kind of messy. I could also mount the remote filesystem and use other tools like rsync, but I’m concerned about the reliability of copying files through the mount.
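To make the `find` workaround concrete, here’s a rough sketch of what I mean. It uses a throwaway source tree so the staging step can be demonstrated anywhere; the real source path and remote name would of course be substituted in:

```shell
# Demo of the find-based staging idea. SRC stands in for the real backup
# source; the remote name below is hypothetical.
SRC="$(mktemp -d)"
STAGING="$(mktemp -d)"

# Fake data: one fresh file and one file last modified two days ago.
mkdir -p "$SRC/sub"
echo new > "$SRC/sub/new.txt"
echo old > "$SRC/old.txt"
touch -d "2 days ago" "$SRC/old.txt"

# Stage files modified within the last day, preserving the directory layout
# (-mtime -1 = modified less than 24 hours ago; cp --parents keeps the tree).
( cd "$SRC" && find . -type f -mtime -1 -exec cp --parents {} "$STAGING" \; )

# The staged tree would then be handed off to rclone, e.g.:
#   rclone move "$STAGING" gdrive-crypt:backup
```

It works, but having a second on-disk copy of the new files just to avoid the remote listing is the part that feels messy to me.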
Is there a better option? Any suggestions would be much appreciated!