Question for you: we have many customers who store images in a shared HTTP directory. They recently disabled directory listings, but we can still generate a file list of all the images they have. What's troubling is that I see this in the documentation:
> If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.
>
> However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven’t changed and won’t need copying then you shouldn’t use --no-traverse.
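For reference, the naive invocation we'd been considering looks something like this (the source path, remote name, and bucket are all placeholders for our actual setup):

```
rclone copy /data/images gcs:customer-bucket/images --no-traverse
```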
Some of these customers have a million-plus files, and we only want to upload new/missing files from local to the Google Cloud Storage remote.
Do you have any suggestions for keeping memory usage low and speeds high while still keeping the files in a somewhat synced state?
Matching on filename might work, or even file size, and I'm not sure whether checksums are supported on the HTTP side of the copy.
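What I'm currently imagining is a sketch like the one below (the list file, paths, remote name, and tuning values are all placeholders/guesses, not anything I've benchmarked):

```
# images.txt: the generated file list, one path per line, relative to the source root
rclone copy /data/images gcs:customer-bucket/images \
  --files-from images.txt \
  --no-traverse \
  --size-only \
  --transfers 16 --checkers 32
```

The thinking is that --files-from limits the run to the listed files, --no-traverse makes rclone check each destination object individually instead of listing the whole bucket, and --size-only sidesteps the checksum question entirely. Does that hold up at a million-plus files, or does --no-traverse become the wrong choice at that scale?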