You can use the --max-age flag to only transfer files which are less than 1h old, say - that could be part of the solution.
If you don't mind sometimes copying the files more than once, you could (say) run this from the crontab once an hour.
rclone copy --max-age 1h10m s3:bucket sftp:server
Note the 10m overlap - decreasing it reduces the chance of copying a file twice but increases the chance of missing a file altogether!
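For example, the crontab entry might look like the following - just a sketch, and the rclone path, log file location and remote names are assumptions you'd adjust for your setup.

# run at the top of every hour; the 1h10m window overlaps the previous run slightly
0 * * * * /usr/bin/rclone copy --max-age 1h10m s3:bucket sftp:server --log-file /var/log/rclone-hourly.log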
If you want to make a list of all files in the bucket then do
rclone lsf --files-only s3:bucket > files
You could script this - say you had an old-files list from your last transfer, then you could run this to discover the new files which have appeared since.
rclone lsf --files-only s3:bucket > files
comm -13 old-files files > new-files
The comm -13 leaves only the lines which are in files but not in old-files, ie the new files. You can then transfer them like this
rclone copy --files-from new-files s3:bucket sftp:path
Then finally you'd do
mv files old-files
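Putting those steps together, a rough sketch of the whole cycle as one script might look like this - untested, and it assumes the same remote names as above plus a working directory of your choosing to keep the file lists in between runs.

#!/bin/sh
set -e
cd /path/to/state-dir                             # wherever files/old-files live between runs (your choice)
touch old-files                                   # first run: treat everything in the bucket as new
rclone lsf --files-only s3:bucket | sort > files  # sort because comm needs sorted input
comm -13 old-files files > new-files              # lines only in the new listing, ie the new files
rclone copy --files-from new-files s3:bucket sftp:path
mv files old-files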
Not currently. If you search the forum and issues you'll see discussion of the --flatten flag, which is what you'd need.
You'd probably need to fix that up in the copy phase - stealing @calisro's shell script:
while read -r f; do
    # flatten: copy the file into the root of the sftp remote under its basename
    rclone copyto "s3:bucket/${f}" "sftp:$(basename "$f")" -vv
done < new-files
Note that it would be more efficient not to stop and start rclone lots of times, as it will redo the sftp negotiation each time, so using rclone rcd and rclone rc operations/copyfile would be better - but that can be for phase 2!
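To give an idea of what phase 2 might look like - a rough sketch only, using the same remotes as above; check the rc docs for the auth options that suit you (--rc-no-auth here assumes a local, trusted machine).

# start the rclone daemon once (in another terminal, or as a service)
rclone rcd --rc-no-auth

# each copy is then an rc call against the running daemon, so there is no new sftp negotiation per file
while read -r f; do
    rclone rc operations/copyfile \
        srcFs=s3:bucket srcRemote="$f" \
        dstFs=sftp:server dstRemote="$(basename "$f")"
done < new-files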