You are correct, and as you've observed, that is because --files-from is a filter.
Depending on exactly what rclone is doing, it will sometimes just use that list of names to filter directory listings against, and sometimes it will actually look up each file. The second usage could potentially generate error messages, but only for some uses of --files-from, which would make the behaviour inconsistent.
What are you trying to achieve? Maybe there is a better way?
What flags do you use on your rclone copy command? We might be able to put an ERROR message in.
You could run rclone rcd then send individual copy file commands. This will probably be less efficient than using --files-from depending on the backend you are using.
Yes it does. It just won't notice if the source file is missing.
What flags do you use on your rclone copy command? We might be able to put an ERROR message in.
Here's an example of what I currently run; let me know if there's something I should adjust:
rclone --files-from-raw $someFile --fast-list --checksum --log-file=$logFilePath --log-level INFO copy $backup['source'] $backup['destination']
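Since --files-from won't flag missing source files on its own, one workaround (just a sketch, not a built-in rclone feature) is to pre-check the list against an actual source listing before the copy. In practice you would generate the source listing with `rclone lsf -R --files-only $backup['source']`; the comparison itself is plain shell, shown here with sample data so it's self-contained:

```shell
#!/bin/sh
# Sketch: detect entries in a --files-from list that no longer exist at the
# source. The file names here are illustrative; in a real run you would
# generate actual.txt with:
#   rclone lsf -R --files-only "$source" | sort > actual.txt

# Sample --files-from list (what we asked rclone to copy)
printf 'a.txt\nb.txt\nc.txt\n' | sort > expected.txt
# Sample source listing (b.txt has since disappeared from the source)
printf 'a.txt\nc.txt\n' | sort > actual.txt

# comm -23 prints lines only in expected.txt, i.e. files missing at the source
comm -23 expected.txt actual.txt > missing.txt

if [ -s missing.txt ]; then
    echo "WARNING: files listed but not present at source:"
    cat missing.txt
fi
```

There is a small race window (a file can still vanish between the check and the copy), but for a list of a few thousand files this gives an explicit warning up front at the cost of one extra listing.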
You could run rclone rcd then send individual copy file commands. This will probably be less efficient than using --files-from depending on the backend you are using.
I was thinking about that and I agree with you. Isn't it going to establish a connection for each file it iterates over? Sorry, I'm not sure how it works internally.
The --files-from list: I doubt it will be bigger than 1-2k files, I'd say 5k tops.
When I did a 2kk-file rclone run, I did sometimes get an error message that the file I was trying to copy was no longer there. So I'm pretty much trying to achieve the same result.
Again, no sweat if it's not doable; it's just for my peace of mind.
With s3, using rclone rcd to copy individual files will be quick (possibly one extra transaction, depending), so if you want a guaranteed OK/ERROR for each file you could do that with very little overhead.
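The rcd approach could be sketched roughly as below. This assumes the remote names `source:bucket` and `dest:bucket` as placeholders; `rclone rcd` and `rclone rc operations/copyfile` (with `srcFs`/`srcRemote`/`dstFs`/`dstRemote` parameters) are real rclone commands, but treat the script as an outline rather than a polished tool:

```shell
#!/bin/sh
# Sketch: one long-lived daemon, one rc call per file, so every file gets its
# own OK/ERROR. Remote names and the list file "someFile" are placeholders.

rclone rcd --rc-no-auth &      # start the remote-control daemon once
RCD_PID=$!
sleep 1                        # crude wait for the daemon to come up

while IFS= read -r f; do
    if rclone rc operations/copyfile \
        srcFs="source:bucket" srcRemote="$f" \
        dstFs="dest:bucket"   dstRemote="$f" >/dev/null 2>&1; then
        echo "OK    $f"
    else
        echo "ERROR $f"        # e.g. the file vanished from the source
    fi
done < someFile

kill "$RCD_PID"
```

Because the daemon stays up for the whole loop, connections to the backend are reused rather than re-established per file, which is why the per-file overhead stays small.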