You are correct, and as you've observed, this is because --files-from is a filter.
Sometimes (depending on exactly what rclone is doing) it will just use that list of names to filter directory listings against, and sometimes rclone will actually look up each file individually. Only the second usage could generate error messages for missing files, so an ERROR would appear for some uses of --files-from but not others, which would be inconsistent.
What are you trying to achieve? Maybe there is a better way?
What flags do you use on your rclone copy command? We might be able to put an ERROR message in.
Here's an example of what I currently run; let me know if there's something I should adjust:
rclone --files-from-raw $someFile --fast-list --checksum --log-file=$logFilePath --log-level INFO copy $backup['source'] $backup['destination']
You could run rclone rcd and then send individual copy-file commands. This will probably be less efficient than using --files-from, depending on the backend you are using.
I was thinking about that and I agree with you. Won't it establish a new connection for each file, though? Sorry, I'm not sure how it works internally.
With s3, using rclone rcd to copy individual files will be quick (possibly one extra transaction per file, depending), so if you want a guaranteed OK/ERROR for each file you can do that with very little overhead.
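A minimal sketch of that approach, assuming a daemon already started with "rclone rcd --rc-no-auth", and with "s3:bucket" / "dest:backup" as placeholder remotes (the DRY_RUN guard just prints each command so you can inspect it before running for real):

```shell
#!/bin/sh
# Per-file copies via the rclone remote-control API.
# Assumes "rclone rcd --rc-no-auth" is running in another terminal.
# SRC, DST, and $someFile are placeholders - adjust to your setup.

SRC="s3:bucket"                 # hypothetical source remote
DST="dest:backup"               # hypothetical destination remote
LIST="${someFile:-/dev/null}"   # same list you feed --files-from-raw
DRY_RUN="${DRY_RUN:-1}"         # 1 = print the command; 0 = run it

copy_one() {
  f=$1
  # operations/copyfile copies a single file; a non-zero exit from
  # "rclone rc" gives a guaranteed per-file failure signal.
  set -- rclone rc operations/copyfile \
    srcFs="$SRC" srcRemote="$f" \
    dstFs="$DST" dstRemote="$f"
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

while IFS= read -r f; do
  if copy_one "$f"; then
    echo "OK    $f"
  else
    echo "ERROR $f" >&2
  fi
done < "$LIST"
```

Because each file is its own rc call, you get an unambiguous OK or ERROR line per file, at the cost of one API round trip each.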