/ is my “from” and gcs:bucketname points to a Google Cloud Storage bucket. I want the folder structure to be copied exactly as-is from / to the bucket but ONLY with the paths from the --files-from file.
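For reference, this is roughly the command I'm running (the file-list name is a placeholder):

```
rclone copy / gcs:bucketname --files-from=filelist.txt -vv
```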
When using --files-from I would expect rclone to go path by path and transfer only those. Instead, it appears to scan the “from” path and compare it against the paths given by --files-from, as is evident from the high-verbosity logs.
Is there a way I can stop it from scanning and just accept the paths as-is? The way it’s acting right now is super inefficient: I explicitly give it the paths to transfer, but it still scans every directory and file path under the “from” path. That can’t be right, right?
What am I doing wrong? Do I need to --exclude=* before my --files-from or something?
The behaviour doesn’t really change whether the paths in the list are relative or absolute.
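For example, the file list contains plain paths relative to the “from” root, one per line, something like this (entries are hypothetical, not my real paths):

```
documents/report.pdf
photos/2023/img_0001.jpg
```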
I tried without --fast-list and it still scans the entire filesystem, including every single one of my network mounts.
With verbosity 2 (-vv), I know this is the case because I see “DEBUG : Excluded from sync (and deletion)” spammed over and over for files that are not in my file list. If it at least stopped at the / root it would be tolerable, but it does a deep recursive scan into every subfolder.
Does --files-from work as expected for GCS for anybody?