Hello, I know I'm late to the party here, but has anyone figured out a better/easier way to do this? I have 58 million files and can't execute rclone once per file, and an include file would be enormous. Is it possible to just set the object key equal to the filename?
Forgot to add a log; here is what it looks like with lsl -vv. I'm just trying to get rid of the layer_test/layer2/ part. The file lives there on local, and the filter finds it fine, but I can't have all of that showing up in S3.
C:\R_Clone>rclone --no-check-certificate lsl leyb_large:inear-root/ -vv
2019/12/13 10:38:20 DEBUG : rclone: Version "v1.50.1-047-gec09127b-fix-netapp-creation-time-beta" starting with parameters ["rclone" "--no-check-certificate" "lsl" "leyb_large:inear-root/" "-vv"]
2019/12/13 10:38:20 DEBUG : Using config file from "C:\Users\pema9013\.config\rclone\rclone.conf"
19135956 2019-12-09 09:09:21.000000000 **layer_test/layer2/**21294_1000-1-1-1-8-1-1_1575900260.613.ts
2019/12/13 10:38:21 DEBUG : 5 go routines active
2019/12/13 10:38:21 DEBUG : rclone: Version "v1.50.1-047-gec09127b-fix-netapp-creation-time-beta" finishing with parameters ["rclone" "--no-check-certificate" "lsl" "leyb_large:inear-root/" "-vv"]
Thanks, Nick got me over a few hurdles so far, and I'm almost at the finish line here. Much appreciated.
Thank you, sir, you have been a great help. We figured out a way to loop through the folders and issue an rclone copy for each; we don't have to cd into every directory to do this, which is great. I still owe you a pizza! Happy holidays :o)
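For anyone landing here later, the per-directory loop described above can be sketched roughly like this. It relies on the fact that `rclone copy SRC DEST` copies the *contents* of SRC, so copying each directory individually (with `--max-depth 1` to avoid re-copying nested subdirectories) puts every file at the bucket root with the object key equal to the filename. The remote name `leyb_large:inear-root` is taken from the log above; the local tree here is a throwaway stand-in created just for the demo, and `DRYRUN=echo` prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical sketch: one "rclone copy" per subdirectory, so each
# file lands at the bucket root (object key == filename) instead of
# under layer_test/layer2/.

# Stand-in local tree, created only for this demo.
demo=$(mktemp -d)
mkdir -p "$demo/layer_test/layer2"
touch "$demo/layer_test/layer2/21294_1000-1-1-1-8-1-1_1575900260.613.ts"

# DRYRUN=echo prints each command; set DRYRUN="" to actually copy.
DRYRUN=echo
cmds=$(find "$demo" -type d | while read -r dir; do
    # --max-depth 1 copies only the files directly inside $dir,
    # so nested directories are not duplicated by the outer loop.
    $DRYRUN rclone copy "$dir" "leyb_large:inear-root" \
        --no-check-certificate --max-depth 1
done)
echo "$cmds"

rm -rf "$demo"
```

No `cd` into each directory is needed, since `find` hands the loop an absolute path for every folder.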