Hey everyone! I’m running into a problem set that I just can’t get around. Truth in advertising, I’m brand new to rclone, so it could just be a new user mistake.
I’m trying to use rclone with S3. I have access to a bucket where content is just synced from another location. To be brief, here is how the bucket is laid out (in essence):
bucket/folder1/./././
bucket/folder2/./././
Rinse and repeat that type of folder for ~40k folders. I pulled a text list of the full S3 URIs for the ~2,500 files I care about.
Here’s where my confusion comes in. I built a Python script that iterates over the text file and downloads everything successfully, but only one file at a time. Is there a way with rclone to parallelize that?
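For context, the single-threaded loop I describe could be threaded out in plain Python with `concurrent.futures`. This is only a sketch: `download_one`, the bucket name, and the URI list are placeholders for whatever my real script does (e.g. a boto3 `download_file` call).

```python
# Sketch: parallelizing per-file downloads with a thread pool.
# download_one() is a placeholder -- in the real script it would
# perform the actual S3 download (e.g. via boto3).
from concurrent.futures import ThreadPoolExecutor, as_completed

def download_one(uri):
    # Placeholder for the real download, e.g.:
    # s3.download_file(bucket, key, local_path)
    return uri  # pretend the download succeeded and report the URI

def download_all(uris, workers=16):
    """Download every URI in the list using `workers` threads."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(download_one, u) for u in uris]
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

# Hypothetical URI list standing in for the real 2,500-line text file.
uris = [f"s3://bucket/folder{i}/file{i}.bin" for i in range(8)]
done = download_all(uris, workers=4)
print(len(done))  # 8
```

That said, rclone can do this natively: point it at the list with `--files-from mylist.txt` and raise `--transfers` to control how many files it copies in parallel (adding `--no-traverse` avoids listing the whole bucket first).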
I hope this makes sense. I’ll gladly clarify! Thank you!
I just wanted to say thank you! I guess my patience was the failure here... I did a dry run and it took 7 min 19 s just to start. I guess sifting through a 500TB bucket takes a hot minute.