I've been using rclone to copy files from various users' clouds (Dropbox, Box, Drive, ...) to my own AWS S3 bucket. So far I've tried doing it in two different ways:
Using a "master" serverless function (AWS Lambda) that uses a queue to spawn "worker" lambda functions; each worker function then uses the copyto command to asynchronously copy over (~50) files.
The problem here is that after about 20 worker functions have been spawned (~1000 files), the subsequent workers hang and eventually time out. Is Dropbox throttling the transfers, or is it something else I'm not thinking about? I don't get any errors, just a timeout once the Lambda time limit is reached...
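For reference, each worker does roughly the following (simplified to run the copies one at a time; the remote names, bucket, paths, and queue message format here are placeholders for what I actually use):

```python
import json
import subprocess

# rclone binary and config are bundled with the function (paths are placeholders)
RCLONE = "/opt/bin/rclone"
CONFIG = "/opt/rclone.conf"

def handler(event, context):
    # The master function puts one message per batch (~50 files) on the queue;
    # each message body is a JSON list of {"src": ..., "dst": ...} pairs.
    for record in event["Records"]:
        files = json.loads(record["body"])
        for f in files:
            # Copy a single file from the user's Dropbox to our S3 bucket.
            # "dropbox-remote" and "s3-remote" are the remotes defined in rclone.conf.
            subprocess.run(
                [
                    RCLONE, "--config", CONFIG, "copyto",
                    f"dropbox-remote:{f['src']}",
                    f"s3-remote:my-bucket/{f['dst']}",
                ],
                check=True,
            )
```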
I've also tried using the normal rclone copy command (because I assumed rclone takes throttling into account), but it's pretty slow (~50 minutes to transfer 9 GB of data from a UK Dropbox account to our S3 bucket in us-east-1). Is there any way to speed it up besides the --no-traverse --fast-list flags?
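The command I'm running looks roughly like this (remote and bucket names are placeholders):

```
rclone copy dropbox-remote: s3-remote:my-bucket --no-traverse --fast-list
```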
Any advice would be immensely appreciated!