Hi.
I’m uploading a directory tree with about 400,000 small files, roughly 7 GB in total. At first the upload runs without any issues; however, once the uploaded size reaches 1.24 GB, the pace slows down so much that finishing the upload in a reasonable amount of time becomes impossible.
This behavior is consistent. I have tried several times, and the result is the same each time: the slowdown starts at 1.24 GB.
I also tried uploading the directory as part of a parent directory containing some large files. The large files upload with no issues, but as soon as the transfer reaches the problem directory, the slowdown repeats.
I’m running rclone with `--log-level INFO` and watching the log messages in real time. The output doesn’t contain any errors, but I notice a delay of several seconds between the individual file transfer reports.
What could be the reason for this issue? How can I copy the files under these circumstances?
Thanks.
Linux (Arch); rclone 1.39; Yandex Disk
Probably API request throttling. You could try the --tpslimit flag, although that will only help if you’re being throttled to 0. You also might just be unaware of what “reasonable” is: I’d say roughly 50,000–300,000 files per day is the realistic maximum for a cloud storage platform, because providers are trying to protect themselves from excessive users.
If the service provider limits the number of requests per unit of time, what can I do on the rclone side to bring that number down? Can I, for example, somehow package several small files into one request and send them that way?
Well, --tpslimit will literally limit the rate of API requests to a given number per second.
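As a minimal sketch of what that could look like (the local path and remote name here are placeholders, and the exact numbers are just a starting point to experiment with):

```
# Throttle rclone to at most 5 API transactions per second and
# reduce parallelism so each request has room to complete.
# "/data/tree" and "yandex:backup/tree" are placeholder paths.
rclone copy /data/tree yandex:backup/tree \
  --tpslimit 5 \
  --transfers 4 \
  --checkers 4 \
  --log-level INFO
```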
Also, yes. If large files work well and small files work poorly, an image or archive file containing the small files will upload far more easily (though it may be more annoying to use later if you didn’t want an image or archive file in the first place). A rough sketch follows below.
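Something along these lines, again with placeholder paths and remote name:

```
# Pack the small files into a single compressed archive,
# then upload that one large file instead of ~400,000 small ones.
# "/data/tree" and "yandex:backup" are placeholder paths.
tar czf tree.tar.gz -C /data tree
rclone copy tree.tar.gz yandex:backup --log-level INFO
```

The trade-off is that you lose per-file access on the remote side: to read or restore individual files you’ll have to download and unpack the archive.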