Best way to reduce local resource use when copying

What is the problem you are having with rclone?

Hi folks,

No problem, just looking for advice / experience :sunglasses::pray:

When copying from a local disk to S3 in the cloud, what would be the best options to use in order to put the least stress on the local system during the copy?

I am considering --bwlimit or --tpslimit for this, or perhaps just setting --checkers and --transfers very low.
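Something along these lines is what I had in mind, for example (the paths and remote name are just made up, and the numbers are guesses on my part rather than anything tested):

```
rclone copy /mnt/data s3remote:my-bucket/data \
  --transfers 2 \
  --checkers 2 \
  --bwlimit 10M \
  --tpslimit 10
```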

Does anyone have any other suggestions, or experience to share on this topic?

Time is not that important, and the total data is around 20TB spread over several directories. If needed, I can quite easily copy them one by one, either with filters or just sequentially, whichever is best.
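The per-directory approach would look roughly like this (directory and bucket names are made up):

```
# copy the directories one at a time
rclone copy /mnt/data/dir1 s3remote:my-bucket/dir1
rclone copy /mnt/data/dir2 s3remote:my-bucket/dir2

# or the same thing from the top level using a filter
rclone copy /mnt/data s3remote:my-bucket --include "/dir1/**"
```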

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy

Start with the default values and then see which resource usage is too high for your liking. Post some results and somebody might have some good tips.

In my experience the defaults are pretty good unless I care about some specific optimisation.
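As a rough baseline sketch (paths and remote name are placeholders), I would simply run with the defaults, print stats, and watch the machine with top/iotop in another terminal:

```
rclone copy /mnt/data s3remote:my-bucket --progress --stats 30s
```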

Reading your post, it is hard to guess what you want to limit: network bandwidth? IO? CPU? RAM?

Thanks for your advice. :pray:

Ideally all of those, to a reasonable extent :sunglasses:

Have you tested it already? Are you sure you have a problem? What is the resource usage at the moment?

I did run it in our test environment and noticed the load go up quite a bit. In all fairness, though, I was using some more aggressive numbers for --checkers and --transfers instead of the defaults, so I will revert to the default values and retest.
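For reference, and assuming a reasonably current rclone version, the defaults I will be going back to are, as far as I know:

```
--transfers 4   # parallel file transfers (default)
--checkers 8    # parallel checkers (default)
```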

I think the key is to establish a baseline you can compare your eventual tweaks against.

So far it is rather counterproductive when on the one hand you talk about limiting resource usage, and on the other you increase the default values, which increases resource usage :)
