When copying from a local disk to S3 cloud storage, what would be the best option to use to put the least stress on the local system during the copy?
I am considering --bwlimit or --tpslimit for this, or perhaps just setting --checkers and --transfers very low.
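For example, something along these lines (the source path and remote name are just placeholders):

rclone copy /mnt/data remote:bucket --bwlimit 10M --tpslimit 10 --transfers 2 --checkers 4

As I understand it, --bwlimit caps the bandwidth used, --tpslimit caps the number of API transactions per second, and low --transfers/--checkers values keep the number of parallel operations (and so the disk and CPU activity) down.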
Does anyone have any other suggestions, or experience to share on this topic?
Time is not that important. The total data is around 20TB spread out over several directories, which I can quite easily copy one by one with filters, or just sequentially, whichever is best.
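If I go per directory, I imagine each run would look something like one of these (directory names are made up):

rclone copy /mnt/data remote:bucket --include "/dir1/**" --bwlimit 10M --transfers 2 --checkers 4
rclone copy /mnt/data/dir1 remote:bucket/dir1 --bwlimit 10M --transfers 2 --checkers 4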
What is the command you were trying to run (eg rclone copy /tmp remote:tmp)?
I did run it in our test environment and noticed the load came up quite a bit, but in all fairness I was using more aggressive numbers for --checkers and --transfers than the defaults, so I will revert to the default values and retest.
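For the baseline retest I will use something plain like this (the defaults are --transfers 4 and --checkers 8, so I won't set them explicitly; paths are placeholders again):

rclone copy /mnt/data remote:bucket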
I think the key is to establish a baseline you can compare your eventual tweaks against.
So far it is rather counterproductive when on the one hand you talk about limiting the resources used, and on the other you raise the defaults and increase resource usage :)