I’m trying to maximise my upload speed. What I was doing previously is using a teamdrive with multiple users, as each user gets a 750GB/day limit. But I found this messy: having multiple rclone move instances running at the same time, each moving lots of files slowly, was messing up my IO, e.g. 12 concurrent rclone moves with --bwlimit 9M.
What I’m experimenting with now is having one rclone move instance running at a time with a --bwlimit of 80M and a --max-transfer of 750G. When one user hits the 750GB limit, the next user takes over i.e.
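A minimal sketch of that rotation might look like the following. The remote names (user1-drive: etc.) and paths are placeholders, not from this thread, and the script falls back to `echo` as a dry run if rclone isn’t installed:

```shell
#!/bin/sh
# Hypothetical sketch: cycle through user remotes one at a time.
# Falls back to `echo` as a dry run when rclone is not installed.
RCLONE=rclone
command -v rclone >/dev/null 2>&1 || RCLONE=echo
for remote in user1-drive: user2-drive: user3-drive:; do
  # each user uploads until its own 750GB cap, then the next takes over
  $RCLONE move /local/staging "${remote}backup" \
    --bwlimit 80M \
    --max-transfer 750G
done
```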
The problem I’m trying to avoid is the worst-case scenario of 8 x 50GB files each 99% transferred when the 750GB cap kicks in, so I’ve wasted (8 x 50) almost 400GB of transfer. Once --graceful-finish is available this problem will go away
I’ve tried reducing the number of transfers to minimise the potential exposure, but then I can’t get anywhere near 80MB/s. Are there any other settings I should change so that I can get faster per-transfer upload speeds with, say, 4 or even fewer transfers?
Thanks @ncw. I did consider that, but I didn’t think I’d get the speed with one transfer. Still, I haven’t tried it, and even if slower it’ll probably be better than having lots of uploads fail.
@ncw is there a way to cap the number of files uploaded at a time at 1? If there were, then I could guarantee no wastage without having to use --max-transfer
e.g. if I set --bwlimit at 80M then my max daily transfer would be 6750GB. 6750/750 = 9, so as long as I have >9 users rotating, then I wouldn’t hit any caps (unless I was very unlucky and 1 user got all the big files in one day)
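The arithmetic above can be sanity-checked directly (units here are binary, matching rclone’s M/G suffixes):

```shell
#!/bin/sh
# Sanity-check the 80M figure: MB/s x seconds per day, converted to GB.
daily_gb=$((80 * 86400 / 1024))   # 6,912,000 MB / 1024 = 6750 GB/day
users=$((daily_gb / 750))         # 6750 / 750 = 9 users to absorb it
echo "$daily_gb GB/day needs $users users"
```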
One transfer per instance didn’t get me the necessary speed without having to run too many instances, but 2 transfers each on 2 instances running on separate drives (to ensure they don’t try to upload the same file) is working well:
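A sketch of that two-instance setup, assuming each instance reads from its own source drive so they never touch the same file (paths and remote names are hypothetical; `echo` stands in as a dry run where rclone is absent):

```shell
#!/bin/sh
# Hypothetical sketch: 2 instances x 2 transfers, separate source drives.
RCLONE=rclone
command -v rclone >/dev/null 2>&1 || RCLONE=echo
$RCLONE move /mnt/disk1/staging user1-drive:backup --transfers 2 &
$RCLONE move /mnt/disk2/staging user2-drive:backup --transfers 2 &
wait
```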
If you have a script pre-plan the files to upload by breaking them up into groups of slightly less than 750G, then you can feed those in with the --files-from parameter and rotate users.
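One way to sketch that pre-planning step, assuming a "size path" listing as produced by `rclone lsl` (the sample sizes below are stand-ins, and the 740G headroom under the 750G cap is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch: split a "<bytes> <path>" listing into batches
# just under 750GB each, one groupN.txt file per batch for --files-from.
limit=$((740 * 1024 * 1024 * 1024))
# sample listing (stand-in for real `rclone lsl` output)
cat > filelist.txt <<'EOF'
400000000000 movies/a.mkv
400000000000 movies/b.mkv
400000000000 movies/c.mkv
EOF
awk -v limit="$limit" '
BEGIN { g = 0 }
{
  size = $1
  sub(/^[0-9]+ +/, "")                       # strip size, keep the path
  if (total + size > limit && total > 0) {   # start a new batch
    g++; total = 0
  }
  total += size
  print $0 > ("group" g ".txt")
}' filelist.txt
```

Each batch file can then be fed to one user in turn, e.g. `rclone move /src user1-drive: --files-from group0.txt`.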
I think I’m OK, as with my 2 transfers at a time from 2 concurrent rclone move jobs my wastage will be no more than 2 files at a time. I’m getting close to the upload speeds I want and my average file size is 4.5GB, although knowing my luck there will be times when it won’t be 4.5GB files but 50GB files!
I’ve found an even nicer solution without using --bwlimit or --max-transfer that allows me to upload at 90MB/s.
24x7 at 90M is just under 8TB/day maximum transfer. 8TB/750GB = 10.7, or 11 rounded up.
Because I’m adding new files at a slightly lower rate than 90MB/s, I’m just running >11 different user accounts in one upload script. No individual run will ever upload more than 750GB in one go, so there’s no need for --max-transfer, nor more than 750GB in one day as long as I have more than 11 users running i.e.
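That final rotation might look something like this sketch: 12 user remotes (one spare over the 11 minimum), with no --bwlimit or --max-transfer at all. Remote names and paths are placeholders, and `echo` serves as a dry run where rclone isn’t installed:

```shell
#!/bin/sh
# Hypothetical sketch: rotate through 12 user remotes in one script.
RCLONE=rclone
command -v rclone >/dev/null 2>&1 || RCLONE=echo   # dry-run fallback
i=1
while [ "$i" -le 12 ]; do
  $RCLONE move /local/staging "user${i}-drive:backup"
  i=$((i + 1))
done
```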