I’ve been using Restic to back up to Wasabi for a few weeks and can max out my gigabit line with no issues (~96 MB/s); however, I seem to be having trouble with rclone…
If I start with a fresh test folder containing a bunch of 600 MB–4.2 GB ISOs and do an rclone copy, the initial stats indicate ~100 MB/s, but after the first few files, everything halts at 100% and the transfer speeds decline. Eventually it uploads at about 14 MB/s, with the files stuck at 100% the whole time. I haven’t yet inspected the traffic at the router level to see whether rclone is simply misreporting or the transfers are actually hung.
So far I’ve played with the `--s3-chunk-size`, `--s3-upload-concurrency`, `--transfers` and `--timeout` options, but I’m not getting anywhere close to my line speed.
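For reference, the kind of invocation I’ve been tuning looks like this (the remote name, bucket, paths, and values here are placeholders, not a recommended configuration):

```shell
# Hypothetical example — remote "wasabi:" and all values are placeholders.
rclone copy /data/isos wasabi:my-bucket/isos \
  --s3-chunk-size 64M \
  --s3-upload-concurrency 4 \
  --transfers 8 \
  --timeout 5m \
  -v --stats 10s
```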
Can anyone share their configuration for Wasabi that works for them?
I’m not sure why the uploads should be stuck at 100% - it might be a display problem. In fact, if you are using crypt, I fixed a bug exactly like this in 1.43, so it might be worth giving that a go if you haven’t already.
Those should be the right options for speeding things up - I’d be surprised if they didn’t help.
I just checked: I was using 1.42 and crypt, though I thought I may have tested without crypt too. I also saw the same behavior on a locally run Minio server (it would sit stalled at 100% unless I reduced the transfers/concurrency back to the defaults). Uploading to B2 seems fine by comparison.
I will try 1.43 and report back, and will gather some better info if it’s still playing up.
I’m pretty sure Wasabi isn’t throttling, as I was able to max out my line with Restic both before and after using rclone.
@ncw So I guess I never tried without crypt. I get 100 MB/s without crypt on 1.42, and with 1.43 I get the same speed with or without crypt. So 1.43 seems to have solved whatever issue I was having.
The transfers still linger at 100%, but I’m getting fast speeds now.
It seems like maybe the % calculation counts in-flight chunks as complete, so if, for example, I have an 800 MB file and allow 8 concurrent uploads of 100 MB chunks, it shows 100% complete immediately, even though everything hasn’t actually transferred yet. At least that’s what it feels like.
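To make the suspicion concrete, here is a tiny sketch of the arithmetic, assuming (unverified) that a chunk is counted as done when it’s handed to an upload worker rather than when the server acknowledges it:

```python
# Assumption: progress counts dispatched chunks, not acknowledged ones.
file_size_mb = 800
chunk_size_mb = 100
concurrency = 8

chunks = file_size_mb // chunk_size_mb        # 8 chunks
dispatched = min(chunks, concurrency)         # all 8 dispatched at once
reported_percent = 100 * dispatched * chunk_size_mb / file_size_mb
print(reported_percent)  # 100.0, even though the data is still in flight
```

Under that accounting, any file whose chunk count fits within the concurrency limit would jump straight to 100% and sit there until the uploads actually finish.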
Out of curiosity, what is the interplay between `--s3-upload-concurrency` and `--transfers`? Does `--transfers 8` act as a global limit on upload threads regardless of what you set for concurrency, or is it a limit on the number of files? I guess I’m trying to figure out whether `--s3-upload-concurrency 8 --transfers 16` could result in 128 active upload threads or just 16.
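The two readings in the question work out like this (the numbers are the hypothetical ones from the question, not a statement of rclone’s documented behaviour):

```python
# Hypothetical flags from the question, illustrating both interpretations.
transfers = 16            # --transfers: concurrent file transfers
upload_concurrency = 8    # --s3-upload-concurrency: chunk uploads per file

# Reading 1: concurrency applies per file, so streams multiply.
per_file_reading = transfers * upload_concurrency
# Reading 2: --transfers is a global cap on upload streams.
global_cap_reading = transfers

print(per_file_reading, global_cap_reading)  # 128 16
```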