A good benchmark is often to see what limits there are on the number of transfers that can be initiated in a short time-frame.
i.e. make a few hundred copies of a tiny text-file, set transfers to 32 or 64, and see how it copes. How long does it take to complete? Since the file-size is trivial, this really only measures the API and hard backend limits (which is where performance is often capped on services that don't charge for egress or per-operation).
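If you need a quick way to generate those test files on Windows, something like this in PowerShell should do (C:\testfiles is just an example path - adjust the count and location to taste):

1..500 | ForEach-Object { Set-Content -Path "C:\testfiles\file$($_).txt" -Value "test" }  # creates 500 tiny text files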
I don't think the bandwidth will matter much in such a test.
Latency to the datacenter would matter more - if you knew where it was - but I think you will quite quickly get a feel for how many connections it will allow you regardless. We don't need a scientifically accurate test here - just a general idea.
(although a full compilation comparing this across all the major providers would genuinely be really useful data)
For example, Gdrive will allow 2-3 new transfers a second, but has that sweet affordable flat rate for unlimited storage with no usage charges.
Premium pay-per-use services like Backblaze, Gcloud and the like seem for the most part to be unrestricted on this front and can easily deal with 32+ (at which point things like latency really start to factor in as the limiting factor).
The question is, since this is not a pay-per-use service - where does it fall between these extremes?
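Incidentally, if you'd rather verify a suspected limit than just hunt for the ceiling, rclone can throttle itself with --tpslimit. Strictly speaking that caps HTTP transactions per second rather than new transfers per second, but it's close enough to bracket the number, e.g.:

rclone copy C:\testfiles filebase:testfiles --transfers 64 --tpslimit 10 -P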
Oh... but you did of course use --transfers 64 (or some other high number) right?
Because otherwise rclone's default is only 4 concurrent transfers...
Usually the number of concurrent transfers is not restricted (or at least the cap is so high it does not matter in practice). Even Gdrive has no issues with this. The restriction is usually on the number of transfers you can start per second.
The reason this is an important metric to know is that it basically determines how much performance you can get when files are relatively small. You may end up hitting the "new transfers per second" limit long before you fully use your bandwidth, which can be frustrating.
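For example: if a service only lets you start 3 new transfers a second and your files average 1MB, you top out at roughly 3MB/s no matter how fast your connection is (the 1MB is just an illustrative number).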
PS: On a related topic... NCW has implemented a new beta for a request I made quite a while back.
Most of you probably know you can use --order-by to sort the transfer order now (by size,asc for example). The new thing is that NCW has introduced a mixed mode that can do a mix of the largest files AND the smallest files - thus maximizing both bandwidth and connection-rate for the full transfer (in scenarios where the transfer contains a mix of large and small files, obviously). I am currently doing some testing on it, but I expect it will make it into public beta soon.
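The exact syntax may still change before it goes public, but in the build I am testing it looks something like this (the paths are just examples):

rclone copy C:\stuff remote:stuff --order-by size,mixed --transfers 16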
But to get a useful number out of it - do this:
transfer 1000 files (as small as possible) with --transfers 64 or higher.
Note the total time it takes to finish.
The new-transfers limit will then be approximately 1000 divided by the number of seconds to complete. This will be a lot more accurate than trying to eyeball it from the output of -P.
rclone copy C:\testfiles filebase:testfiles --transfers 64 -P
(use a new empty location to copy to of course)
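For example: if the 1000 files take 400 seconds to finish, the limit is roughly 1000 / 400 = 2.5 new transfers per second (the 400 is just an illustrative number).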
The GOAWAY error was due to sending 1000-5000 upload requests.
Wouldn't it be scary if Wasabi was using Sia storage? Just saying, because parts of the UI look the same on Filebase and Wasabi - like where the API keys are.