The basic problem here is that a lot of cloud services have limitations on how fast you can send API commands, or other limits on how many file transfers you can start per second. For example, Google Drive has a limit of about 2 transfers/sec. These limitations vary arbitrarily from service to service. I will use Gdrive here as my main example because that's what I know the most details about.
So if transferring 70K tiny files on Gdrive, yes - that will be very slow. Despite the total data not being much, you will very quickly run into the file-transfers-per-second limit. Large files, on the other hand, will likely max out your bandwidth. If you archived those files before transfer they would go massively faster (hopefully in the future we will get a backend that can perform this transparently for the sake of performance on services like this).
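As a sketch of the archiving idea (the folder name, archive name, and the "gdrive:backup" remote path are all placeholders - adjust to your setup), bundling the tiny files into one archive turns thousands of per-file API calls into a single transfer:

```shell
# Example setup: a folder of tiny files (stand-in for the real data).
mkdir -p photos
printf 'demo' > photos/example.txt

# Bundle the small files into a single archive locally first.
tar -czf photos.tar.gz ./photos/

# Then upload the one archive instead of thousands of tiny files
# ("gdrive:backup" is a placeholder remote path):
# rclone copy photos.tar.gz gdrive:backup/
```

The trade-off is that you lose per-file access on the remote - you'd have to download and unpack the archive to get a single file back out.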
On some providers with more lenient limits (like Wasabi S3 or Google Cloud) you can use a lot more transfers (than the default of 4) to remedy the problem significantly, but that won't help if you run into the rate limiters.
So TLDR: There is very little rclone can do to fix this. These are limits set by the providers. If there were no limits, rclone could run as fast as you wanted.
As to the primary question of "why is Pcloud so much faster" - I don't know, because I don't know how Pcloud operates internally. It may be giving the impression of transferring much quicker than it actually is due to some sort of local caching (and rclone could do that too if you wanted), but unless it is using some sort of exclusive API with different limits it should run into the exact same limitations no matter the client used.
When we are talking about software I am unfamiliar with I can only really speculate about why you got your results. I'd need a lot more specific data about the exact experiment setup to make a more educated guess.
Hypothesis: Pcloud may be one of those services that can handle way more transfers than the usual 4. If so, and if Pcloud uses many more by default - this would explain it. I have no idea what limitations Pcloud has (go google it a bit). If so, you may get rclone to perform equally by using a much higher setting like --transfers 48.
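For example (the remote name "pcloud:" and the paths here are hypothetical - substitute your own):

```shell
# The default parallelism is effectively:
#   rclone copy ./myfiles pcloud:backup --transfers 4
# Try a much higher value and compare the total time:
rclone copy ./myfiles pcloud:backup --transfers 48
```

Run the same copy with a few different --transfers values and time them - that will tell you more than any guess of mine.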
Oh, there are ways to figure out why... try more transfers as I said (I can't tell you the optimal number - you'd have to google for info). What I can say is that if one client can perform like this, rclone should be able to also, but it might require some adjustment to parameters.
Try using the -P flag on your rclone command when you test.
It should give you some visual indication of how fast things are transferring, and whether most of the transfers just "hang and wait" when you use a really large --transfers or if they all are being processed concurrently.
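Something like this (the remote and paths are just examples; -P is rclone's real-time progress display):

```shell
# Watch per-transfer speeds and running totals live while testing:
rclone copy ./testfiles gdrive:testdir --transfers 16 -P
# If most of the 16 transfers sit idle while only a few move,
# you are probably hitting the provider's rate limiter rather
# than benefiting from the extra parallelism.
```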
BTW - using the rclone webGUI is functionally identical for the most part to using the command line. It's transferring via mount that has some significant limitations you should be aware of - making the mount a bad place to test. It effectively has 2 extra layers in between you and the cloud that are needed for OS compatibility.
NP. Do note that too many transfers may actually be counter-productive. There will always be SOME limit to what you can do, and it is usually better to stay right under that limit than to try to go way above it. More transfers isn't always going to be better. You may need to fine-tune that number, or find more info on the net about the exact limits you are working with.
Tip for larger files: To get much better bandwidth utilization on large files (for upload only), see if your backend supports a chunk-size flag. These are often small by default to keep memory footprint low, but small chunks significantly hamper throughput due to how TCP ramping works. I recommend using up to 64M if you have the memory for it - but BE AWARE that this much memory can be used for EACH active transfer, so if you used 100 transfers with 64M (6.4 GB) you'd very likely run out of RAM and make rclone crash. Set it as appropriate for your workloads.
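The memory math is just transfers × chunk size, worst case. A quick sanity check before raising either number (a sketch - any shell can do this arithmetic):

```shell
# Worst-case upload buffer memory = transfers * chunk size.
transfers=100
chunk_mb=64
echo "worst case: $((transfers * chunk_mb)) MB"
```

At the default --transfers 4 with 64M chunks that's only about 256 MB, which is usually fine - it's the combination of big chunks AND lots of transfers that bites you.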