Is there any way to tell rclone to upload random garbage data from my RAM to my remote cloud storage?

Windows 10. I've used rclone v1.57 a lot, but swapping to rclone v1.59 didn't solve this issue (and I didn't expect it to).

This sounds malicious, but here's the situation: I was getting 80-90 megabytes per second upload/download to Google Drive, and now I am getting 28-35 megabytes per second.

The most likely culprit is that all my test data is on a fragmented hard drive.

So I need a way to test rclone's speed that doesn't involve my hard drive; it's the only way to rule hard drive slowness in or out as the cause here. The only other causes I can think of are router-related (and I barely understand routers at all).

I guess I could set up a ramdisk, use my RAM as a virtual hard drive, and copy some files there, but I don't know how to do that. And I think it might be universally useful if rclone had some sort of automated test routine, where it would spend 1-5 minutes just sending garbage data to a cloud storage platform and then delete the file afterwards, so that people could verify that they're getting a good speed.
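
Something like this little Python script is roughly what I have in mind (just a sketch, untested, and the remote name gdrive: below is only a placeholder): it generates random bytes in RAM and pipes them straight into rclone rcat, so the hard drive never gets touched.

```python
# make_garbage.py - stream random bytes from RAM to stdout, so nothing touches the disk.
# This is only a sketch of the idea, not an rclone feature.
import os
import sys

CHUNK = 4 * 1024 * 1024              # write 4 MiB of fresh random data at a time
TOTAL = 4 * 1024 * 1024 * 1024       # 4 GiB in total; adjust to taste

out = sys.stdout.buffer
sent = 0
while sent < TOTAL:
    out.write(os.urandom(CHUNK))     # random bytes from the OS, generated in memory
    sent += CHUNK
```

Then I could run something like `python make_garbage.py | rclone rcat gdrive:speedtest.bin -P` and delete the file afterwards with `rclone deletefile gdrive:speedtest.bin`. I'm not sure whether a streamed rcat upload behaves exactly like a normal copy, though, so the numbers might not be directly comparable.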

A built-in test routine would also let people test flags like --drive-chunk-size 128M in the case of, say, Google Drive. I always use Google Drive with --drive-chunk-size 128M, but I have no idea if that's still optimal, nor do I have the skills to test other chunk sizes (I know Google has very high bandwidth but sometimes throttles API calls).
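
For what it's worth, the kind of comparison I imagine is something like this rough sketch (again untested; gdrive: and testfile.bin are placeholders): upload the same file a few times with different chunk sizes and time each run.

```python
# chunk_sweep.py - time the same upload with a few --drive-chunk-size values.
# "gdrive:" and testfile.bin are placeholders; adjust them to your setup.
import subprocess
import time

LOCAL_FILE = "testfile.bin"             # a local test file made beforehand, e.g. 1-2 GiB
REMOTE_DIR = "gdrive:chunk-size-test"   # scratch folder on the remote

for chunk in ["64M", "128M", "256M", "512M"]:
    start = time.time()
    subprocess.run(
        ["rclone", "copyto", LOCAL_FILE, f"{REMOTE_DIR}/test-{chunk}.bin",
         "--drive-chunk-size", chunk],
        check=True,
    )
    print(f"--drive-chunk-size {chunk}: {time.time() - start:.1f} s")

# remove the scratch folder and everything in it afterwards
subprocess.run(["rclone", "purge", REMOTE_DIR], check=True)
```

The timings would only be rough, since Google's throttling varies from run to run, but it would at least show whether 128M is still in the right ballpark.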

Sorry if this is impossible, out of scope, or irrelevant; if so, I'll go try to figure out how to test these things myself.

There is an old feature request for a speedtest function, which I think is exactly what you would want:

Yeah, although it doesn't have to be a robust speedtest feature that tons of people use; I'd take anything that could be bent into a speedtest :slight_smile: Really, I only need to run it once ever; after that I'll know whether it's my HDD or my router. A bunch of extra steps and whatnot wouldn't bother me. Although I do worry that even if I did my proposed ramdisk test, I'd never be 100% sure I'd tested the speed correctly (doing it myself without an official feature), so I'd never be 100% sure whether my router or my HDD is to blame.

I understand. Maybe you could try --check-first and/or a large --buffer-size; tweaking --checkers and --transfers will also be relevant for controlling I/O...
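
For example, a rough wrapper like this shows how those flags might be combined (the source path, remote name, and values are just placeholders to adjust):

```python
# flag_test.py - one rclone copy run with the suggested flags (all values are examples).
import subprocess

subprocess.run(
    [
        "rclone", "copy", "D:/testdata", "gdrive:flag-test",  # placeholder source and remote
        "--check-first",            # finish all checks before any transfers start
        "--buffer-size", "1G",      # larger in-memory read-ahead buffer per transfer
        "--checkers", "16",         # more parallel checkers
        "--transfers", "8",         # more parallel uploads
        "--progress",               # show live transfer stats
    ],
    check=True,
)
```

Note that --buffer-size is held in RAM per transfer, so keep an eye on memory use if you raise --transfers at the same time.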

--check-first doesn't send data, just API spam.

But yeah!!!! I could set --buffer-size to something nutty like 16 gigabytes, and then the data would be sent from RAM, right? That's actually kind of perfect!! I think, maybe (I can't test it right now because I'm in the middle of the command that's going slowly and don't want to interrupt it, since 33 megabytes a second is still quite fast, which is why I'm considering the HDD as the potential bottleneck).

edit: Although, does --buffer-size just tell rclone how much buffer it CAN use, or how much it MUST use?

One of those would do exactly what I need, and the other would do nothing for me.

--buffer-size 8000M had no effect on performance.

The RAM usage did spike massively as predicted. That means my HDD is, in theory, faster than my internet connection, so either my ISP has become congested or my router has an issue.

Right? Or was that an inconclusive way of determining that?

edit: wait, no, of course the RAM was used, the flag told it to. I guess this test proves nothing at all; it could still be either the HDD or the internet connection :frowning:
