Rclone Mount slow read speed

how are you using the mount?
for editing Word documents, or what?

if you are just reading small parts of a large file, then speed tests that download an entire large file do not seem too useful.

That is exactly what it does:
It reads small parts of big files.
Let's say it's like the following: for a 4-hour movie file, my tool says "read minute 10 to 11, and 145 to 146".

That is what it does. I guess one could compare it to your Jellyfin mount, except that it doesn't have to read the entire media file in one go.

you seem to be comparing the downloading of large files to small random-access reads of large files mounted over the internet
so what is the problem then?

OK, let's say I am comparing potatoes with apples.

Is 6-8 MB/s, for my read pattern of small portions of big files, a normal speed for rclone mount?
If no, my problem is how to speed that up in rclone.

If yes, then rclone might be the wrong tool?

i guess i am not understanding.

if that is for downloading entire files, then that is very slow.

if that is for random access reads, how are you measuring that?

This is the speed I get reading parts of big files over rclone mount.

The 6-8 MB/s is the speed at which the tool reads over the rclone mount (and therefore I assume it is how fast rclone mount downloads the data to be read).

I can show you the screen. Rclone mount:
[screenshot]
It reads 1.02 GB of my file at 9 MB/s.

Local:
[screenshot]
It reads only 174 MB, but that is because the file it reads locally is smaller.

It is random access reads on big files.
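
One way to sanity-check a single ranged read, independent of both the tool and the mount, is rclone's own cat command, which can read a byte range straight from the remote (the remote name, path, and offsets below are placeholders):

```
rclone cat remote:path/bigfile.bin --offset 1073741824 --count 4194304 --discard
```

Timing a few of these at different offsets shows the per-range cost without the VFS layer in between.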

so you are basing all this on that tool that someone wrote for you?

what is actually reading the files, the tool or some application on your computer?

The tool is reading parts of the files at random.

If I download the same files that are read, I get 70 MB/s locally and 9 MB/s on the rclone mount.

I know that cloud storage is never as fast as local storage. But I know it reads much more slowly on the mount.
My question is whether this can be optimized in rclone or not.

I can only say that over the mount it's much slower, although the download capacity (speed-wise) is there. I can only prove that by making a normal download with rclone copy.
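
For example, something like this (the remote name and paths are placeholders; -P shows the live transfer speed):

```
rclone copy remote:path/bigfile.bin C:\temp -P
```

That measures sequential download speed, which is where the 70 MB/s figure comes from.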

I don't know what normal random-read speeds over an rclone mount are. If you say 9 MB/s is normal, then it's fine. If not, then there is a problem. That is all I want to know.

By the way: the tool is an application on my computer. It is an exe that was compiled for exactly this use: reading parts of files. I can't say if the speeds shown are real; I can only say local is almost 10 times faster than the mount.

where can i download this tool, as i would like to test with it?

you could try raidrive and compare.
if you do so, make sure to post here, as i would really like to know.

As I told you, it was coded for me and I paid money for it, so I can't share it (this is part of the contract we agreed on).

I already tried Stablebit, and went to rclone since Stablebit was even slower.

I really appreciate your help here - Thank you.

well, try raidrive, and then we will know for sure what you can expect.
post the results.

I already have Stablebit, NetDrive and rclone running. The reason why I use rclone was the encryption. Can RaiDrive encrypt the data?

If not, I won't even try. It would take me around 1 week to re-upload the data to test it, due to the 750 GB limit.

There's really no magic to make a random read faster, as it has to open the file, read the file, and close the file. Doing that many times per second is going to be slow because of the latency of going from local to the cloud and back. You can't do anything to speed up latency.

Rclone gets around some of this by being efficient at those operations.

I don't think you'll find a tool to make this go away.
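
As a rough illustration (the numbers are assumptions, not measurements): if each ranged request costs on the order of 150 ms of round-trip latency before data flows, and the line itself can do 70 MB/s, then:

```
1 MB per request: 0.150 s + 1/70 s ≈ 0.164 s  →  ~6 MB/s effective
4 MB per request: 0.150 s + 4/70 s ≈ 0.207 s  →  ~19 MB/s effective
```

With small chunks, the fixed per-request latency dominates, and the effective throughput lands far below the raw line speed, which is consistent with the gap reported here.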

Thank you Animosity022

So it is limited by rclone / by the cloud itself. I thought I could get faster speeds for those operations.
I was not sure whether this was expected, since it seemed rather slow, but it may well be normal.

So I can stop trying to optimize what isn't possible.

raidrive installs a file system filter driver on the windows computer.
perhaps it might offer better performance for your use case.

Nicholas, perhaps you could tell us what you are optimizing for exactly. You already said you're dealing with large movie files, but is this for playback or something else? I just don't see the point in trying to troubleshoot something that doesn't appear to be a problem.

The movie part was just a way to explain it. It isn't movie files that are being read. It is simply a search for a specific data string that happens within those files. The parts read are totally random. I simply used the minutes as a way to explain it.

What I want to achieve is that the specific parts of the files that are requested and read are downloaded faster.
The closest case I have found so far is a kind of hard disk mining called Burst, but it is also not exactly that.

How big are the chunks you are downloading? It takes any transfer a few MB to get up to speed, so if you are just transferring relatively small chunks, then it will run slower, I think.

Yeah, the chunks are only a few MB. So maybe it's better to set the chunks bigger then?
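
If you want to experiment, the relevant knobs on the rclone side are the VFS chunked-read flags, e.g. (the remote name and drive letter are placeholders; the values are starting points to test, not recommendations):

```
rclone mount remote: X: ^
  --vfs-read-chunk-size 32M ^
  --vfs-read-chunk-size-limit 512M ^
  --buffer-size 32M
```

--vfs-read-chunk-size sets the size of the first ranged request per open file, doubling after each chunk up to --vfs-read-chunk-size-limit; --buffer-size is the in-memory read-ahead. Note that none of these remove the per-request latency, they only change how much data each request fetches.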