Rclone Mount slow read speed

i guess i am not understanding.

if that is for downloading entire files, then that is very slow.

if that is for random access reads, how are you measuring that?

This is the speed I get reading parts of big files over rclone mount.

The 6-8 MB/s is the speed at which the tool reads from the rclone mount (and therefore I assume it is how fast rclone mount downloads the data to be read).

I can show you the screenshots.

Rclone mount: [screenshot]
It reads 1.02 GB of my file at 9 MB/s.

Local: [screenshot]
It reads only 174 MB, but that is only because the file it reads locally is much smaller.

It is random access reads on big files.

so you are basing all this on that tool that someone wrote for you?

what is actually reading the files, the tool or some application on your computer?

The tool is reading parts of the files at random.

If I download the same files that are being read, I get 70 MB/s; reading them over the rclone mount I get 9 MB/s.

I know that cloud storage is never as fast as local storage, but reads over the mount are much slower still.
My question is whether this can be optimized in rclone or not.

I can only say that over the mount it is much slower, although the download capacity (speed-wise) is there. I can only prove that by making a normal download with rclone copy.
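For example, something like this (the remote name and file path here are just placeholders for my setup):

rclone copy remote:path/bigfile.bin C:\test -P

The -P flag shows the transfer speed while it runs.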

I don't know what normal random read speeds over rclone mount are - if you say 9 MB/s is normal, then it's fine. If not, then there is a problem. That is all I want to know.

Btw: The tool is an application on my computer - an exe that was compiled for exactly this use: reading parts of files. I can't say if the speeds shown are real; I can only say local is almost 10 times faster than the mount.

where can i download this tool, as i would like to test with it?

you could try raidrive and compare.
if you do so, make sure to post here, as i would really like to know.

As I told you - it was coded for me and I paid money for it, so I can't share it (this is part of the contract we agreed on).

I already tried with stablebit, and went to rclone since stablebit was even slower.

I really appreciate your help here - Thank you.

well, try raidrive, and then we will know for sure what you can expect.
post the results.

I already have stablebit, netdrive and rclone running. The reason I use rclone is its encryption. Can raidrive encrypt the data?

If not - I won't even try. It would take me around a week to reupload the data just to test it, due to the 750 GB daily upload limit.

There's really no magic to make a random read faster, as it has to open the file, read the file, and close the file. Doing that many times per second is going to be slow due to the latency of going from local to the cloud and back. You can't do anything to speed up latency.
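As a rough illustration (made-up numbers, not measurements): if each random read costs about half a second in round trips before any data flows, then even with 70 MB/s of raw bandwidth a 4 MB read takes roughly 0.5 + 4/70 ≈ 0.56 s, which works out to about 7 MB/s effective - the same ballpark as what you are seeing.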

Rclone gets around some of this by being efficient in those things.

I don't think you'll find a tool to make this go away.

Thank you Animosity022

So it is limited by Rclone / by the cloud itself. I thought I could get faster speeds for those operations.
I was not sure whether it was possible, since it seemed rather slow - but it's entirely possible that this is simply normal.

So I can stop trying to optimize what isn't possible.

raidrive installs a file system filter driver on the windows computer.
perhaps it might offer better performance for your use case.

Nicholas, perhaps you could tell us what you are optimizing for exactly. You already said you're dealing with large movie files, but is this for playback or something else? I just don't see the point in trying to troubleshoot something that doesn't appear to be a problem.

The movie part was just to explain it. It isn't movie files that are being read. It is simply a search for a specific data string within those files. The parts read are totally random. I simply used the minutes as a way to explain it.

What I want to achieve is that the specific parts of the files that are requested and read are downloaded faster.
The closest case I have found so far is a kind of hard disk mining called Burst, but it is also not exactly that.

How big are the chunks you are downloading? It takes any transfer a few MB to get up to speed, so if you are just transferring relatively small chunks then it will run slower I think.

Yeah, the chunks are only a few MB. So maybe it's better to make the chunks bigger then?

Yes, bigger chunks will make everything more efficient I think, though don't download data you don't need. It will still take the same time to get that first MB.
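As a rough example with invented numbers (same half-second-per-read assumption as above, 70 MB/s raw bandwidth): a 2 MB read takes about 0.5 + 2/70 ≈ 0.53 s, or roughly 3.8 MB/s effective, while a 16 MB read takes about 0.5 + 16/70 ≈ 0.73 s, or roughly 22 MB/s effective - the fixed per-read cost gets spread over more data.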

I have a similar problem to Nicolas_Girard. I'm trying to optimize the reading speed of files from Google Drive into Foobar 2000.

I have rclone mount to mount the Google Drive, and my own client ID, so that shouldn't be the cause of the slow loading.

I'm reading an around 200 MB FLAC music file. FLAC files often contain a whole album from one artist. Imagine you have a CD stored as one big file with timestamps for every song, and you just seek to the position you want.

At this point, things start to get interesting. When I load the whole album via a CUE file (this file contains the timestamps of every song) and want to play one song, it takes much more time to read (or cache the file onto my PC) than when I want to read the whole FLAC file.

The "flac file" method have stable time around 3 seconds to read and start playing the music, and the "CUE file" method have unstable, from 8 seconds to 1,5 minutes read time.

One solution could be to split the album into individual files, so one file is one song instead of one file being the whole album.

Let me know if you have a better way to solve this without splitting the file, e.g. via config or startup options, or anything else.

What is your rclone version (output from rclone version)

rclone v1.53.3

  • os/arch: windows/amd64
  • go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10 Pro (version 1909)

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount rclone: Y:
(Also tried a few configs like:

rclone mount rclone: Y: --buffer-size 1G --dir-cache-time 96h --log-level INFO --config C:\Users[username].config\rclone\rclone.conf --timeout 1h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2G --vfs-cache-mode writes

(the [username] is redacted for security reasons here on the forum)

Your mount is not well optimized for your use case with that setup. You'd want to not use a buffer size that big, and you really want to look at your use of vfs-cache-mode writes, as yours is not the same use case - you are dealing with opening and closing files and seeking within them rather than running straight through large video files.

Best to remove everything you aren't sure about, use the defaults, and use vfs-cache-mode full - read up on that, as it should easily meet your use case.
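For example, something along these lines (keeping your remote name and drive letter, everything else at the defaults):

rclone mount rclone: Y: --vfs-cache-mode full

You can add things like --dir-cache-time back later once the basic setup does what you want.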


Thanks! That "--vfs-cache-mode full" fixed my problem! It's amazing that you only need one startup setting to run it smoothly - in my use case, of course.
