Mount Google Drive for streaming with RAM-only caching

Hi

I'd like to stream from a mounted Google Drive account, for example for DLNA video streaming.

I have 2-3 GB of RAM available for caching, and I want to use only that, so no disk cache. But the files on Google Drive are sometimes larger than 13 GB, so they don't fit in RAM.

Is it possible to get a mount with fast random access, so it behaves like a local disk? I have a 1 Gbit download connection, and Google's servers are very fast too, so rclone is the bottleneck.

Any tips?

Assuming you are not using the cache backend, rclone doesn't put anything on disk when it's streaming.

You can control the amount of memory used per open file with --buffer-size 128M or something along those lines.
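For instance, a minimal sketch of such a mount (the remote name gdrive: and the mount point are placeholders for your own setup):

```
# Stream-friendly mount: no disk cache, 128 MB of read-ahead RAM per open file
rclone mount gdrive: /mnt/gdrive \
  --buffer-size 128M \
  --vfs-cache-mode off  # the default: nothing is written to disk
```

With --vfs-cache-mode off (the default), reads are streamed straight from the remote, so RAM usage is roughly --buffer-size times the number of open files.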

Note that the buffer is only used when files are read sequentially, though, and when a file is closed its buffer is dropped.

The problem with buffer size is that it's multiplied by the number of open files. The other day I saw that even with --transfers 4 set, about 16 files were open, so 16 × 512 MB of RAM was eaten...

A mount does not have a concept of transfers; it really just has open files. It does take a bit of memory for the directory/file cache, but that's pretty minimal.

Yeah, I agree; you probably just want to use larger buffers in your case.

I mean, theoretically, if you really were dead set on running a cache in RAM, you could always use third-party software to create a RAM disk and put the cache on that (see the sketch below) - but as you say, that wouldn't really be very useful with such a limited size. I find the cache is probably best run on an HDD, where you don't need to worry about write endurance and you can set it to be large, thus speeding up access to frequently accessed files.
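On Linux, a tmpfs mount is one way to sketch that RAM-disk idea. The paths and sizes below are just examples, and this uses the VFS cache (--cache-dir / --vfs-cache-mode) rather than the cache backend discussed above; note that tmpfs contents are lost on reboot:

```
# Create a 2 GB RAM disk with tmpfs (contents vanish on unmount/reboot)
sudo mkdir -p /mnt/rclone-ramcache
sudo mount -t tmpfs -o size=2g tmpfs /mnt/rclone-ramcache

# Point rclone's cache directory at it so cached data lives in RAM,
# capped below the tmpfs size
rclone mount gdrive: /mnt/gdrive \
  --cache-dir /mnt/rclone-ramcache \
  --vfs-cache-mode full \
  --vfs-cache-max-size 1500M
```

As noted above, though, with 13 GB files a 2 GB cache gets recycled constantly, so this is unlikely to help much.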

The reason a cloud drive isn't as snappy and responsive as a local drive, no matter your bandwidth, is due more to inherent limitations in the API and the latency involved. For example, on Gdrive you can transfer a single file about as fast as you want, but 1000 tiny files will never move fast, since you aren't allowed to create or change files more often than about twice per second. Rclone can't fix that; it can only find ways to work around it (such as smartly bundling small files together, which may be something that gets added not too far down the road).

The best thing is probably to understand exactly what the limitations are, which ones you can improve, and which ones you just have to work around.

For example, with large bandwidth on Gdrive you can greatly improve upload speeds by using larger upload chunks than the 8 MB default. Similarly, avoiding small chunks on download (whether via the cache backend or chunked downloading in the VFS) may be a good move for someone like you with a lot of bandwidth.
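A rough sketch of those tweaks on a mount (the values are illustrative and worth benchmarking on your own connection; note that --drive-chunk-size also costs that much RAM per concurrent upload):

```
# --drive-chunk-size: larger upload chunks than the 8 MB default
# --vfs-read-chunk-size: read from the remote in bigger ranges to start with
# --vfs-read-chunk-size-limit: let chunks grow during long sequential reads
rclone mount gdrive: /mnt/gdrive \
  --drive-chunk-size 64M \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 1G \
  --buffer-size 128M
```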
