How to mount to stream and keep downloading?

I believe this question is easy; it’s just my understanding of the mechanics that is poor.

For example, if you mount gdrive and stream a big ISO using the VFS system, is there a way to stream the portion you need on the fly and, while paused, keep downloading the rest of the ISO?

Play the ISO: it requests portion X, gets 100 MB plus the next 100 MB, and if you pause the movie it keeps downloading the next 100 MB and the next… BUT if you skip forward, will it request portion Y and keep going from there? Y+100, Y+100+100…

And this will get written to disk, correct?

And would this work with an application made of a bunch of small files? E.g. it requests the .exe, then the .dll.
It starts reading images.pak but still has the .exe and .dll from previous accesses present on disk, to stop repeated access to Gdrive and speed up streaming?

Not quite the way rclone works.

The default VFS backend doesn’t store anything on disk; it’s all in memory.

You can play a file and, based on the chunk size, it requests that ‘chunk’ of the file and grabs it. Based on your buffer size, it reads ahead to fill that buffer. The buffer is dumped, though, if the file is closed. If you seek ahead in something, it makes a new seek request, you get that part of the file, and the process continues, assuming it’s reading sequentially playing forward.
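As a rough illustration, the chunk and buffer behaviour described above maps onto these mount flags (the remote name `gdrive:`, the mountpoint, and the values are placeholders, not recommendations):

```shell
# Request the file in 128M chunks, let chunk size grow without limit,
# and keep up to 256M of read-ahead per open file in memory.
rclone mount gdrive: /mnt/gdrive \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 256M
```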

It’s multi-threaded, so reading many files at a time happens without a problem, depending on your settings/available bandwidth/memory/etc.

There is the cache backend which stores ‘chunks’ of files on disk based on the config.

I personally stream large 4K movies and don’t see much issue with reading ahead or moving around. The only caveat would be Plex transcoding a 4K movie down to 1080p for someone, as that’s an expensive transcode and more of a Plex/4K-movie issue than anything related to rclone.

Reading the documentation, I get confused because of these portions:

Directory Cache

Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend.

And this

```
--cache-dir string                   Directory rclone will use for caching
```

This has nothing to do with vfs?

dir-cache-time controls how long the directory/file structure is kept in memory. So if you do an ls, for example, it keeps that in memory until the time expires or a polling event detects a change. On my mount, for example, I keep a very large cache time, as that means fewer API hits in general.
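For instance, a large directory cache time combined with polling might look like this on the mount command (the values are illustrative only, not my exact settings):

```shell
# Keep the directory/file listing in memory for up to 96 hours,
# but check the remote for changes every 15 seconds.
rclone mount gmedia: /mnt/media \
  --dir-cache-time 96h \
  --poll-interval 15s
```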

By default, the VFS backend does no caching of anything. There are different cache modes, but they aren’t really used for streaming, as it somewhat defeats the purpose if you have to download an entire file to stream it. That’s all written up here:

https://rclone.org/commands/rclone_mount/#file-caching

Using vfs-cache-mode full would keep a disk-based copy of a file, but it downloads the entire file before it can stream. So if I wanted to play a 45GB 4K movie, it would download the whole file first before playing, which means all of that disk space plus the delay of downloading the file, so it’s not really a solid use case for streaming.
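If you wanted the disk-backed mode anyway, it’s just a flag on the mount. A minimal sketch, where the cache location and the size/age caps are assumptions rather than defaults:

```shell
# Cache whole files on local disk, capped at 50G total,
# evicting anything not accessed in 72 hours.
rclone mount gmedia: /mnt/media \
  --vfs-cache-mode full \
  --cache-dir /var/cache/rclone \
  --vfs-cache-max-size 50G \
  --vfs-cache-max-age 72h
```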

Yes, I’m basing my setup on yours.

“I keep a very large cache time as it’s less API hits in general.”
This means you do cache on disk, doesn’t it?
Or is the cache time you mentioned based on RAM?

And if I want to use this:

https://rclone.org/cache/
Following this setup will end up with a 10GB file holding all the information from the cloud.

Questions:

  1. During the setup process there’s no way to specify the path for this 10GB file. Do I set this while mounting?
    Will it be in the same place where the mount occurs?
    rclone mount remote:cache /mnt/cache ?

  2. Do I use all the other settings I’m using with rclone mount remote:secure, like chunk size, buffer size, etc., but just change remote:secure to remote:cache?

And when accessing the cached files, will it stream?
Is that the way it will work?

As I wrote just above, it’s stored in memory.


For #1, https://rclone.org/cache/#cache-chunk-path is what to configure to point the chunk location to a specific directory and override the default location.

For #2, I’m not sure what your setup is. Are you using encryption? If so, this is the order to set it up:

https://rclone.org/cache/#cache-and-crypt

So it’s recommended to use cloud > cache > crypt.
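Concretely for #1, the chunk location can also be set directly in the cache remote’s section of rclone.conf rather than on the command line. A sketch, where the remote names, path, and size are assumptions:

```
[gcache]
type = cache
remote = GD:media
chunk_path = /mnt/ssd/rclone-cache
chunk_total_size = 10G
```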

I have no idea how to set it up like this. And yes, all my files are in a crypt on gdrive.

So my example would be: I have a folder in the root of my GD called “media”.
Everything under that folder is encrypted.
The cache points to the drive.
The crypt points to that folder through the cache.

```
[GD]
type = drive
client_id = 
client_secret = 
token = {"access_token":"","token_type":"Bearer","refresh_token":"","expiry":"2019-03-29T11:15:51.576279107-04:00"}

[gcache]
type = cache
remote = GD:media
chunk_size = 32M
info_age = 5d
chunk_total_size = 50G

[gmedia]
type = crypt
remote = gcache:
filename_encryption = standard
password = 
password2 = 
directory_name_encryption = true
```

Going to try this.

  1. In your remote you specify 50G; can you point those database files to a specific place directly within the config file?

  2. And what should I mount, the cache or the crypt?

  1. That’s the https://rclone.org/cache/#cache-chunk-path setting, as noted a few posts above.
  2. You’d mount the crypt as you want to access the decrypted files.
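Putting the two answers together, mounting the crypt from the config above might look like this (the mountpoint and the extra flags are assumptions, just to show the shape):

```shell
# gmedia is the crypt remote, layered over gcache, layered over GD.
# Mounting it exposes the decrypted files.
rclone mount gmedia: /mnt/media \
  --dir-cache-time 72h \
  --buffer-size 256M
```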
