Google Drive. How to have video playback start immediately while caching on disk?

Plex/Emby/Kodi all do something similar to an ffprobe/mediainfo command to get the codecs and such for the file. I’m not as familiar with Kodi as I’ve never really used it for more than a few minutes.

Chunked or partial reading means that it does a request for a piece of the file at a time. If it’s set too big, you can get some waste, but Plex closes the files so fast that it really is insignificant in terms of the initial library scan. File size matters somewhat, but usually what’s in the container dictates how long a file takes to scan.
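For illustration, this is roughly how the chunked-reading flags look on a mount; the remote name, mount point and sizes here are just example values, not recommendations:

# each open file is read with ranged requests; the first request asks for
# --vfs-read-chunk-size, and the chunk size doubles on sequential reads
# until it reaches --vfs-read-chunk-size-limit
rclone mount drive: /mnt/drive --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G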

I usually see anywhere from 2-10 seconds per file depending on the file.

A bigger buffer-size would help for direct playing, or any process reading the file sequentially. For Plex, this means Direct Play. I personally just leave the buffer at the default value and I’ve never seen an issue with it.

Having a large buffer size means that if a bunch of files are open at once, you can potentially run out of memory on the system.
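As a rough back-of-the-envelope example, assuming the worst case where every open file fills its buffer (and a default --buffer-size of 16M):

# 20 open files x 64M --buffer-size ≈ 1280M of RAM just for read buffers
# 20 open files x 16M (the default) ≈ 320M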

My general approach is to keep it simple and leave everything at defaults unless I have a very specific reason to change it.


Ok, did some experiments using:

rclone mount drive: Q: --allow-other --allow-root --tpslimit 10 --fast-list --dir-cache-time 96h --vfs-read-chunk-size 128M --buffer-size 64M --timeout 1h

I have a couple of questions:

  1. Considering the API limit for Drive, is it correct to set tpslimit at 10?
  2. Is there anything I can do to speed up directory listing? I’m using fast-list but I’m not sure if it actually helps or not.
    rclone caches directory structures and keeps them for 96h, per --dir-cache-time. Is there a way to set it up so that the directory structure is cached on disk and updated only when needed (so never expiring unless something changes)?
    Also, since my main use will be through an HTPC that is placed in standby after use (thus keeping memory state intact for when it’s woken up), would there be any detrimental effect in raising dir-cache-time to… I don’t know, 960h? The machine, as mentioned previously, has 16GB of RAM, so I don’t think it would be a problem in that regard. But I’m not sure whether the cache would survive the standby/wakeup process.
  3. Does rclone use chunk downloading by default? Meaning: if one does not specify --vfs-read-chunk-size is rclone using chunk downloading or not? This is more a personal curiosity than anything else, really.

Thanks!

No reason to set the TPS limit for a mount.
fast-list does nothing on a mount so you can remove it.
You can keep the dir-cache-time as high as you want; it’s only kept in memory and not on disk. Polling will pick up changes and expire what’s needed, and that normally happens every minute.

Yes, chunked downloading is the default. You can remove all those flags and just use the defaults.
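In other words, a stripped-down version of your command could look something like this (just a sketch, keeping only the flag that actually changes behaviour for your use case; Q: is your drive letter from before):

# tpslimit, fast-list and vfs-read-chunk-size removed: chunked reading is
# already the default, and the other two do nothing useful on a mount
rclone mount drive: Q: --dir-cache-time 96h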


One extra question on this.
By reading here https://rclone.org/commands/rclone_mount/ I see that “Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.”

My use case sees me adding stuff to Drive from a different machine than the one I use to access the content loaded there (accessing it through rclone mount). Does the above mean that new content wouldn’t be “seen” by the rclone mount until the cache expires? If that is the case, I would need to keep dir-cache as low as possible, unfortunately.

No, changes are picked up via polling on Google Drive so the dir-cache time doesn’t matter.

 --poll-interval duration                 Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
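So if you ever want changes picked up faster (or slower), it’s just a matter of adjusting that flag; for example (values are illustrative):

# poll Google Drive for changes every 30s instead of the default 1m;
# must stay smaller than --dir-cache-time
rclone mount drive: Q: --poll-interval 30s --dir-cache-time 96h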

Ok, this is how I understood it before. But then what is the documentation referring to when it talks about “changes done on the remote will only be picked up once the cache expires”?

I’m not questioning what you explained, just wondering if, maybe, the wording in the docs could be clearer in that passage.

And man, thank you so much. You do an incredible job here, and you reply so fast I’m… humbled, really.

It does depend on the backend, as not every backend supports polling. The docs could be a bit clearer about that, though.

I’ll see if I can get some time on a pull request to add that as it makes sense.


I don’t think what I’m about to ask exists but still, it’s worth asking, in case I missed it somehow.

After a reboot (not standby/wake) cycle, obviously the internal (memory) dir listings cache is lost.
Is there a way to have only that on disk but no actual file caching?

When you scan a library for changes after a reboot, there’s a significant delay because the machine needs to fire up a series of drive.files.list calls. I’m not worried about the number of calls, which is far lower than any limit there might be; I’m simply annoyed by the time the first scan after a reboot takes.

Maybe this could be a valid feature request if it’s currently impossible?

If you want a persistent cache, that exists via the cache backend, but to my knowledge there are no plans to make the dir-cache disk-based.

I use the remote control (--rc) features and run a command to refresh the directory listings on boot.

/usr/bin/rclone rc vfs/refresh recursive=true

takes about 30 seconds for me.


Ok, thanks.

In my scenario I turned rclone into a Windows service, so it starts with a system account at boot.

You’re saying that if I mounted with the --rc argument I could issue a “rclone rc vfs/refresh recursive=true” command after the mount instruction and grab the whole directory structure at boot, thus having it in RAM till the next reboot?

(unfortunately Windows still needs reboots more often than linux)

Yes, you can do that. I’m not sure how to do things in order like that on Windows though.
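For what it’s worth, a rough sketch of the ordering, assuming the mount runs as your existing service and you trigger the refresh from a startup script or scheduled task afterwards (the # lines are just explanatory comments, and the delay is up to you):

# 1. the service starts the mount with the remote control API enabled
rclone mount drive: Q: --rc
# 2. once the mount is up (give it a few seconds), warm the directory cache
rclone rc vfs/refresh recursive=true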

Ok, very good to know. Thank you.
As far as the order of instructions is concerned, I can handle that.
Thank you again.
