I am not planning on using rclone to upload stuff to Drive. I only need it to access my media through Kodi or DirectShow based players.
Through Google Drive File Stream, a file gets cached locally, but playback starts within a couple of seconds (gigabit connection). What I would like to get through rclone is playback that starts right away while the download to cache proceeds uninterrupted until the file has been completely cached.
I’ve had mixed luck getting this through GDFS (good on some machines, bad on others), hence my looking into rclone, which appears to be much more configurable.
Whatever settings I end up with should account for the fact that Kodi has library functionality that scans media directories for new content and the like (less intensive than Plex, from what I can tell from friends’ experience).
Thanks again.
The cache backend can cache the file to disk while your stream starts, at the expense of a few more seconds of delay at start time, in my experience. Personally, I just use VFS cache mode writes and serve over WebDAV/HTTP; even on a remote VPS, things start in a few seconds in Kodi or any other player.
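If it helps, here’s a minimal sketch of that setup (the remote name and paths are placeholders; adjust them to your own config):

rclone serve webdav gcrypt: --addr :8080 --vfs-cache-mode writes --read-only

or the equivalent as a mount:

rclone mount gcrypt: /mnt/media --vfs-cache-mode writes --allow-other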
First of all, thanks to the both of you for your replies.
I’m a total newbie here, so be patient while I try to wrap my head around the concepts you explain.
When you speak about the cache backend, is this what you are referring to? https://rclone.org/cache/
Considering I have a 1000 Mbps download connection, what would be “ok” settings? I don’t need to cache a lot before playback starts; it can start as soon as technically possible, as my download speed far exceeds the bitrate of anything I could be watching.
The idea is to minimize download events, as those have earned me 24-hour bans in the past (there’s no way to get Google to state clearly how they count multiple download events for the same video file when it’s watched via direct-connection streaming).
Considering that there are library scans going on (typically once a day), I need to be sure that this “cache backend” doesn’t attempt to download whole files while they’re being accessed for simple scanning reasons.
The machine in use has 16GB of RAM and a 512GB free SSD for caching purposes.
Edit: I see that “Cache” has several options specific to Plex, which I don’t use and have no intention of using. I hope it can be useful for my case just the same.
The regular VFS backend and the cache backend both do chunked downloading. The legacy behavior was the reason standard rclone used to cause bans, as it made Google think you downloaded the file many times (with Plex, for example). That was fixed around mid-2018, so assuming you grab the latest version (1.47 as of today), you’ll have no issues.
Neither VFS nor Cache does anything on its own; it’s all Plex or Emby or whatever player that dictates how much of a file comes down.
Plex is the one I’m most familiar with: it grabs a few pieces to analyze the file when it’s first added, and from that point on it’s just size/modtime checks, which download nothing.
I use an encrypted GD backend for my media and a very simple mount:
/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002 --rc
--rc can be removed; it’s only used if you want to interact with the mount via remote control commands.
--umask 002 sets permissions if you want user and group access, along with some other access.
--timeout 1h helps with pausing and resuming playback, as it keeps a connection open for an hour before closing it out.
--dir-cache-time 96h is high because that’s how long the directory/file structure is kept in memory, and the longer the better, since any changes are picked up every minute via polling anyway.
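As an example of what --rc gets you (these are standard rclone rc commands, assuming the default listener on localhost:5572):

rclone rc vfs/refresh recursive=true
rclone rc core/stats

The first pre-warms the directory cache; the second shows transfer stats for the running mount.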
I double-clicked a video file and it opened in MPC-HC (a pretty standard DirectShow player). In C:\Users\ashlar\AppData\Local\rclone\cache-backend\cache-gdfs\ I saw several 250MB files being downloaded… but playback was not starting. I stopped it after 11 files had been created, for a total size of 2.62GB (the whole file is 13.1GB). 11 download events were registered in the admin console.
Do you have any clue as to why playback isn’t starting? Files on Drive are not encrypted or anything. They play normally when accessed through Google Drive File Stream.
The problem is that every chunk creates a download event, which is what I’m trying to minimize. Or are you saying it doesn’t matter if these are generated through rclone?
In any case, I tried playing a different movie with Kodi and, again, playback would not start. I stopped it after 2.69GB had already been saved to the cache (12 files, named by their byte offset, such as 0, 262144000, etc.).
I wonder if there’s anything “special” that needs to be configured for this to work normally in Windows.
I’m using this to mount:
rclone mount --allow-other --timeout 1h --cache-db-path E:\rCloneCache cache-gdfs: Q:
Edit: I tried opening the “0” file with MPC-HC and it played perfectly; it is, in reality, the first chunk of the .mkv file. There must be some sort of “disconnect” at play here… :-/
The issue before was that each time a file was read, it would count as a complete download of that file.
So if you had a 10GB file and it got read 10 times, it counted as 100GB for the download quota for that file. The ban/issue/error was that files were exceeding download quota.
If you can manage to exceed the API quota per day, which is 1 billion requests, good luck.
I barely push 20k API hits per day with 60TB of data and 4-5 people streaming.
If you want to use cache, you need a small chunk size or it’s going to be tough.
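As a rough illustration (the remote names match the ones you’ve been using, but the wrapped gdfs: remote and the values are my assumptions; tune to taste), the cache section of rclone.conf would look something like:

[cache-gdfs]
type = cache
remote = gdfs:
chunk_size = 10M
chunk_total_size = 10G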
You also need to make and use your own client ID/API key:
Thanks once more, I cannot tell you how much I appreciate the time and patience. I’m sadly aware of my ignorance here so… really thanks!
So you’re telling me that I could safely ignore the number of download events and mount without cache or vfs configured? Like… you’re 100% sure that the number of download events displayed at https://admin.google.com/AdminHome?pli=1&fral=1#Reports:subtab=drive-audit doesn’t matter, and the only thing that matters is the actual amount of data being transferred?
I’m not afraid of the number of daily API hits; for the reasons you give, I’ll never, ever risk reaching 1 billion hits. But the download quota for a single file… a 20GB file, hit hundreds of times… it would be easy to reach 10TB downloaded by watching a couple of movies, if each download event were counted at full file size.
Edit: I say without vfs because the only reason I thought I needed vfs was to minimize the number of download events. I could use a memory buffer if that’s not important (I have 16GB after all). Or is vfs useful in my scenario?
Also, yes, I created my client ID/API key. I read the instructions as carefully as I could before beginning.
rclone only does chunked downloading, so you don’t run into download-quota issues even if, for example, you run mediainfo on a 50GB movie 1,000,000 times.
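To illustrate the idea at the HTTP level (a generic example with a placeholder URL, not rclone’s literal request): a chunked read is a ranged GET, so only the requested bytes are transferred:

curl -H "Range: bytes=0-134217727" https://example.com/movie.mkv -o first-chunk.bin

That request pulls down only the first 128MB of the file, no matter how big the file is.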
I use the mount command I shared above on an encrypted GD without any issues. I am a Plex user though, so that’s my use case: a Linux server running Plex with Sonarr/Radarr/etc.
So “chunked downloading” means that the API call specifies the amount of data being transferred for every download event? Is that what’s happening, and is this why one doesn’t risk surpassing the download quota?
Without the cache backend, using just vfs, playback starts. I’ll see if any Windows user has more suggestions; in the meantime, thank you Animosity022.
If you hit play, does it start right away or do you see it download the entire file? Regardless of the logs, it should be pretty easy to figure that out.
You can see some key words in those logs like ‘offset’ and “actual_length”.
You are trying to solve something that simply just isn’t a problem.
There are plenty of API calls available in a single day; you can’t realistically go over the limit, as Google only allows ~10 requests per second with their default quotas, which works out to well under 1 million calls per day, nowhere near 1 billion.
The term download ‘event’ doesn’t really apply here and just confuses other folks, as it suggests an issue that doesn’t exist.
You have a few options in rclone (example commands for each are sketched after this list):
- Standard: stream a file using just memory; this is more or less the default behavior.
- VFS cache mode: keep a whole file on disk for a period; this requires downloading the entire file.
- Cache backend: does chunked downloading and retains parts of a file; a bigger chunk storage size keeps files around longer.
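Roughly, the three look like this (gdrive: and gcache: are placeholder remote names, gcache: being a cache remote that wraps gdrive:):

rclone mount gdrive: /mnt/media --buffer-size 64M
rclone mount gdrive: /mnt/media --vfs-cache-mode full
rclone mount gcache: /mnt/media --cache-chunk-size 10M --cache-chunk-total-size 10G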
24-hour bans don't happen for 'no good reason'; there is always something that causes them.
I can't speak to Drive File Stream because I don't use it.
If you played a file using a version of rclone prior to June 2018 and didn't use the cache backend, that would cause the issue with rclone in particular.
There are many ways to end up with an old package if you don't download the latest version from the site. For example, Ubuntu has versions in their PPA that are ancient:
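The safe route is to install straight from rclone.org and then confirm what you're actually running:

curl https://rclone.org/install.sh | sudo bash
rclone version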
If you have Drive File Stream questions, I'm sure they can help out with those particular things. We can happily answer any rclone questions here especially me with Plex as that's my use case.
I've got ~60TB of encrypted data and switched over to rclone back in June 2018 once the new release hit. So I'm approaching almost 10 months now and never have seen a 24 hour ban nor any major API hits.
Thanks. No more GDFS talk, I agree. I was providing context, but it became overly long.
Let me bring back a couple of questions from my post in the old discussion that I (wrongly) resurrected.
Would it make sense to set --vfs-read-chunk-size very low for the initial library scan and then raise it once the library has been scanned (as subsequent day-to-day additions would be a fraction of the initial scan)? Or is there a shared “optimum” value for scanning purposes?
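Just so I’m sure I’m asking about the right knobs, is this roughly what a scan-friendly mount would look like (assuming gdfs: is the plain Drive remote underneath my cache-gdfs: one; the values are guesses on my part)?

rclone mount gdfs: Q: --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off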
I use Kodi, not Plex, but the scanning process is quite similar. I think Kodi scans for media info through the ffmpeg code it uses internally, so in that respect it should be similar to Plex, I guess.
Also, as my experimentation has shown me, with vfs read chunks there’s very little video buffered ahead, only the 64MB memory buffer to counter network hiccups, especially in high-bitrate scenarios, and I might want to consider increasing the memory buffer to something like 256 or 512MB. Are there any cons I should bear in mind, were I to do this?
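Concretely, I mean bumping --buffer-size on the mount, something like:

rclone mount gdfs: Q: --buffer-size 512M

(same assumed gdfs: remote as above; 512M is just the upper value I mentioned).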