Drive File Stream uses Dokany, with Google contributing to the main codebase (https://github.com/google/google-drive-dokany).
It has a cache mode enabled by default, where files being accessed are temporarily copied from the cloud to local storage as they are read. What pointed me in rclone's direction is that I've experienced very different behavior on the two machines I'm using.
One starts playback and starts caching, with caching continuing while playback is ongoing and stopping only when 100% of the file has been cached (on a gigabit connection that means the file is safely cached within a few minutes of starting a movie). This generates a really small number of download events in the admin console (API calls, I guess, although there's no way to check API calls for GDFS, as that's an internal Google app). If playback is stopped before caching is complete, caching stops immediately.
The other machine acts more like what I'm seeing with rclone: it caches a small portion of the file being played back and resumes caching only once the player asks for more chunks. Start, stop, start, stop. This generates many more download events.
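For comparison, that start/stop pattern is essentially how rclone's chunked VFS reader works. The flags below are the real rclone knobs for tuning it; the remote name `gdrive:` and the mount point are placeholders, and this is only a sketch of the settings I experimented with, not a recipe that reproduces the first machine's behavior:

```shell
# "gdrive:" and /mnt/gdrive are hypothetical names for this example.
# --vfs-cache-mode full          : cache file data on local disk as it is read
# --vfs-read-ahead 1G            : keep downloading ahead of the read position
# --vfs-read-chunk-size 128M     : initial size of each ranged download request
# --vfs-read-chunk-size-limit off: let successive chunks grow without bound
# --buffer-size 64M              : in-memory read buffer per open file
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode full \
  --vfs-read-ahead 1G \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 64M
```

Larger read-ahead and unbounded chunk growth reduce the number of separate range requests, but reading still pauses once rclone gets far enough ahead of the player, so download events keep accumulating in bursts.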
Since for my use case I'd rather spend transfer quota than API calls, I would have loved to get the first behavior on the second machine (which is my main playback machine), but, alas, there appears to be no way to configure GDFS to make it happen, even though the disk space and RAM available (32GB and 16GB respectively) far exceed what's being used. To this day the difference remains a mystery.
Sorry for the off-topic, I just wanted to provide context. From what I've seen so far through experimentation, there's no way to get the first behavior with rclone either.