Rclone Mount on Windows behaving very odd when playing back media


What is the problem you are having with rclone?

I'm facing an intermittent issue with my rclone mount and Plex server, and I'm seeking some assistance in troubleshooting and resolving it. Here's a description of the problem:

Issue: Sometimes, when attempting to play a file on Plex, it starts playing immediately without any problems. Other times, the same file loads indefinitely, or Plex displays error code "s1003", which relates to network connectivity. I have tried other files and see the same behaviour.

Additional Information:

  • Previously, I had three mounts set up, but I have confirmed that there is now only one mount configured.
  • The problem occurs sporadically, making it difficult to identify the exact cause.

I would greatly appreciate any insights or suggestions you may have regarding this issue. Please let me know if you need any further details to assist in troubleshooting.

Thank you in advance for your help!

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: Microsoft Windows 11 Pro 21H2 (64 bit)
  • os/kernel: 10.0.22000.1817 Build 22000.1817.1817 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --dir-cache-time 72h --poll-interval 15s --buffer-size 128M --drive-chunk-size 128M --transfers=15 --checkers=50 --log-file=C:\rclone\rclonemount.txt --log-level DEBUG --vfs-read-chunk-size 128M --vfs-read-ahead 128M --vfs-fast-fingerprint --vfs-cache-poll-interval 30s --vfs-cache-max-size 50G --vfs-cache-max-age 12h --vfs-read-chunk-size-limit off --tpslimit 8 CloudDrive: H: --config "C:\Users\AppData\Roaming\rclone\rclone.conf" --vfs-cache-mode full

The rclone config contents with secrets removed.

[Workspace]
type = drive
client_id = 
client_secret = 
scope = drive
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2023-05-30T12:00:13.0783469+12:00"}
team_drive = 
server_side_across_configs = true

[CloudDrive]
type = crypt
remote = Workspace:/crypt
password = 
server_side_across_configs = true

A log from the command with the -vv flag

https://pastebin.pl/view/ddbe3f03

That looks like about 70 seconds of a log.

When did the error happen?

My bad, I deleted the log and started fresh, but the error didn't occur during that window. Let me capture a log when the error pops up.
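While waiting for the error to reappear, the --log-file from the mount command can be filtered for suspicious lines so it gets caught as it happens. A minimal Python sketch; the keyword list is my own guess at what matters, not an official rclone convention, and the path is the one from the mount flags:

```python
import time

# Keywords that usually indicate trouble in an rclone debug log.
# This list is an assumption, not rclone documentation.
KEYWORDS = ("ERROR", "Failed", "retry")

def flag_log_lines(lines, keywords=KEYWORDS):
    """Return log lines containing any of the trouble keywords."""
    return [ln.rstrip("\n") for ln in lines if any(k in ln for k in keywords)]

def follow(path, keywords=KEYWORDS):
    """Tail the log file and print flagged lines as they appear."""
    with open(path, encoding="utf-8", errors="replace") as f:
        f.seek(0, 2)  # jump to end of file, like `tail -f`
        while True:
            line = f.readline()
            if not line:
                time.sleep(1.0)  # wait for rclone to write more
                continue
            if any(k in line for k in keywords):
                print(line.rstrip("\n"))

# follow(r"C:\rclone\rclonemount.txt")
```

Running the follower in a spare terminal during playback would pinpoint the exact log timestamp of a failure instead of searching a multi-hour debug file afterwards.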

Here's the log on Mega (I tried pasting the whole debug log on pastebin but it kept crashing for me).

I didn't get an error at first, but then I tried playing the Aviator movie at around 5:59 PM and the file took a while to start playing,
whereas other files started playing in 10-15 seconds.

A few minutes later, I tried playing Air on my PC whilst playing Black Mirror on my phone.
I got the error on my PC when playing Air, whereas Black Mirror was working fine.
The timestamp for this error is around 6:09 PM.

From the rclone side, that looks like a clean log: no errors, retries, or anything odd going on.

Most of the playback is coming from the cache as well, so it's not even reading from Drive.

2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1035931286 Size:32768} in [{Pos:0 Size:1170206720} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1035964054 Size:32768} in [{Pos:0 Size:1170206720} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1035996822 Size:131845} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036128667 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036161435 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036194203 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036226971 Size:96316} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036323287 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036356055 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036388823 Size:107609} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036496432 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036529200 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036561968 Size:73221} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036635189 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036667957 Size:32768} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
2023/05/30 18:10:13 DEBUG : vfs cache: looking for range={Pos:1036700725 Size:143042} in [{Pos:0 Size:1171255296} {Pos:2739044352 Size:164622336} {Pos:3514966016 Size:31820}] - present true
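The cache-hit claim can be checked mechanically by counting the "present true" / "present false" lookups in the debug log. A quick Python sketch; the regex simply mirrors the log lines quoted above:

```python
import re

# Matches rclone VFS cache lookup lines like the ones above, e.g.
# "... vfs cache: looking for range={Pos:1035931286 Size:32768} in [...] - present true"
LOOKUP_RE = re.compile(
    r"vfs cache: looking for range=\{Pos:(\d+) Size:(\d+)\}.*- present (true|false)"
)

def cache_hit_stats(log_lines):
    """Return (hits, misses) for VFS cache range lookups in the log."""
    hits = misses = 0
    for line in log_lines:
        m = LOOKUP_RE.search(line)
        if m:
            if m.group(3) == "true":
                hits += 1
            else:
                misses += 1
    return hits, misses
```

A playback window that is almost all hits means the data was served from the local cache rather than fetched from Drive, which is what the excerpt above shows.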

That would point me away from anything rclone and something local.

It could be transcoding, bad wifi, or a Plex server that can't keep up; something along those lines.

That seems most likely as well, but I thought I would ask just to confirm! Thanks for responding!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.