Cloudflare R2 Plex video scrubbing performance

What is the problem you are having with rclone?

I have been using a Backblaze B2 mount with Plex with no issues.
I have decided to try out Cloudflare R2 just to see how it performs.

When resuming playback in Plex, or scrubbing forward in a video, it appears as though the R2 mount begins to download the entire file from the start, until it reaches the point in the video I would like to play.

This does not happen with B2, despite using the same mount settings.

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.1

  • os/version: Microsoft Windows 11 Pro 21H2 (64 bit)
  • os/kernel: 10.0.22000.795 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.18.5
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Cloudflare R2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

My B2 mount:

rclone mount Plex: X: --vfs-cache-mode full --no-checksum --no-modtime -v -vv --progress

My R2 mount:

rclone mount R2: X: --vfs-cache-mode full --no-checksum --no-modtime -v -vv --progress

The rclone config contents with secrets removed.

[Plex]
type = b2
account = **********
key = *******************************
hard_delete = true
 
[R2]
type = s3
provider = Cloudflare
access_key_id = *******************************
secret_access_key = *******************************
region = auto
endpoint = https://*******************************.r2.cloudflarestorage.com/

A log from the command with the -vv flag

Apologies, log was too big for pastebin, so I tried zerobin.

Before capturing this log, I mounted R2, started a movie, scrubbed midway through, closed Plex and unmounted.
Then I remounted and clicked resume at the point I had skipped to.

https://zerobin.net/?fea9ea376b1beb5b#y9ddJUaCkccXlG6meqlVoNXX2kcdXUOkPVjRg7IJFXM=

After doing this I mounted from B2 instead and clicked resume. It happily started to play right away.

Thank you for any help you can offer.

I had a quick look at the log and it is definitely reading from the start of the file.

However, I can see in the log that this is what the application is requesting, so for some reason Plex is reading the file from the start.

I don't know why though!

Very confusing!
I should have mentioned that I've tried the Windows desktop Plex app, Plex in the browser and Plex for Android, all on different devices. The only common component is the desktop machine the R2 remote is mounted on.
In all of these cases it reads from the start, but ONLY for my R2 mount. B2, as mentioned, works perfectly fine.

Also, if I open a video through the file explorer in VLC or MPV, it does the same thing. It will play from the start just fine, but the second I skip ahead, it has to download everything from the start.

Here are some logs showing that.
https://zerobin.net/?36c169517181bbbf#W+Pm+0Hsbib9ZSF0SZJJKC+Wr1xellbNxa8EN220g98=

B2 of course, works fine.

So I don't think it's a Plex issue... It may not be an rclone issue either, but it's beyond my expertise to diagnose the problem now :smile:

Is this something to do with what is in the cache? Note that when you change the Plex: to R2: it won't share any cached items between the two.

When you say

...it has to download everything from the start.

How do you know that is happening?

Is this something to do with what is in the cache? Note that when you change the Plex: to R2: it won't share any cached items between the two.

I believe the cache just contains other things I've tried to watch. If I mount without the VFS cache it continues to happen, and the phenomenon still only affects R2.

How do you know that is happening?

Educated guess. Say a movie is 2GB in size: if I skip to the halfway point and watch my data usage, it will download 1GB and then the movie will begin to play.
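
For something more precise than watching overall data usage, one option (a sketch, not something tried in the thread) is to start the mount with the remote control API enabled and read rclone's own transfer counters after seeking; whether those counters capture every VFS read is worth double-checking against the system's network monitor. Assuming the R2 remote and drive letter from the config above:

rclone mount R2: X: --vfs-cache-mode full --no-checksum --no-modtime --rc -vv
# in another terminal, after resuming/seeking in the player:
rclone rc core/stats
# the "bytes" counter should show roughly how much rclone has pulled from the remote

If that counter jumps by roughly the whole file size on an R2 seek but only by the remaining half on B2, that would put a number on the difference.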

Just to rule out backend differences, can you try using the S3 API (https://help.backblaze.com/hc/en-us/articles/360047425453-Getting-Started-with-the-S3-Compatible-API) for Backblaze too?

Just to rule out backend differences, can you try using the S3 API (https://help.backblaze.com/hc/en-us/articles/360047425453-Getting-Started-with-the-S3-Compatible-API) for Backblaze too?

Done.
Here's the new config:

[Plex]
type = b2
account = ***************
key = ***************
hard_delete = true

[R2]
type = s3
provider = Cloudflare
access_key_id = ***************
secret_access_key = ***************
region = auto
endpoint = https://***************.r2.cloudflarestorage.com/

[B2]
type = s3
provider = Other
access_key_id = ***************
secret_access_key = ***************
region = us-west-000
location_constraint = s3.us-west-000.backblazeb2.com
acl = private
endpoint = s3.us-west-000.backblazeb2.com

And the logs of running B2 with the S3 API:
https://zerobin.net/?31d49aafdbd3c7a7#wlzBEJRBtZ2fXGGZQlZpodMn6YVKSS3a3bn6sxnir8Y=

It worked just fine. I skipped halfway through the movie and playback began almost instantly, with no spike in data usage and no waiting for it to download the first half. :person_shrugging:

In an attempt to eliminate some variables, I installed rclone on my MacBook Pro, which I hadn't done before, to see what would happen.

- os/version: darwin 12.5 (64 bit)
- os/kernel: 21.6.0 (x86_64)
- os/type: darwin
- os/arch: amd64
- go/version: go1.18.5
- go/linking: dynamic
- go/tags: cmount

I've used the exact same config as the one in the post above.
I've observed the exact same behaviour: both of the Backblaze mounts work perfectly, while the R2 mount continues to download the entire file from the start when skipping ahead.

I think it would be safe to say it's not a fault of the Windows desktop, as I can replicate the issue on my MacBook, so it can presumably be narrowed down to either rclone or Cloudflare R2?

Can you also add the output of stat for the same file in both the R2 & B2 S3-type mounts?

I attempted to replicate this

I uploaded a 1GB file to both s3 and r2.

rclone test makefile 1G 1G.bin
rclone -vv -P copy --s3-upload-cutoff 2G 1G.bin s3:rclone-video-test
rclone -vv -P check --s3-upload-cutoff 2G 1G.bin s3:rclone-video-test
rclone -vv -P copy --s3-upload-cutoff 2G 1G.bin r2:rclone-video-test
rclone -vv -P check --s3-upload-cutoff 2G 1G.bin r2:rclone-video-test

I then used cmd/mount/test/seek_speed.go in the rclone source to measure the seeking speed on a mount with the same parameters as you've been using.

go run ./cmd/mount/test/seek_speed.go /mnt/tmp/1G.bin

With --vfs-cache-mode full

rclone mount s3:rclone-video-test /mnt/tmp --vfs-cache-mode full --no-checksum --no-modtime -vv
rclone mount r2:rclone-video-test /mnt/tmp --vfs-cache-mode full --no-checksum --no-modtime -vv
  • s3: That took 45.137799581s for 25 iterations, 1.805511983s per iteration
  • r2: That took 49.733133988s for 25 iterations, 1.989325359s per iteration

And with --vfs-cache-mode off

rclone mount s3:rclone-video-test /mnt/tmp --vfs-cache-mode off --no-checksum --no-modtime -vv
rclone mount r2:rclone-video-test /mnt/tmp --vfs-cache-mode off --no-checksum --no-modtime -vv
  • s3: That took 18.418467407s for 25 iterations, 736.738696ms per iteration
  • r2: That took 21.705511498s for 25 iterations, 868.220459ms per iteration

So r2 is slightly slower than s3 but there is no sign that it has to download the entire file before seeking.

I tried a different test, namely seeking to near the end of the file and downloading the last bit of it using standard Unix tools. This was using the --vfs-cache-mode full mount as above.

s3

$ time dd if=/mnt/tmp/1G.bin bs=1M skip=999 | pv | md5sum -
26214400 bytes (26 MB, 25 MiB) copied, 4.99217 s, 5.3 MB/s

355b2edd6d8730af90be837ae1b86f8a  -

real	0m4.994s
user	0m0.046s
sys	0m0.021s

r2

ncw@dogger:~/go/src/github.com/rclone/rclone$ time dd if=/mnt/tmp/1G.bin bs=1M skip=999 | pv | md5sum -
26214400 bytes (26 MB, 25 MiB) copied, 5.50131 s, 4.8 MB/s

355b2edd6d8730af90be837ae1b86f8a  -

real	0m5.503s
user	0m0.039s
sys	0m0.029s

So again very similar results. R2 is a bit slower but not much in it. No sign of having to download the entire file first.
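
Another way to exercise a seek without a mount or a player at all (a sketch using rclone cat's range flags against the test objects uploaded above) is to request a small slice from deep inside the object on each remote and time it:

# read 10 MiB starting roughly 1 GB into the object, straight from each backend
time rclone cat --offset 1000000000 --count 10485760 s3:rclone-video-test/1G.bin > /dev/null
time rclone cat --offset 1000000000 --count 10485760 r2:rclone-video-test/1G.bin > /dev/null

If the R2 call took about as long as downloading the whole object rather than a few seconds, that would point below the mount layer.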


@flashchaser can you think of a way I can replicate this problem without Plex?

I'll just note that I found and reported lots of bugs in R2 during the private beta period, and I'm guessing that this is probably caused by an R2 bug of some kind.

Some of the integration tests for R2 are failing at the moment. Problem areas seem to be Content-Encoding: gzip and versioning. Do you use any of these features?

I would like a more analytical way of describing this - something I can measure!

Can you also add the output of stat for the same file in both the R2 & B2 S3-type mounts?

Is this the Unix stat command? Or a flag for rclone?
Here are the outputs of the Unix command on my Mac:

Backblaze B2
stat -x "Beetlejuice (1988).mkv"
  File: "Beetlejuice (1988).mkv"
  Size: 1881720097   FileType: Regular File
  Mode: (0644/-rw-r--r--)         Uid: (  501/****)  Gid: (   20/   staff)
Device: 54,4   Inode: 9    Links: 1
Access: Wed Aug 10 22:39:44 2022
Modify: Wed Aug 10 22:39:44 2022
Change: Wed Aug 10 22:39:44 2022
 Birth: Wed Aug 10 22:39:44 2022

Cloudflare R2
stat -x "Beetlejuice (1988).mkv"
  File: "Beetlejuice (1988).mkv"
  Size: 1881720097   FileType: Regular File
  Mode: (0644/-rw-r--r--)         Uid: (  501/****)  Gid: (   20/   staff)
Device: 54,5   Inode: 9    Links: 1
Access: Wed Aug 10 22:59:27 2022
Modify: Wed Aug 10 22:59:27 2022
Change: Wed Aug 10 22:59:27 2022
 Birth: Wed Aug 10 22:59:27 2022

can you think of a way I can replicate this problem without Plex?

This happens with any media player I try, on any of my devices :slightly_frowning_face:
I can open any media file, skip halfway and play it to the end. When I measure the data used on my machine, the entire size of the file gets downloaded.

If I repeat that with the backblaze mount, using either backend, the data usage will be almost exactly half the size of the file.

Shall I try creating different buckets, or maybe a new R2 account, and test those?

Some of the integration tests for R2 are failing at the moment. Problem areas seem to be Content-Encoding: gzip and versioning. Do you use any of these features?

Not that I know of, but I am a layman!
All I do is mount the buckets and directly play the files in a media player.

I would like a more analytical way of describing this - something I can measure!

Is there anything specific you'd like me to do in order to make your life easier? I'm happy to collect whatever data you might need, I'm just not sure what to do exactly and I don't want to waste your time.

I had a crack at running the same Unix tools as you @ncw and came up, disappointingly, with the same result.

Backblaze:
time dd if="./Beetlejuice (1988).mkv" bs=1M skip=999 | pv | md5sum
795+1 records in[9.59MiB/s] [              <=>                                 ]
795+1 records out
834192673 bytes transferred in 122.203818 secs (6826241 bytes/sec)
 795MiB 0:02:02 [6.51MiB/s] [               <=>                                ]
25aabc526bb4b2f97dbe251f688522f3  -
dd if="./Beetlejuice (1988).mkv" bs=1M skip=999  0.01s user 3.97s system 3% cpu 2:02.21 total
pv  0.12s user 0.63s system 0% cpu 2:02.21 total
md5sum  2.93s user 0.16s system 2% cpu 2:02.21 total

Cloudflare:
time dd if="./Beetlejuice (1988).mkv" bs=1M skip=999 | pv | md5sum
795+1 records in[12.1MiB/s] [                                   <=>            ]
795+1 records out
834192673 bytes transferred in 65.929528 secs (12652793 bytes/sec)
 795MiB 0:01:05 [12.1MiB/s] [                                  <=>             ]
c39a256be9411ca1125300b465f48f2c  -
dd if="./Beetlejuice (1988).mkv" bs=1M skip=999  0.01s user 4.19s system 6% cpu 1:05.94 total
pv  0.11s user 0.61s system 1% cpu 1:05.93 total
md5sum  3.09s user 0.16s system 4% cpu 1:05.93 total

Despite those results, when I open the movie in VLC on the R2 mount and monitor the traffic in Activity Monitor, the rclone process definitely downloads everything from the start rather than skipping ahead.

The second I change to the Backblaze mount, the rclone process's data usage only increases gradually, and the movie plays immediately.
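
One way to see exactly which byte ranges rclone asks R2 for during a seek like this (a sketch, with a hypothetical log file name) is to run the mount with the HTTP headers dumped to a log and then search it for the Range request headers:

rclone mount R2: X: --vfs-cache-mode off --no-checksum --no-modtime -vv --dump headers --log-file r2-seek.log
# play the file in VLC, skip ahead, stop playback, unmount, then:
grep "Range:" r2-seek.log

If the requests start at the seek offset but the traffic still adds up to the whole file, that would point at the responses rather than at what rclone is asking for.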

A quick untested idea/guess: tail largetextfile.log

It seems to do a seek.

A quick untested idea/guess: tail largetextfile.log

I tested this and can confirm it replicated the issue.

Great, now Plex is out of the equation.

I haven't read the entire thread, but it seems like you can now demonstrate the issue by:

  • creating a test bucket in R2 with a single large text file
  • mounting it with --vfs-cache-mode off -vv --log-file=R2logfile.txt
  • performing a single tail
  • stopping the mount
  • posting the log

@ncw I hope you agree and can take it from here (I have very limited time and mount experience).
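
A minimal sketch of those steps, assuming a hypothetical bucket named r2test, a Unix-style mount point and the file name suggested above:

rclone mount R2:r2test /mnt/r2test --vfs-cache-mode off -vv --log-file=R2logfile.txt
tail /mnt/r2test/largetextfile.log
# stop the mount (Ctrl-C or unmount), then post R2logfile.txt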

I have tried to replicate this with tail and a large text file, but no luck

Here is exactly what I did

Generate text file and upload

rclone test makefile --chargen 1G 1G.txt
rclone -vv -P copy --s3-upload-cutoff 2G 1G.txt TestS3:rclone-video-test
rclone -vv -P copy --s3-upload-cutoff 2G 1G.txt TestS3R2:rclone-video-test

Mount

rclone mount s3:rclone-video-test /mnt/tmp --vfs-cache-mode full --no-checksum --no-modtime -vv
rclone mount r2:rclone-video-test /mnt/tmp --vfs-cache-mode full --no-checksum --no-modtime -vv

s3

$ time tail /mnt/tmp/1G.txt 
<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$
=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%
>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&
?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'
@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'(
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()
BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*
CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+
DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+,
EFGHIJKL
real	0m0.279s
user	0m0.001s
sys	0m0.000s

r2

$ time tail /mnt/tmp/1G.txt 
<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$
=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%
>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&
?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'
@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'(
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()
BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*
CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+
DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+,
EFGHIJKL
real	0m0.268s
user	0m0.001s
sys	0m0.000s

With --vfs-cache-mode off the results are

s3

$ time tail /mnt/tmp/1G.txt 
<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$
=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%
>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&
?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'
@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'(
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()
BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*
CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+
DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+,
EFGHIJKL
real	0m0.382s
user	0m0.001s
sys	0m0.000s
$ time tail -n 5000 /mnt/tmp/1G.txt
real	0m5.999s
user	0m0.003s
sys	0m0.003s

r2

$ time tail /mnt/tmp/1G.txt 
<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$
=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%
>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&
?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'
@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'(
ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()
BCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*
CDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+
DEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ !"#$%&'()*+,
EFGHIJKL
real	0m1.054s
user	0m0.001s
sys	0m0.000s
$ time tail -n 5000 /mnt/tmp/1G.txt
real	0m22.751s
user	0m0.000s
sys	0m0.006s

So the R2 tests with --vfs-cache-mode off are significantly slower, but I don't think it downloaded the whole file.

I repeated this test with a less compressible file, but got the same results.

So I initially tested using tail with one of my existing video files.
I then used the command rclone test makefile --chargen 1G 1G.txt, repeated your steps, and could not replicate the issue...

So it seems it may be specific to the existing files in the R2 bucket?
If so, is there something I can do to check what is different between the files in the separate remotes?

Update: I just tested by uploading a new video file and it doesn't experience the issue, so it is definitely related to the existing files in the remote.
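
One way to look for differences between an old problem file and a freshly uploaded one (a sketch with hypothetical object names) is to list both with their hashes; on rclone 1.59 or later the -M flag should also include the object metadata, which would expose differences such as Content-Type or Content-Encoding:

rclone lsjson --hash R2:bucket/old-problem-file.mkv
rclone lsjson --hash R2:bucket/freshly-uploaded-file.mkv
# on rclone v1.59+, -M should add the object metadata to the output:
rclone lsjson --hash -M R2:bucket/old-problem-file.mkv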

Can you run your rclone mount with

-vv --dump headers

Then do the tail that has the problem and post the log.

Hopefully we'll see in the headers what the problem is.

It sounds like Cloudflare is not respecting the Range request, which is odd, but the headers will show whether that is the case.
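
For reference, one mount-free way to capture those headers for a single problem file (a sketch with a hypothetical object name) is rclone cat with a range and --dump headers. In the healthy case the GET request should carry a Range: bytes=... header and the response should come back as 206 Partial Content with a matching Content-Range, rather than as a 200 with the full body:

rclone cat --offset 1000000000 --count 1048576 -vv --dump headers R2:bucket/problem-file.mkv > /dev/null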
