Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Fair enough, will do

@Animosity022

Is there any advantage in not using --vfs-read-ahead?

vfs-read-ahead really only comes into play when sequentially reading a file for a period of time without it being closed.

Not sure it really matters much for streaming with cache mode full.
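
For illustration, a minimal mount sketch with an explicit read-ahead buffer — the remote name, mount point, and the 256M value are placeholders, not recommendations. With cache mode full, read-ahead just pre-fills the cache beyond --buffer-size while a file is being read sequentially:

```
rclone mount dropbox: /mnt/dropbox \
  --vfs-cache-mode full \
  --vfs-read-ahead 256M
```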

So basically there will be no noticeable difference when streaming?

and another question:
can performance be tweaked by changing --vfs-read-chunk-size, or do you think it will not be noticeable with the standard chunk size?

vfs-read-chunk-size is the size of the HTTP range request rclone makes when reading data.

No reason to change it: if you make it too small, it does too many API hits; if you make it too big, it fetches data it doesn't need.

The default is a good balance between the two.
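
For reference, here are the defaults written out explicitly — a sketch only, with the remote name and mount point as placeholders. The chunk size doubles with each successive request, up to the limit ("off" means unlimited growth):

```
rclone mount dropbox: /mnt/dropbox \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off
```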

I've just got a nasty email from Google which looks like they are going to boot me.

I've started the process of moving to Dropbox but I'm struggling with creating a client_ID and client_secret, and then authorising rclone. I think I've created the ID and the secret, but I'm getting "invalid_redirect_url" when I try to authorise rclone - is there a guide I can follow somewhere please?

Start a new post and use the help and support template.
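
For anyone hitting the same redirect error: the usual cause is that the app created in the Dropbox App Console doesn't list rclone's local OAuth callback as a redirect URI. A sketch of the typical fix, assuming the remote is named dropbox::

```
# In the Dropbox App Console, add rclone's local callback as a redirect URI:
#   http://localhost:53682/
# then redo the authorisation for the remote:
rclone config reconnect dropbox:
```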

@jopojp @Animosity022

Any news on the --tpslimit? I found initially that --tpslimit 12 was the limit, but recently I'm getting rate limited at that point too; currently set at --tpslimit 8. Dropbox for Teams – three active users. Own app.

500 seems extremely high compared to this.

If you stay at --tpslimit 12 but add --tpslimit-burst 0, are you still rate limited?

I'm currently renaming a bunch of files by changing the crypt filename encoding:

rclone move dropbox-crypt: dropbox-crypt2: --tpslimit 12 --tpslimit-burst 0 --server-side-across-configs --log-level DEBUG --log-file /rclone/log/rclone-cryptmove-dropbox2.log --progress

2023/06/03 19:04:15 DEBUG : pacer: Reducing sleep to 50.047996ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 37.535997ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 28.151997ms
2023/06/03 19:04:16 DEBUG : pacer: low level retry 3/10 (error too_many_write_operations/.)
2023/06/03 19:04:16 DEBUG : pacer: Rate limited, increasing sleep to 56.303994ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 42.227995ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 31.670996ms
2023/06/03 19:04:16 INFO  : <filename>: Moved (server-side)
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 23.753247ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 17.814935ms
2023/06/03 19:04:16 DEBUG : pacer: Reducing sleep to 13.361201ms
2023/06/03 19:04:17 DEBUG : pacer: Reducing sleep to 10.0209ms
2023/06/03 19:04:17 DEBUG : pacer: low level retry 4/10 (error too_many_write_operations/)
2023/06/03 19:04:17 DEBUG : pacer: Rate limited, increasing sleep to 20.0418ms
2023/06/03 19:04:17 DEBUG : pacer: low level retry 1/10 (error too_many_write_operations/)
2023/06/03 19:04:17 DEBUG : pacer: Rate limited, increasing sleep to 40.0836ms
2023/06/03 19:04:18 DEBUG : pacer: low level retry 5/10 (error too_many_write_operations/..)
2023/06/03 19:04:18 DEBUG : pacer: Rate limited, increasing sleep to 80.1672ms

Config:

[dropbox]
type = dropbox
client_id = XXX
client_secret = XXX
token = {"access_token":"XXXXX"}

[dropbox-crypt]
type = crypt
remote = dropbox:
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

[dropbox-crypt2]
type = crypt
remote = dropbox:
filename_encryption = standard
filename_encoding = base32768
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

So yes. Any tips for improvement welcome.

Are you using the same app/api for dropbox-crypt: and dropbox-crypt2:? If so, you should create an app for each (the tps limit is per app).
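
A sketch of what that could look like — remote names and credentials are placeholders, and the crypt passwords and other fields are omitted:

```
[dropbox-a]
type = dropbox
client_id = APP_ONE_ID
client_secret = APP_ONE_SECRET

[dropbox-b]
type = dropbox
client_id = APP_TWO_ID
client_secret = APP_TWO_SECRET

[dropbox-crypt]
type = crypt
remote = dropbox-a:

[dropbox-crypt2]
type = crypt
remote = dropbox-b:
filename_encoding = base32768
```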

Make a new post and use the help and support template.

Ok, did not mean to hijack this. It was more about the outrageous --tpslimit 500 – and whether there were any more details around that.

I just moved my data from Google to Dropbox as well. I had a look at homescripts/rclone-movies.service at master · animosity22/homescripts · GitHub

But my server only has 1TB, and the amount of space available varies between 30GB and 200GB because my server is also used to download from nzbget and qbittorrent.
Therefore I suppose I need to use a lower value for --vfs-cache-max-size.

What would happen if the cache gets full? Will the mount get unmounted?

Another question I have is how much cache will be used if I get 15 concurrent users playing 15 different movies with a file size of 10GB each?

Based on the poll interval, it reduces the size to whatever you have configured.

No.

Depends on your settings. It uses everything locally cached, and if the size gets filled, it dumps older things first.
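
To make the mechanics concrete, here is a sketch of the flags that govern trimming (the values are examples, not recommendations). Every poll interval, rclone evicts the least recently used cached files until the cache fits under the size cap; files that are still open can't be evicted, so the cap is a soft limit:

```
rclone mount dropbox: /mnt/dropbox \
  --vfs-cache-mode full \
  --vfs-cache-max-size 150G \
  --vfs-cache-max-age 9999h \
  --vfs-cache-poll-interval 1m
```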

But does it cache the whole file or just a portion of it?

That's all documented here:
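
In short: with --vfs-cache-mode full, only the parts of a file that are actually read get downloaded, and they are stored in a sparse file. One way to see this, assuming the default cache layout (the path below is illustrative):

```
# Apparent size vs. blocks actually on disk for a cached file:
ls -lh /cache/Movies/vfs/dbmovies/Movie.mkv
du -h /cache/Movies/vfs/dbmovies/Movie.mkv
```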

Ok, so I modified Animosity022's mount slightly to fit my needs:

  • I'm not using the remote control parameters because I don't need them.
  • I'm not using fuse3 because I have several mounts that I set up years ago with plexguide, and I'm afraid that if I install fuse3 a lot of stuff would break.
  • I decreased --vfs-cache-max-size to 150G because I never have more free space than that on my server.

[Unit]
Description=Dropbox Movies Daemon
Wants=network-online.target
After=multi-user.target

[Service]
Type=notify
Environment=RCLONE_CONFIG=/opt/appdata/plexguide/rclone.conf
RestartSec=5
ExecStart=/usr/bin/rclone mount dbmovies: /mnt/dropbox \
--log-file=/var/plexguide/logs/rclone-dbmovies.log \
--allow-other \
--dir-cache-time 9999h \
--log-level INFO \
--uid=1000 --gid=1000 \
--umask=002 \
--cache-dir=/cache/Movies \
--vfs-cache-mode full \
--vfs-cache-max-size 150G \
--vfs-fast-fingerprint \
--vfs-write-back 1h \
--vfs-cache-max-age 9999h \
--tpslimit 12 \
--tpslimit-burst 0
# systemd does not invoke a shell here, so shell redirection such as
# "> /dev/null" would not work in an Exec line
ExecStop=/bin/fusermount -uz /mnt/dropbox
Restart=on-failure
User=tito
Group=tito

[Install]
WantedBy=multi-user.target
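
For completeness, after saving the unit (the filename rclone-dbmovies.service is an assumption) it would be loaded and started with something like:

```
sudo systemctl daemon-reload
sudo systemctl enable --now rclone-dbmovies.service
```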

Now the problem:

When I woke up I didn't have any space left on my server. /cache/Movies was using around 160GB even though I set the limit to 150GB. I found some big files there that I played more than 12h ago, so I'd like the cache to delete files that weren't accessed during the last 6h.
Would it be fixed if I decreased --vfs-cache-max-age to 6h? Is that the only change I'd need to make?
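
As a sketch, this is the one flag in the unit above that would change, assuming the goal is to evict anything unread for 6 hours — note the size cap is soft (files that are open, e.g. actively streaming, are never evicted) and trimming only runs once per --vfs-cache-poll-interval, which defaults to 1m:

```
--vfs-cache-max-age 6h \
```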

Would be awesome if you made a new post and didn't hijack my post with different things.

Thanks.

Apologies for that. When I created my own thread (not exactly the same message, but related) I was told to use the search feature, and someone pointed me to your GitHub script.
Hence, I thought maybe I should post in this thread in case someone told me to ask for help here.

I'll refrain from posting again on the forum and will test possible solutions myself.