Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

[Unit]

Description=Mount and cache Google drive to /mnt/plexstorage
After=syslog.target local-fs.target network.target
[Service]
Environment=RCLONEHOME=/home/redacted/.config/rclone
Environment=MOUNTTO=/mnt/plexstorage
Environment=LOGS=/home/redacted/logs
Environment=UPLOADS=/home/redacted/uploads
Type=simple
User=root
ExecStartPre=/bin/mkdir -p ${MOUNTTO}
ExecStartPre=/bin/mkdir -p ${LOGS}
ExecStartPre=/bin/mkdir -p ${UPLOADS}
ExecStart=/usr/bin/rclone mount \
--rc \
--log-file ${LOGS}/rclone.log \
--log-level INFO \
--umask 022 \
--allow-non-empty \
--allow-other \
--fuse-flag sync_read \
--tpslimit 10 \
--tpslimit-burst 10 \
--dir-cache-time=160h \
--buffer-size=64M \
--attr-timeout=1s \
--vfs-read-chunk-size=2M \
--vfs-read-chunk-size-limit=2G \
--vfs-cache-max-age=5m \
--vfs-cache-mode=writes \
--cache-dir ${UPLOADS} \
--config ${RCLONEHOME}/rclone.conf \
gcache: ${MOUNTTO}
ExecStop=/bin/fusermount -u -z ${MOUNTTO}
ExecStop=/bin/rmdir ${MOUNTTO}
Restart=always
[Install]
WantedBy=multi-user.target
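
For reference, a unit like that is normally dropped into systemd and started like this (just a sketch; the rclone-mount.service name and the /etc/systemd/system path are my own examples, not from the post above):

# save the unit under an assumed name, then reload and start it
sudo cp rclone-mount.service /etc/systemd/system/rclone-mount.service
sudo systemctl daemon-reload
sudo systemctl enable --now rclone-mount.service
# confirm the mount actually came up
systemctl status rclone-mount.service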

Found an SSH application on my phone, used it to pull the config.

The 2M read-chunk-size is awful. A good starting point would be my config at the top of the post as that’s what I use.
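
Purely as an illustration (not the exact command from the top post), the difference is basically swapping the tiny starting chunk for one that can grow:

# same gcache: remote and /mnt/plexstorage mount point as the unit above; only the read-chunk flags changed
/usr/bin/rclone mount --allow-other --dir-cache-time 160h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gcache: /mnt/plexstorage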

fuse-flag does nothing unless you compile rclone yourself.

are you using your own API key?

What’s your rclone.conf look like with the passwords and key removed?

[GSuite]
type = drive
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = {"access_token":"redacted","token_type":"Bearer","refresh_token":"redacted

[gcache]
type = cache
remote = GSuite:
plex_url = http://redacted:32400/web/index.html
plex_username = redacted
plex_password = redacted
chunk_size = 5M
info_age = 48h
chunk_total_size = 10G
plex_token = redacted

I don’t use the cache backend at all, so as I mentioned, my settings would be a good starting point since you are asking in my settings post. :)

I just found your stuff last night by accident, haha. I was having an issue with rclone and found a post you made about go1.10 or something like that having an error with drive labels. As soon as I get back to my hotel, I’ll look into reconfiguring Plex. I might hold off until I get my new dedicated server.

@Animosity022 here I am again… Everything was working well and fast, but for the last two days I’ve been getting banned on the download quota. Have you ever had that before? I start a movie in the morning and later in the day I hit it. Uploading still works, but downloading doesn’t because I’m banned.

My mount:
rclone mount --rc --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 256M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_crypt: /mnt/user/mount_rclone/Gdrive &

2018/12/08 07:37:39 ERROR : Films/Troy (2004)/Troy (2004).mkv: ReadFileHandle.Read error: low level retry 1/10: read tcp 10.0.0.60:57068->172.217.17.74:443: i/o timeout
2018/12/08 09:10:19 ERROR : Films/Troy (2004)/Troy (2004).mkv: ReadFileHandle.Read error: low level retry 1/10: read tcp 10.0.0.60:47412->172.217.168.202:443: i/o timeout
2018/12/08 10:58:32 ERROR : Films/Murder on the Orient Express (2017)/Murder on the Orient Express (2017).mkv: ReadFileHandle.Read error: low level retry 1/10: couldn't reopen file with offset and limit: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
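
A quick way to see how often that quota error is actually being hit is to grep the mount log for it (a sketch; the log path here is just an example, since the mount command above doesn't write a log file):

# count the 403 quota errors in the mount log
grep -c downloadQuotaExceeded /path/to/rclone.log
# and look at the most recent few
grep downloadQuotaExceeded /path/to/rclone.log | tail -n 5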

And if you run rclone version, what does it show? That only happens if it’s an older version.

Are you linking files for others to download? If you are, that could be your problem. There’s no download cap when using rclone, at least not one I’ve encountered. I just pulled 35TB of data off Google G Suite.

There is a 10 TB daily download cap on GDrive (unless they have increased it recently). Doesn’t matter whether you use Rclone or something else.

I only use the Gdrive for streaming and backing up. I don’t get 10TB of traffic, so I think it has to do with files being opened over and over again or something.

Version is the latest beta.

I’m currently trying to create a new API and have that working but I keep getting locked out because of the bans.

Unless you have a need for a beta, it’s best to stick with the released version; I’m using 1.45.

felix@gemini:~$ rclone version
rclone v1.45
- os/arch: linux/amd64
- go version: go1.11.2

There are also a lot of betas each day so unless you share the rclone version, it’s not helpful.

Thanks a lot again.

Just switched to the stable version 1.45 (was on beta 1.45.031). I also created a new API key and rebuilt my whole rclone config. Currently it is working again. I’ve got my DEBUG log on, so hopefully I can catch it if it goes wrong again.

A few questions about your choices of flags again:

  • buffer-size -> why only 256M? It seems the common choice is 1G.
  • drive-chunk-size -> why only 32M? I’ve read NCW say that the bigger the better.
  • you only have the --rc flag. @BinsonBuzz uses this command as well: rclone rc --timeout=1h vfs/refresh recursive=true. Now --rc is a bit of a black box for me. I think the --timeout flag lets the directory-cache refresh run for up to an hour, but I don’t exactly know what the other flags do or why you don’t use them.

buffer-size depends more on your hardware, your workload, and how much extra memory you have available. It is used per open file, so if you have 100 files open with a buffer-size of 1G, you consume 100G of memory. For most things, you can leave it at the default as it really depends on your use case. I recently tested with 16M and noticed no issues. I bumped it up to 256M to try since I have a lot of spare memory on my system, so for me it just puts otherwise idle resources to work. The big thing to think about is waste, though. If you have players that constantly open and close files, a large buffer causes a lot of waste, since the buffer is dumped on every file close.
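
If you want a rough number for what that costs on your own box, you can watch the mount's resident memory while streams are open (a sketch, assuming a single rclone process):

# print the rclone mount's resident memory in MiB (rss from ps is reported in KiB)
ps -C rclone -o rss=,comm= | awk '{printf "%.0f MiB\t%s\n", $1/1024, $2}'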

drive-chunk-size is only used when uploading. I never upload on my mount so this setting is moot for me. Great article here on testing various chunk sizes -> https://www.chpc.utah.edu/documentation/software/rclone.php
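
Since the flag only matters for uploads, it would go on whatever command you use to push files up rather than on the mount; something like this (the paths and the Media folder on the GSuite: remote are just examples):

# example upload where --drive-chunk-size actually applies; paths and remote folder are illustrative
rclone move /home/redacted/uploads GSuite:Media --drive-chunk-size 64M --transfers 4 --log-level INFO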

rc just runs another server inside rclone that listens for commands to execute. You can do quite a lot with the rc commands, but if the endpoint isn’t secured or it’s exposed, it can give away too much access. I run with rc and use the refresh on boot as well, as that’s a little faster to ‘prime’ up the directory cache. It isn’t needed and I only use it because I’m impatient and want the first scan to be quick. Normally, the scan would build the cache anyway.
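
That ‘prime on boot’ step is just the command quoted above, run against the live mount (this assumes the mount was started with --rc listening on the default localhost port):

# walk the whole remote and fill the directory cache; --timeout is raised because a full recursive refresh can take a while
rclone rc --timeout=1h vfs/refresh recursive=true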

Thanks a lot for explaining, that makes sense!

So my gdrive mount was cached, and in the 3 hours since the new mount and API went live I’ve hit about 40,000 API hits. Is this normal for first-time API usage or right after mounting?

“Normal” would depend on a lot of things. How many streams? How many items need to be analyzed still? How big is your library?

I’m at a pretty steady state at this point with 50TB in the cloud and maybe 10-15 streams per day at most.

I get about 30k to 40k hits per day.

Thanks for the reference. I’m at about 30TB and had only 2-3 streams. Will see what tomorrow brings after it settles down and if things are still working.

The bans started after I updated my rclone to the latest beta. So maybe something has changed which impacts this.

I have created an rclone cache mount. How can I tell that the cache is actually being used when the same locally cached file is reopened, and is there a keyword I can look for in the log file?

I don’t use a cache mount in my setup at all.

You’d need to turn on the log level to info and pick a location for the log file. You’d see that in the log file.

It would look like:

--log-level DEBUG --log-file /home/felix/logs/rclone.log

So that rclone.log file would show me when files are opened and closed. DEBUG produces a lot of logging, but it would answer your question.
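
Once that log exists, a grep like this pulls out just the open/close activity (adjust the path to wherever you put the log; the patterns match the DEBUG lines rclone writes for file handles):

# show file open and close events from the debug log
grep -E 'Open:|Release closing' /home/felix/logs/rclone.log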

Yesterday went to 150k API hits and today already on 50k+ hits and still rising quickly.

I’ve checked my logs (which are huge) and I see the following things come up a lot:

2018/12/11 17:03:34 DEBUG : : >Statfs: stat={Blocks:274877906944 Bfree:266234356181 Bavail:274877906944 Files:1000000000 Ffree:1000000000 Bsize:4096 Namelen:255 Frsize:4096}, err=<nil>

I also see my whole library passing by in the logs, producing entries like these (sometimes more elaborate):
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: Flush:
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: >Flush: err=<nil>
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: Flush:
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: >Flush: err=<nil>
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: Release:
2018/12/11 17:02:02 DEBUG : Films/The Cabin (2018)/The Cabin (2018).mkv: ReadFileHandle.Release closing
2018/12/11 17:02:02 DEBUG : /: Lookup: name=".unionfs"
2018/12/11 17:02:02 DEBUG : /: >Lookup: node=.unionfs/, err=<nil>
2018/12/11 17:02:02 DEBUG : .unionfs/: Attr:
2018/12/11 17:02:02 DEBUG : .unionfs/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2018/12/11 17:02:02 DEBUG : /: Lookup: name=".unionfs"
2018/12/11 17:02:02 DEBUG : /: >Lookup: node=.unionfs/, err=<nil>
2018/12/11 17:02:02 DEBUG : .unionfs/: Attr:
2018/12/11 17:02:02 DEBUG : .unionfs/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2018/12/11 17:02:02 DEBUG : .unionfs/: Lookup: name="Films_HIDDEN~"
2018/12/11 17:02:02 DEBUG : .unionfs/: >Lookup: node=, err=no such file or directory
2018/12/11 17:02:02 DEBUG : &{Films/The Cabin (2018)/The Cabin (2018).mkv (r)}: >Release: err=<nil>
2018/12/11 17:02:02 DEBUG : .unionfs/: Lookup: name="Films"
2018/12/11 17:02:02 DEBUG : .unionfs/: >Lookup: node=, err=no such file or directory
2018/12/11 17:02:02 DEBUG : /: Lookup: name="Films"
2018/12/11 17:02:02 DEBUG : /: >Lookup: node=Films/, err=<nil>
2018/12/11 17:02:02 DEBUG : Films/: Attr:
2018/12/11 17:02:02 DEBUG : Films/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2018/12/11 17:02:02 DEBUG : /: Lookup: name="Films"
2018/12/11 17:02:02 DEBUG : /: >Lookup: node=Films/, err=<nil>
2018/12/11 17:02:02 DEBUG : Films/: Attr:
2018/12/11 17:02:02 DEBUG : Films/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2018/12/11 17:02:02 DEBUG : Films/: Lookup: name="The Cabin (2018)"
2018/12/11 17:02:02 DEBUG : Films/: >Lookup: node=Films/The Cabin (2018)/, err=<nil>
2018/12/11 17:02:02 DEBUG : Films/The Cabin (2018)/: Attr:
2018/12/11 17:02:02 DEBUG : Films/The Cabin (2018)/: >Attr: attr=valid=1s ino=0 size=0 mode=drwxrwxrwx, err=<nil>
2018/12/11 17:02:02 DEBUG : Films/The Cabin (2018)/: ReadDirAll:
2018/12/11 17:02:02 DEBUG : Films/The Cabin (2018)/: >ReadDirAll: item=2, err=<nil>
2018/12/11 17:02:02 DEBUG : .unionfs/: Lookup: name="Films"
2018/12/11 17:02:02 DEBUG : .unionfs/: >Lookup: node=, err=no such file or directory
2018/12/11 17:02:02 DEBUG : : Statfs:

I’m running Emby only at the moment and it should have already scanned my whole library, so I don’t understand why all these API hits are happening.
Any clue? I can share my whole log with you, but I’d like to do that privately, since it contains all my file names.

You can PM me a link if you’d like to.

If you change paths in Emby, like in Plex, it will rescan the files.

If you scan without files there, it’ll need to rescan as well.
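
One cheap guard against that is to check the mount is actually populated before the media server scans (a sketch; /mnt/plexstorage and the Films folder are taken from the config and logs earlier in the thread):

# refuse to proceed if the rclone mount isn't up or looks empty
mountpoint -q /mnt/plexstorage && test -n "$(ls -A /mnt/plexstorage/Films 2>/dev/null)" \
  && echo "mount looks good" || echo "mount missing or empty - do not scan"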

I don’t know how to check if something is scanned in Emby. I use plex-library-stats to figure out if I have anything that needs to be analyzed.

felix@gemini:~$ plex-library-stats.sh
11.12.2018 11:15:34 PLEX LIBRARY STATS
Media items in Libraries
Library = Movies
  Items = 1998

Library = TV Shows
  Items = 20608

Library = xMMA
  Items = 56

Library = zExercise
  Items = 279

Time to watch
Library = Movies
Minutes = 213996
  Hours = 3566
   Days = 148

Library = TV Shows
Minutes = 830064
  Hours = 13834
   Days = 576

Library = xMMA
Minutes = 9774
  Hours = 162
   Days = 6

Library = zExercise
Minutes = 11278
  Hours = 187
   Days = 7

23166 files in library
0 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
22943 files missing deep analyzation info.

That’s here -> https://github.com/ajkis/scripts/blob/master/plex/plex-library-stats.sh