Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

My first guess is it was Bazarr again. I thought I had disabled the service but I found it had started back up again after a recent reboot.

I can see it was analysing my entire library again. Most likely the issue.

What version are you running? What is your mount command?

I've tried looking at your setup, modifying it, and trying different values, but I keep hitting the 403 errors:

rclone mount \
--allow-other \
--uid 101000 \
--gid 101000 \
--umask 002 \
--dir-cache-time 48h \
--drive-chunk-size 128M \
--vfs-read-chunk-size 64M \
--vfs-read-chunk-size-limit 2G \
--buffer-size 64M \
--log-file /home/james/rclone/rclone.log \
--log-level INFO \
--rc \
gdrive: /mounts/gdrive &

I use uid and gid because I'm mapping into an LXC container.
When I used cache-dir, sonarr imported to the cache/vfs, but they were all *.partial files and failed, even though they were full size. The vfs cache belonged to my user though, and didn't use the same uid/gid as rclone; maybe I should chown the cache directory?
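
If it is just ownership, I guess something like this would line the cache up with the uid/gid from my mount (the cache path here is just an example, i.e. whatever --cache-dir points at):

chown -R 101000:101000 /home/james/rclone/cache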

Do you use a cache dir? I can't see it in your files. I don't use any encryption or filesystem mods.

I also have a dedicated disk at /mount/storage that is for torrents only, and files get copied from that disk to gdrive by sonarr and radarr.

What 403s are you getting? Are you using your own API key?

Yes, I'm using my own API keys. 403 rate limit exceeded. I think I might be best having a cache directory for sonarr to import to, and then limiting the upload speed so I don't breach the 750GB limit, but where would plex and sonarr look for a complete view of my collection? Although it's a new setup, so I may be best to just sit it out; I won't be doing 750 every day.
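
For reference, 750GB spread over 24 hours works out to roughly 8.7MB/s, so if I did move things with a script I suppose capping it at something like --bwlimit 8M would keep even a continuous transfer under the daily quota (source path is just an example):

rclone move /mount/storage/complete gdrive:media --bwlimit 8M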

What version are you using?

Only set up a few days ago, so the newest I think, 1.45.

I don’t write directly to my mount as I use a local disk and mergerfs and move stuff via a rclone upload script.

If you are trying to write to it, you need to make some changes, as you want to turn vfs-cache-mode writes on. I get mixed results with that, which is why I don't use it.
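
For your mount that would roughly mean adding the flag plus a cache dir, something like this sketch (the cache path is just an example):

rclone mount \
--allow-other \
--vfs-cache-mode writes \
--cache-dir /home/james/rclone/cache \
...your other flags... \
gdrive: /mounts/gdrive &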

But you copy over via a script at set intervals. Is this so much different from vfs and copying over when the keep time expires? I just can't get my head around how sonarr/plex would see all media in 2 places. Do I point at both the gdrive and vfs locations, or does rclone handle this?

Also, I had sonarr importing to the vfs; I just had permission issues.

It’s written up on my github as the flow and how it works.

mergerfs is like unionfs and combines a local disk and my GD into a single mount, so every application sees the same path regardless of whether a file is local or in the cloud.
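
The mount itself boils down to a single command along these lines (paths and options here are just an illustration; the exact one is in the github repo):

mergerfs /mnt/local:/mnt/gdrive /mnt/media -o rw,use_ino,allow_other,category.create=ff

With category.create=ff, new files always land on the first branch (the local disk), which is what lets the upload script move them to GD later.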

Yeah, I will have to play with it later. You can't see anything wrong with my rclone launch params then? Might just have to cope until my library is sorted, then I won't be hitting anywhere near 750 a day.

Running v1.45. Here is my config on the machine that runs Bazarr:

/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other --vfs-cache-max-age 1h --vfs-cache-mode off --vfs-cache-poll-interval 1m --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --dir-cache-time 5m

Bazarr had a bug in it.

Yeah I just turned the bloody thing off!

You can see drive.get drops as soon as I turned it off

Do you know what the issue number is?

I do not. I’d check their discord or github.

Bleh. I’m back now. My personal laptop died, SSD and memory had to go. Now that I’m back, I can actually write that tutorial.

I’ve also got the new system rolling in this weekend.
Intel i7-6700K
4c/8t - 4GHz /4.2GHz
32GB DDR4 2133 MHz
SoftRaid 2x480GB SSD
1x4TB SATA
250 Mbps bandwidth
500GB of OS / Config backups.

I'm working on a script to download all the currently airing series to the local machine, and after a full month of storing them, they'll get pushed to the cloud.

i.e. if it was downloaded today on 1/1/19, on 2/2/19 it would be pushed to the cloud.
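
The heavy lifting will probably just be rclone's age filter run from cron, something along these lines (paths are placeholders):

# once a day, move anything older than 30 days up to the cloud
0 4 * * * rclone move /mnt/local/tv gdrive:tv --min-age 30d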

Software:
Plex
Sonarr
Radarr
Lidarr
Bazarr
Glances + InfluxDB + Grafana
Deluge + Addon: AutoRemovePlus
Tautulli
Jackett

I’m going to offload as many processes as I can to get the resource consumption down.

Where can I find these statistics?

It’s in the Google Console if you are using your own API key.

https://console.cloud.google.com/apis/api/drive.googleapis.com should get you there.


Hi!

I use the exact same settings as you, @Animosity022. Thanks for your great work, and for sharing it on github. It is a very useful resource.

Today I was banned at 7 p.m. I am not sure exactly why, but I have the infamous 403 error:

2019/01/15 19:01:04 ERROR : Films/Asterix and Cleopatra (1968)/Astérix.et.Cléopâtre.1968.WEBDL-1080p.Radarr.tt0062687.[FR].mkv: ReadFileHandle.Read error: couldn't reopen file with offset and limit: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

Despite the above message, the error is not for this file only, but for the whole filesystem. I guess I can now consider myself banned for 24 hours :frowning:.

In my metrics, I can spot precisely the moment I was banned:

None of the query quotas seem to have been triggered:


My query rate is about half of the per-user limit, and it has only been close to 1000 since I was banned, because of the multiple retries… ironic!
Any idea of the cause of this ban? What may have triggered it?

I will deactivate the disk scans in radarr and sonarr as a precaution.