My first guess is it was Bazarr again. I thought I had disabled the service, but I found it had started back up again after a recent reboot.
I can see it was analysing my entire library again. Most likely that's the issue.
What version are you running? What is your mount command?
I've tried looking at and modifying your setup, and trying different values, but I keep hitting 403 errors.
rclone mount \
--allow-other \
--uid 101000 \
--gid 101000 \
--umask 002 \
--dir-cache-time 48h \
--drive-chunk-size 128M \
--vfs-read-chunk-size 64M \
--vfs-read-chunk-size-limit 2G \
--buffer-size 64M \
--log-file /home/james/rclone/rclone.log \
--log-level INFO \
--rc \
gdrive: /mounts/gdrive &
I use uid and gid because I'm mapping into an LXC container.
When I used a cache-dir, Sonarr imported to the cache/VFS, but they were all *.partial files and failed even though they were full size. The VFS cache belonged to my user though and didn't use the same uid/gid as rclone, so maybe I should chown the cache directory?
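i.e. something along these lines, with /path/to/rclone-cache standing in for wherever I end up pointing --cache-dir:

# make the VFS cache match the uid/gid the mount runs as
chown -R 101000:101000 /path/to/rclone-cache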
Do you use a cache dir? Can't see it in your files. I don't use any encryption or filesystem mods.
I also have a dedicated disk at /mount/storage that is for torrents only, and files get copied from that disk to gdrive by Sonarr and Radarr.
What 403s are you getting? Are you using your own API key?
Yes, I'm using my own API keys. 403 rate limit exceeded. I think I might be best having a cache directory for Sonarr to import to and then limiting the upload speed so I don't breach the 750GB limit, but where would Plex and Sonarr look for a complete database of my collection? Although it's a new setup, so I may be best to just sit it out; I won't be doing 750 every day.
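For the upload side, 750GB a day works out at roughly 8.7 MB/s, so I was thinking of capping the transfer with --bwlimit, something like this (local path and destination folder are just placeholders):

rclone move /path/to/local/media gdrive:media --bwlimit 8M

which should keep a full day of uploading just under the quota.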
What version are you using?
Only set it up a few days ago, so the newest I think, 1.45.
I don't write directly to my mount as I use a local disk and mergerfs and move stuff via an rclone upload script.
If you are trying to write to it, you need to make some changes, as you want to turn vfs-cache-mode writes on. I get mixed results with that, which is why I don't use it.
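If you do try it, it's only a couple of extra flags on the mount, roughly like this (the cache path is just an example):

--vfs-cache-mode writes \
--cache-dir /path/to/rclone-cache \

added onto your existing command, and the cache dir needs to be owned by the same uid/gid the mount runs as.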
But you copy over via a script at set intervals. Is this so much different from VFS and copying over when the keep time expires? I just can't get my head around how Sonarr/Plex would see all media in two places. Do I point at both the gdrive and VFS locations, or does rclone handle this?
Also, I had Sonarr importing to the VFS; I just had permission issues.
It's written up on my GitHub, the flow and how it works.
mergerfs is like unionfs and combines a local disk and my GD into a single mount, so every application sees the same path regardless of whether it's local or cloud.
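A bare-bones example of that kind of pool, with placeholder paths (/mnt/local for the disk, /mnt/gdrive for the rclone mount, /mnt/merged for the combined view):

mergerfs -o defaults,allow_other,use_ino /mnt/local:/mnt/gdrive /mnt/merged

Plex, Sonarr, etc. all point at /mnt/merged and never care which branch a file actually sits on.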
Yeah, I will have to play with it later. You can't see anything wrong with my rclone launch params then? Might just have to cope until my library is sorted; then I won't be hitting anywhere near 750 a day.
Running v1.45. Here is my config on the machine that runs Bazarr:
/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other --vfs-cache-max-age 1h --vfs-cache-mode off --vfs-cache-poll-interval 1m --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --dir-cache-time 5m
Bazarr had a bug in it.
Do you know what the issue number is?
I do not. I'd check their Discord or GitHub.
Bleh. I'm back now. My personal laptop died; the SSD and memory had to go. Now that I'm back, I can actually write that tutorial.
I've also got the new system rolling in this weekend.
Intel i7-6700K
4c/8t - 4.0 GHz / 4.2 GHz
32GB DDR4 2133 MHz
SoftRaid 2x480GB SSD
1x4TB SATA
250 Mbps bandwidth
500GB of OS / Config backups.
I'm working on a script to download all the currently airing series to the local machine, and after a full month of storing them, they'll get pushed to the cloud.
i.e. if it was downloaded today on 1/1/19, on 2/2/19 it would be pushed to the cloud.
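The core of it will probably just be an rclone move with an age filter run from cron, something like this (paths and remote name are placeholders):

rclone move /mnt/local/tv remote:tv --min-age 30d --delete-empty-src-dirs

so only files that have been sitting locally for at least 30 days get pushed up.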
Software:
Plex
Sonarr
Radarr
Lidarr
Bazarr
Glances + InfluxDB + Grafana
Deluge + Addon: AutoRemovePlus
Tautulli
Jackett
I'm going to offload as many processes as I can to get the resource consumption down.
Where can I find these statistics?
It's in the Google Console if you are using your own API key.
https://console.cloud.google.com/apis/api/drive.googleapis.com should get you there.
Hi!
I use the exact same settings as you @Animosity022. Thanks for your great work, and for sharing it on GitHub. It is a very useful resource.
Today I got banned at 7 p.m. I am not sure exactly why, but I have the infamous 403 error:
2019/01/15 19:01:04 ERROR : Films/Asterix and Cleopatra (1968)/Astérix.et.Cléopâtre.1968.WEBDL-1080p.Radarr.tt0062687.[FR].mkv: ReadFileHandle.Read error: couldn't reopen file with offset and limit: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
Despite the above message, the error is not limited to this file; it affects the whole filesystem. I guess I can now consider myself banned for 24 hours.
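An easy way to see that it really is the whole remote and not just that one file is to count the 403s in the mount log, if you write one (the log path here is a placeholder):

grep -c "Error 403" /path/to/rclone.log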
In my metrics, I can spot precisely the moment I was banned:
None of the query quotas seem to have been triggered:
My query rate is about half of the per-user limit, and it has only been close to 1000 since I was banned, because of the multiple retries... ironic!
Any idea of the cause of this ban? What may have triggered it?
I will preventively disable the disk scans in Radarr and Sonarr.