Rclone cache + Plex/Radarr/Sonarr. Basic questions


#1

Hi friends!

I’m struggling to get a grip on the file/folder flow when using rclone cache in my media server. I have some basic questions just to understand the fundamentals of the flow:

  • I’ve mounted an unencrypted rclone cache remote (gcache) to /media/. Gcache points at my Google Drive remote (gdrive). In the /media folder I can see my movies and TV folders as expected.
  • I’ve set up my Plex library to look at the /media mount. Works as expected.

Now the big questions: what do I do with Radarr/Sonarr? Can they do file/folder renaming and moves directly on the cached /media mount, or do I still have to use the unionfs local/cloud trick? Cache mounts are read/write, right?

  • If I make a script that moves local files from /media to gdrive every once in a while, will rclone manage to distinguish between a cached entry and a local file?

Sorry for the dumb questions. It’s really hard to find good information about rclone cache as it’s still quite new, and I’m the kind of guy who needs guides to learn :wink:

Thanks!


#2

I am using the exact same tools as you.

Following is my configuration (working flawlessly for 3+ weeks):

  • GCache (my rclone cache remote) mounted at ~/GMedia (this one is encrypted via rclone crypt, but that shouldn’t matter)
  • Plex libraries pointed at subfolders within ~/GMedia
  • Sonarr & Radarr point to the same subfolders as the Plex libraries
  • Plex is set up as a Connect client in Sonarr & Radarr so that it is notified whenever a show is added/upgraded/deleted.
  • You shouldn’t need any script to move files from the cache to GDrive, since the cache does it automatically.

Let me know if you have any further questions.


#3

Thanks for your kind and quick reply :slight_smile:

So if I understand you correctly:
All local files that are moved into my mounted /media/TV or Movies folders are automatically uploaded to gdrive and will therefore not use any space on the local drive?


#4

Yeah, the default is to move files to the cloud immediately. This can be tuned with the --cache-tmp-upload-path and --cache-tmp-wait-time flags. Basically, --cache-tmp-upload-path stores added files on your local drive for the amount of time specified by --cache-tmp-wait-time, after which they are uploaded to GDrive. I have set this time to 60 minutes in my config to make sure that even large files finish copying, and also to account for immediate repacks etc. You can set it according to your server configuration.
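As a sketch, those two flags go straight on the mount command. The staging path and wait time here are only examples; adjust them to your own setup:

```shell
# New files written to the mount land in the local staging path first,
# then move to the cloud remote after sitting there for the wait time.
rclone mount gcache: /media \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --allow-other
```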


#5

I’ve totally been overcomplicating how this works. I guess coming from a plexdrive and unionfs setup, and then reading about the whole bunch at once, just made me mix it all together.

Again: thanks for your time and help :+1:


#6

No problem. Glad I could help.


#7

It’s not you. I still deploy rclone cache as read-only and deploy unionfs on top for transfers, because our own movement scripts give better handling and speeds. Plus, I don’t end up with a bottlenecked cache. It depends on your use case.

The main role that deploys it:
https://github.com/Admin9705/PlexGuide.com-The-Awesome-Plex-Server/blob/Version-5/ansible/roles/cache/tasks/main.yml (go back and check template folder for that portion)

This is the guide I built for it:

Hopefully this becomes super solid in the end!


#8

I’m of the mindset of getting more for less.

Having it read-only reduces the ability to easily upgrade and delete files, and unionfs adds complexity along with another script.

By using just rclone cache (I’m not sure which bottleneck you are referring to) with cache-tmp-upload, things are simplified quite a bit, and all the software mentioned works as-is without doing anything else.


#9

Mind sharing the settings for your cache mount as we more or less use the same setup?

I’m on a 300/300 fiber line, with an i7 Ubuntu server box (250 GB SSD) in my basement. I have 1.5 TB of data in the cloud and share it with 4-5 friends.


#10

Sure:

rclone.conf

[GD]
type = drive
client_id = clientId
client_secret = secret
token = {"access_token":"token","token_type":"Bearer","refresh_token":"refresh token","expiry":"2018-05-08T07:21:59.015731846-04:00"}

[gcache]
type = cache
remote = GD:media
chunk_total_size = 32G
plex_url = http://192.168.1.30:32400
plex_username = email@gmail.com
plex_password = password
plex_token = token

[gmedia]
type = crypt
remote = gcache:
filename_encryption = standard
password = password
password2 = password
directory_name_encryption = true

my systemd startup:

[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --buffer-size 0M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia
Restart=on-abort
User=felix
Group=felix

[Install]
WantedBy=default.target

I point everything at my decrypted /gmedia/TV or /gmedia/Movies mount and store stuff locally for 60 minutes before uploading. I personally also use plex_autoscan, because it helps with replacing media when I upgrade: it handles emptying the trash and has a failsafe not to empty if there are more than ‘x’ items in the trash.


#11

@Fjesnes You can use the config provided by @Animosity022. It is the base of my config too. There are a few tweaks in relation to the wait-time & workers but that will have to be tuned anyway for your system.

I run both sonarr & radarr on the same machine as plex, so I don’t use plex_autoscan to handle the update and instead just let sonarr & radarr notify plex as needed. I also do not have any script for emptying the trash and do it manually when needed. This allows me to maintain two versions of a specific movie if needed. It shouldn’t matter much anyway because GSuite has unlimited storage.


#12

I’m running all the media server applications on one system as well. Thanks for the help, guys. I really appreciate it.


#13

Any comments?

[Unit]
Description=Mount and cache Google drive to /media/Plex
After=syslog.target local-fs.target network.target

[Service]
Type=simple
User=root
ExecStartPre=/bin/mkdir -p /media/Plex
ExecStart=/usr/bin/rclone mount gcache: /media/Plex \
   --config /home/plex/.config/rclone/rclone.conf \
   --allow-other \
   --dir-cache-time=48h \
   --cache-chunk-size=10M \
   --cache-info-age=48h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --buffer-size 0M \
   --syslog \
   --log-level INFO
ExecStop=/bin/fusermount -u -z /media/Plex
ExecStop=/bin/rmdir /media/Plex
Restart=always

[Install]
WantedBy=multi-user.target

Any way to see what is stored locally and what is stored in the cloud, other than looking into the GDrive itself?


#14

Easily? Not really. You can see the files locally, but they are encrypted, so you have to translate the names back.

You can see in the logs when it uploads and moves stuff:

May  8 06:44:29 gemini rclone[3092]: smu5ej34ujbdoip1cm3mlk92q4/q42c44qbkijmuh0ts7efctb39rrqjpp4knu806tmvpst0lvqajo0/gld67r8m7tg17as5lhpk5u7doh58l33g0dff18mb1fl0vlr0mg1c5bu0ap884575ena4e88uh7kva: background upload: started upload
May  8 06:46:00 gemini rclone[3092]: smu5ej34ujbdoip1cm3mlk92q4/q42c44qbkijmuh0ts7efctb39rrqjpp4knu806tmvpst0lvqajo0/gld67r8m7tg17as5lhpk5u7doh58l33g0dff18mb1fl0vlr0mg1c5bu0ap884575ena4e88uh7kva: Copied (new)
May  8 06:46:00 gemini rclone[3092]: smu5ej34ujbdoip1cm3mlk92q4/q42c44qbkijmuh0ts7efctb39rrqjpp4knu806tmvpst0lvqajo0/gld67r8m7tg17as5lhpk5u7doh58l33g0dff18mb1fl0vlr0mg1c5bu0ap884575ena4e88uh7kva: Deleted
May  8 06:46:00 gemini rclone[3092]: smu5ej34ujbdoip1cm3mlk92q4/q42c44qbkijmuh0ts7efctb39rrqjpp4knu806tmvpst0lvqajo0/gld67r8m7tg17as5lhpk5u7doh58l33g0dff18mb1fl0vlr0mg1c5bu0ap884575ena4e88uh7kva: background upload: uploaded entry
May  8 06:46:00 gemini rclone[3092]: smu5ej34ujbdoip1cm3mlk92q4/q42c44qbkijmuh0ts7efctb39rrqjpp4knu806tmvpst0lvqajo0/gld67r8m7tg17as5lhpk5u7doh58l33g0dff18mb1fl0vlr0mg1c5bu0ap884575ena4e88uh7kva: finished background upload

I honestly just check every so often to see if any files are there. It doesn’t clean up the directories yet, so there is some garbage lying around, but only a few MB.
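If you just want a quick listing of what is still waiting locally, a minimal sketch (assuming /data/rclone_upload is your --cache-tmp-upload-path; substitute your own path):

```shell
# List files still sitting in the local staging area, i.e. not yet
# moved to the cloud. UPLOAD_DIR must match --cache-tmp-upload-path.
UPLOAD_DIR="${UPLOAD_DIR:-/data/rclone_upload}"
# -type f skips the leftover empty directories mentioned above; if you
# mount through a crypt remote the names are encrypted, so only the
# sizes and dates are meaningful here.
find "$UPLOAD_DIR" -type f -exec ls -lh {} + 2>/dev/null || true
```

Once a file disappears from this listing (and the “finished background upload” line shows up in the log), it lives only in the cloud.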


#15

The next noobish question: where do I find the log? :slight_smile:


#16

If you are using --syslog, it’ll be in /var/log/syslog, but I’d recommend removing --syslog and using --log-file instead, pointing it at the location you want.
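In the systemd unit above, that just means swapping one flag for the other; the log path here is only an example (make sure the mount’s user can write to it):

```shell
# In ExecStart, replace
   --syslog \
# with something like
   --log-file /home/plex/rclone.log \
   --log-level INFO \
```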


#17

Is there any way to “ignore” the --cache-tmp-wait-time setting and just upload straight away?


#18

Simply removing the --cache-tmp-upload-path & --cache-tmp-wait-time parameters should work.


#19

I decided to change the --cache-tmp-upload-path from /data/gcache to /opt/gcache. I just changed it in the mount command in rclone.service and rebooted. Now I see a lot of errors like this in the log:

2018/05/10 12:00:57 ERROR : TV/The SHOWNAME/Season 1/SHOWNAME- S01E10 - Night Bluray-720p.mkv: error refreshing object in : in cache fs Google drive root 'Media': object not found

Any ideas?


#20

Try restarting the service with the option --cache-db-purge.
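A sketch of what that looks like with the unit file above (assuming it is installed as rclone.service; the purge flag is a one-off, so you can take it back out after a clean start):

```shell
# Add the purge flag to the ExecStart line in the unit file:
#    --cache-db-purge \
# then reload systemd and restart the mount:
sudo systemctl daemon-reload
sudo systemctl restart rclone.service
```

This throws away the cache database and rebuilds it, which clears out stale object entries like the “object not found” errors above.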