Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

My setup doesn’t use Docker, so I wouldn’t be sure where to start looking.

I’d start a new post where you can share your setup, your mount command, and some logs of what you are seeing at debug level.

Hello, I had set this up one time before but I am having no luck.

I was able to complete the rclone config and connect my google drive.

Now I would like to make my Google Drive (in this case, my rclone remote) visible as a drive or share to applications installed on my local box.

Do you have a document that covers this? This is a fresh install, so I can start all over.

The latest error is:
My remote is called gdrive:

When I type rclone ls gdrive:

I get:

"Failed to ls: googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded"

[root@wtf02 ~]# rclone mount gdrive: /upload
2019/04/17 23:57:41 mount helper error: fusermount: fuse device not found, try 'modprobe fuse' first
2019/04/17 23:57:41 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

Upload is a local directory at /upload.

I have tried running modprobe fuse, but nothing happens.

I’d start a new thread, as you are not using my settings. Happy to help, but this thread is more for questions related to my settings.

ok, sounds good.

I now have a server which is running CentOS 7.

My question is: how would you suggest I proceed?

I have 4 different folders already on the server (empty)
TVShows
Movies
TVShows (Foreign Language)
Movies (Foreign Language)

Should I create one mount and use symlinks? Or should I make one mount for each folder (so 4 in total)?

I have never used CentOS before (I was on a QNAP NAS, which handled a lot of things very… uniquely). That’s why I have an additional question: can I mount it via a service file like yours (gmedia-rclone.service), or do I need to mount it differently?

It’s really up to you, as it doesn’t matter whether you have a single mount, 4 mounts, or 4 folders in a single mount. It’s how you’d want to manage it. I use a single mount.

CentOS uses systemd, so you can use the same file for the most part, assuming you change the user and use the proper mount point.


Thank you for the quick response. It is really helping me a lot.
Is there anything special I need to be aware of if I want to put multiple mounts in that service file?

You can only run one mount per service file so you’d have multiple service files.
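If multiple files get tedious, systemd also supports templated units, so one file can serve several mounts. A sketch, with a hypothetical file name and mount paths (not from this thread):

```ini
# /etc/systemd/system/rclone@.service  (hypothetical template)
# %i is replaced by whatever follows the @ when you start the unit,
# e.g. "systemctl start rclone@gdrive" mounts gdrive: at /mnt/gdrive
[Unit]
Description=rclone mount for %i
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount %i: /mnt/%i --allow-other
ExecStop=/bin/fusermount -uz /mnt/%i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Each instance (rclone@gdrive, rclone@gcrypt2, …) runs as its own service, so they can be started and stopped independently.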


I am having an issue when I try to start the service file:

Do I need to set the RCLONE_CONFIG environment variable somewhere else as well?

This is my gmedia-rclone.service file

[Unit]
Description=RClone Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf

ExecStart=/usr/bin/rclone mount gcrypt2:Multimedia /home/GDrive \
--allow-other \
--allow-non-empty \
--vfs-cache-mode writes \
--dir-cache-time 96h \
--drive-chunk-size 32M \
--log-level INFO \
--log-file /opt/rclone/logs/rclone.log \
--timeout 1h \
--umask 002 \
--use-mmap \
--rc \
ExecStop=/bin/fusermount -uz /home/GDrive
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target

This is the error I am getting

[root@server system]# systemctl start gmedia-rclone
Job for gmedia-rclone.service failed because the control process exited with error code. See "systemctl status gmedia-rclone.service" and "journalctl -xe" for details.
[root@server system]# systemctl status gmedia-rclone.service
● gmedia-rclone.service - RClone Service
   Loaded: loaded (/usr/lib/systemd/system/gmedia-rclone.service; disabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Thu 2019-04-25 03:01:01 CEST; 828ms ago
  Process: 25671 ExecStart=/usr/bin/rclone mount gcrypt2:Multimedia /home/GDrive --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002 --use-mmap --rc ExecStop=/bin/fusermount -uz /home/GDrive (code=exited, status=1/FAILURE)
 Main PID: 25671 (code=exited, status=1/FAILURE)

Apr 25 03:01:01 server.domain.name rclone[25671]: 2019/04/25 03:01:01 Fatal error: unknown shorthand flag: 'z' in -z
Apr 25 03:01:01 server.domain.name systemd[1]: Failed to start RClone Service.
Apr 25 03:01:01 server.domain.name systemd[1]: Unit gmedia-rclone.service entered failed state.
Apr 25 03:01:01 server.domain.name systemd[1]: gmedia-rclone.service failed.
Apr 25 03:01:01 server.domain.name systemd[1]: gmedia-rclone.service holdoff time over, scheduling restart.
Apr 25 03:01:01 server.domain.name systemd[1]: Stopped RClone Service.
Apr 25 03:01:01 server.domain.name systemd[1]: start request repeated too quickly for gmedia-rclone.service
Apr 25 03:01:01 server.domain.name systemd[1]: Failed to start RClone Service.
Apr 25 03:01:01 server.domain.name systemd[1]: Unit gmedia-rclone.service entered failed state.
Apr 25 03:01:01 server.domain.name systemd[1]: gmedia-rclone.service failed.

What did I miss?
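(For what it’s worth, the "unknown shorthand flag: 'z' in -z" in the status output above looks like it comes from the trailing backslash after --rc: it continues the ExecStart line into the ExecStop= line, so rclone tries to parse -uz as its own flags. Dropping that backslash so the two directives are separate should clear that particular error, i.e. the tail of the [Service] section would read:)

```ini
  --use-mmap \
  --rc
ExecStop=/bin/fusermount -uz /home/GDrive
Restart=on-failure
```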

The rclone.log file shows this:

2019/04/25 02:53:46 Fatal error: failed to mount FUSE fs: fusermount: exec: "fusermount": executable file not found in $PATH

and also this

2019/04/25 03:03:44 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
2019/04/25 03:03:44 Failed to create file system for "gcrypt2:Multimedia": didn't find section in config file

How do I install fusermount on CentOS?
Do I need to use yum install fuse or yum install fuse-sshfs? Which one is correct?

If you aren’t using my settings, it’s best to start a new thread, as your mount commands and such aren’t my settings.

yum install fuse is your command.
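Once it’s installed, a quick sanity check that FUSE is usable might look something like this (commands assume root on CentOS 7; exact output will vary by system):

```shell
yum install -y fuse     # provides /usr/bin/fusermount
modprobe fuse           # load the kernel module if it isn't already
ls -l /dev/fuse         # the device node should exist once the module is loaded
which fusermount        # should resolve, so rclone can find it in $PATH
```

This covers both of the earlier errors: the missing /dev/fuse device and the "fusermount: executable file not found in $PATH" failure.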

This is awesome.

I think this is exactly what I need, but I'm going to type out loud for a bit just to be sure I understand your setup correctly.

Your /GD directory is a mount for your gcrypt remote directory.

You're using mergerfs to merge /data/local and /GD into a single /gmedia directory.

Your Sonarr/Radarr points to this /gmedia directory and will write to the first location specified in mergerfs. In this case, it will write to /data/local first.

Your upload_cloud script will move all files from /data/local to your gcrypt remote location at the specified interval, excluding patterns specified in /opt/rclone/scripts/excludes.
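The pieces above fit together roughly like this, as a sketch (mount points are the ones named in this thread; the mergerfs options shown are illustrative, not necessarily the exact set used in this setup):

```shell
# /GD = rclone mount of the gcrypt remote
rclone mount gcrypt: /GD --allow-other

# Merge the local staging area and the cloud mount into /gmedia.
# category.create=ff ("first found") makes new writes land in the
# first branch listed, i.e. /data/local.
mergerfs /data/local:/GD /gmedia -o allow_other,category.create=ff
```

Sonarr/Radarr only ever see /gmedia, so they don’t care whether a file is still local or has already been moved to the remote.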

I'm a bit unclear on what your gmedia-find.service does. I'm not very familiar with caching in this context or the rclone rc vfs/refresh command.

You got it. The find is just an extra and not needed, but it basically refreshes the file listing by doing a find, so by the first time Plex runs, it already has the directory/file structure in memory and the first scan goes fast.
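For reference, the same directory-cache priming can also be done through rclone’s remote control API, roughly like this (assumes the mount was started with --rc, as in the service file earlier in the thread):

```shell
# Ask the running mount to walk and cache the whole directory tree
rclone rc vfs/refresh recursive=true
```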

Excellent! Now to think about how I will migrate over my existing shows.

I have shows saved to multiple HDs at the moment (/media/Shows 01, /media/Shows 02, etc.).

Would it make sense to do:

  1. /usr/bin/rclone copy "/media/Shows 01/" gcrypt: -P --checkers 3 --log-file /opt/rclone/logs/upload.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs
  2. Once all files in /media/Shows 01 have been copied over to gcrypt remote location, mass update shows previously pointing to /media/Shows 01 in Sonarr to instead monitor /gmedia/Shows/show_name.

Yep. That's pretty much what I do and I can confirm it works 🙂

Question, in your readme it says:

They all get mounted up via my systemd scripts for gmedia-service.

This file no longer exists in the repo. Is there another method you use to do the rclone mount, mergerfs mount, and find command in order?

Edit: never mind, I see that you moved it into the other scripts using the After= directive.

@Animosity022 I've been using some of your settings for a while and they've been flawless. Thank you.

Quick question: is there any way to use "Scan my library automatically" or "Run a partial scan when changes are detected" in Plex with your rclone settings?

The reason I ask is that I have Bazarr adding subtitles, but Plex won't pick them up unless I manually refresh metadata.

If I turn on "Scan my library automatically" or "Run a partial scan" it doesn't seem to ever pick anything up.

So my setup with mergerfs should pick up changes and scan things. I tested by copying a file over just now:

May 19, 2019 20:23:40.500 [0x7f0d71ffb700] INFO - Library section 2 (TV Shows) will be updated because of a change in '"/gmedia/TV/The 100"/The.100.S05E01.en.forced.srt'

I have "scan when changes detected" and partial scans turned on, and it seems to work. I've never tried Bazarr since it doesn't quite do forced-only subtitles yet. Once that releases, I'm going to give it a whirl again, since SubZero will eventually go away.

Do you run Plex on the same server?

Mine definitely isn't picking up any changes with those settings turned on in Plex.

Yep. I run everything on the same server. I use mergerfs as well, though, and I'm like 99% sure that's why it works for me: I write everything locally first, and it gets uploaded later to my rclone mount (which doesn't support notify).
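A quick way to check whether filesystem change events actually reach a given path is inotifywait from the inotify-tools package (assuming it’s installed; path taken from this thread):

```shell
# Watch the merged TV directory and print every event as it happens
inotifywait -m -r /gmedia/TV
```

If copying a file into the merged path prints CREATE/CLOSE_WRITE events here, Plex’s "run a partial scan when changes are detected" should be able to see them too; if nothing prints, the mount isn’t delivering notifications.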

Ahh that'd be it.

I run my Plex server on a separate machine, reading a number of rclone mounts combined in a mergerfs mount.