Here is my systemd service file.
I have not been able to spot what caused the last scan. I guess it is related to a deep scan from either Plex, Radarr, or Sonarr. I have no other services on my server.
For your amazing OpenVPN config, is there a way to achieve the same thing using ufw and iptables?
I could disable WAN access with ufw, but if I install your configs, it connects and everything seems to be okay; however, the user still has no connection.
I'm not as familiar with UFW but I would assume it's possible.
The use case of the VPN config is:
- have a user called "vpn"
- have that user route all traffic out the VPN interface only
I don't think it matters much whether you have a local network or not, as I just have a few local rules for blocking my LAN stuff and the Plex server itself from taking connections on 32400.
I had that one line to allow my box to talk to my router that you commented out, but it can be removed as you noted.
I'm not sure though if UFW allows for user-specific rules or not.
UFW is just an interface for modifying iptables, so it should be possible. On startup it only extracts the already existing rules and adds its own, but those are translated into iptables commands, so the result is the same. It's just an easier way of managing common rules.
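For what it's worth, UFW itself has no per-user match, but the underlying iptables owner module can do exactly this. A minimal sketch (the interface names `tun0`/`eth0` are assumptions; the `vpn` user comes from the config described above):

```shell
# Let the "vpn" user send traffic out the tunnel only;
# anything it tries to send out the physical NIC is rejected.
iptables -A OUTPUT -o tun0 -m owner --uid-owner vpn -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner vpn -j REJECT
```

Since UFW just feeds rules into iptables, rules like these could also live in /etc/ufw/before.rules so they survive a ufw reload, though I haven't tested that combination myself.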
However, for some reason it still doesn't work. I am now wondering what the issue can be.
Would you mind executing this? `echo $(ifconfig tun0 | egrep -o '([0-9]{1,3}\.){3}[0-9]{1,3}' | egrep -v '255|(127\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})' | tail -n1)`
I took it from routing.sh. It outputs 0.0.0.0 for me.
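For anyone else following along, the pipeline just pulls the last IPv4 address out of the interface output, skipping netmasks and loopback. You can see what it does on a canned sample (the addresses below are invented for illustration):

```shell
# Hypothetical output of `ifconfig tun0` (addresses are made up)
sample='tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.8.0.6  netmask 255.255.255.255  destination 10.8.0.5'

# Same pipeline as routing.sh: extract every dotted quad, drop anything
# containing 255 (netmasks) or starting 127. (loopback), keep the last one.
echo "$sample" \
  | egrep -o '([0-9]{1,3}\.){3}[0-9]{1,3}' \
  | egrep -v '255|(127\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})' \
  | tail -n1
```

If tun0 has no IPv4 address at all, the pipeline prints nothing, so a 0.0.0.0 result would suggest the tunnel came up but was never assigned an address.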
I'm running Plex on Ubuntu 16.04.5 LTS in a VM running under VMware Fusion on an i7 Hackintosh running High Sierra. I'm mounting my Google Drive using rclone.
After reading through this thread and other similar ones, I did some tweaking to the rclone mount parameters, which has improved performance enormously. Previously it could take 20-30 seconds for a file to start playing with both Plex and Emby; with my new mount command it now takes 3-5 seconds. I'm not sure if all the parameters are necessary, or whether there are others that would improve performance even more, but there has been such a dramatic improvement that I thought it worth sharing. Here is the rclone.service script that I am now using:
```
[Unit]
Description=Mount and cache Google drive to /mnt/Gdrive
After=syslog.target local-fs.target network.target

[Service]
Type=simple
User=root
ExecStartPre=-/bin/mkdir /mnt/Gdrive
```
So clearly buffer-size & vfs-read-chunk-size are now much larger.
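For anyone reconstructing this unit, the rest of the [Service] section would be an `rclone mount` line and cleanup. The flag values below are assumptions for illustration, not the actual numbers from the post, but they show where buffer-size and vfs-read-chunk-size fit:

```
ExecStart=/usr/bin/rclone mount gdrive: /mnt/Gdrive \
    --allow-other \
    --buffer-size 256M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off
ExecStop=/bin/fusermount -uz /mnt/Gdrive

[Install]
WantedBy=multi-user.target
```

The remote name `gdrive:` is whatever you called the remote in your rclone config.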
I'm running Plex in an Ubuntu VM under VMware Fusion on an i7 Hackintosh running High Sierra. 4GB of RAM is allocated to the VM out of the total 16GB available to the system. I don't really use the system for anything other than Plex and occasionally Emby. It's only serving local clients, either an AppleTV 4K or an Amazon FireTV 4K, but only ever one client at a time.
I'm really pleased with this performance breakthrough. While I've always been pretty happy with mounting Google Drive on my local system, I found the 20-30 second delay before a file would start playing irksome, especially as it was often accompanied by freezing and stuttering for another 20-30 seconds before the file would carry on playing smoothly. Basically I must have had it misconfigured all along, and now it's performing much closer to optimum. I suspect that I could allocate more RAM to the VM and tweak the mount command to give rclone a lot more buffer or cache and improve performance even more. Does anyone have any suggestions?
I just looked at the rclone docs page describing how to set this up, and it looks like Google, as usual, have been changing their webpages and interface, so what I see bears little or no relation to what is described in the rclone docs, and I cannot figure out how to create my own client ID for rclone.
Can anyone provide a step by step guide to the current method of setting this up?
Can you confirm these look good, @nigelbb, and I'll send a pull request to update that part of the page.
1. Log into the [Google API Console](https://console.developers.google.com/) with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access.)
2. Select a project or create a new project.
3. Under "ENABLE APIS AND SERVICES" search for "Drive", then enable the "Google Drive API".
4. Click "APIs & Services -> Credentials" in the left-side panel.
5. Click "Create Credentials" and select "OAuth client ID". It will prompt you to set the OAuth consent screen product name if you haven't set one already.
6. Choose an application type of "Other" and click "Create"; any name such as "rclone" is fine.
7. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
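Once you have the two values, you can paste them at the client_id / client_secret prompts in the interactive `rclone config`, or set them non-interactively. The remote name `gdrive` and the placeholder values here are just examples:

```shell
# Create a Google Drive remote using your own OAuth client.
# Replace the placeholders with the values shown in the console.
rclone config create gdrive drive \
    client_id YOUR_CLIENT_ID.apps.googleusercontent.com \
    client_secret YOUR_CLIENT_SECRET
```

This will still open a browser for the OAuth authorisation step, the same as the interactive flow.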
Sorry, I just walked through this, and almost none of your instructions are correct (at least for my account when I log in). It's kind of similar, but different enough that your instructions are impossible to follow, although I think that I have succeeded purely by accident and would hate to have to try to repeat the exercise.
The word project does not appear anywhere on the page.
There is, however, a tab "ENABLE APIS AND SERVICES", and if I click on that I can enable the "Google Drive API".
"APIs & Services -> Credentials" is not present in the left-side panel.
If I click on a blue button labelled MANAGE, I then find "Credentials" as the last item in the list in the left-side panel.
If I click on "Credentials" I get
A button labelled ā+ CREATE CREDENTIALā
"Credentials compatible with this API
To view all credentials or create new credentials visit [Credentials in APIs & Services]"
I did luck out and eventually arrived at a screen where I found my Client ID & Client secret displayed, but I have no idea how I got there, or any way of retracing my steps to find it again. It also displayed this message along with the ID & secret:
"OAuth is limited to 100 [sensitive scope logins] until the [OAuth consent screen] is published. This may require a verification process that can take several days."
Why do Google make it so mind-blowingly complex to access their applications? The UI is just awful, and why do they keep changing it, so that when you want to repeat the operation you did last month you can't, because the UI has changed? I am an experienced developer and system administrator, but this stuff gets me tied in knots.
BTW thanks for your help. My complaints are with Google, not you :grinning: