Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Hi, thanks so much for the settings.

However, I've run into an issue with files not showing up.

I have one machine doing the downloading and uploading using the same setup (mergerfs, crypt, and GD remote), and another machine running Plex with the crypt and GD remote.

I've noticed some files were uploaded to the cloud but can't be seen on either machine. The files only appear after unmounting and mounting again.

[Unit]
Description=RClone Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
KillMode=none
RestartSec=60
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
--allow-other \
--dir-cache-time 1000h \
--log-level INFO \
--log-file /opt/rclone/logs/rclone.log \
--poll-interval 15s \
--umask 0000 \
--rc \
--rc-addr :5572 \
--rc-no-auth \
--cache-dir=/cache \
--vfs-cache-mode full \
--vfs-cache-max-size 50G \
--vfs-cache-max-age 336h 
ExecStop=/bin/fusermount -uz /GD
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target
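
(For reference, with the --rc settings in the unit above, the directory cache can also be refreshed by hand when files uploaded from another machine haven't shown up yet, instead of unmounting and remounting. This is the same call the ExecStartPost line makes:)

rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572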

Can you help with this?
Many thanks! 🙂

Sure, start a new post and use the help and support template.

Hey Animosity022, I've got a quick question about your setup. I noticed you are using a 1TB SSD for rclone disk caching with vfs-cache-mode full. Have you noticed any premature degradation?

I have a very similar setup to yours and I see north of 30GB written daily to the SSD from rclone cache alone.

Thanks again for sharing your setup and keeping it updated!

@spotter had a good post about that in relation to fragmentation, and it got me to check the 1TB SSD you're asking about. Based on my usage over the last ~7 months and the rated lifespan of the drive, it should last ~4.2 years.

My 6TB spinning disk started to show bad sectors yesterday after 5 years of use, which seems pretty good to me overall considering it was a desktop drive that had been powered on that whole time.

I'd suggest using a spinning disk if you are very concerned, but if ~4 years seems fine, just use the SSD. I figure that for ~70, 4 years of constant use is an okay investment for me, even if it fails exactly when it is supposed to.

I'd be concerned about using an SSD as a cache device without the expectation that it will wear out. Consumer SSDs aren't meant to be cache devices for write-heavy workloads. I agree with the logic above that if a 4-year plan is good enough for you, then it should be fine, but one has to understand this trade-off.

With that said, my latest thread is me wondering if I can get away with a smaller cache that lives entirely in RAM (i.e. tmpfs on Linux) to avoid this issue and to sidestep spinning-disk performance problems. My spinning-disk performance is also negatively impacted because it's on a RAID5, and RAID5 with fragmented files is terrible for performance, especially if deleting files causes them to be zeroed out.

I agree, this might catch people off guard. Not everybody checks their SSD's SMART numbers, and they'll just learn the drive is done out of the blue. It's even worse if they are sharing it with the OS/Plex, as in the best case they are left with a read-only drive.
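
(For anyone who wants to check, a rough sketch of reading those SMART numbers with smartctl; the device names are placeholders and the exact attribute names vary by vendor:)

# SATA SSDs usually expose a lifetime write counter such as Total_LBAs_Written
smartctl -a /dev/sda | grep -i written
# NVMe drives report "Data Units Written" and "Percentage Used" instead
smartctl -a /dev/nvme0 | grep -i -E 'data units written|percentage used'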

Since the cache is designed to hold the content that was most recently read, it's difficult to calculate. I would see increased value if we could ask for the newest content to stay local (kind of like how Animosity handles his mergerfs). For some people it might be better to remove the VFS cache and use the old setup without it.

You should be able to use a RAM disk and point your cache at it. But you'd need enough RAM for this, as assigning 4-8GB won't make much of a difference. So it would depend on your setup and available RAM.

Per what was described, the cache has to be able to fit the entire file, as otherwise it will have problems, so it has to be larger than the largest file you expect. I have 48GB of RAM, but that's not good enough for my use case, where my largest file might be ~100GB.
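
(For anyone whose largest files do fit in RAM, a minimal sketch of what a tmpfs-backed cache could look like; the mount point and sizes are placeholders, and the cache limit should stay below the tmpfs size:)

mkdir -p /mnt/rclone-cache
mount -t tmpfs -o size=32G tmpfs /mnt/rclone-cache
rclone mount gcrypt: /GD \
  --cache-dir=/mnt/rclone-cache \
  --vfs-cache-mode full \
  --vfs-cache-max-size 24G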

Hi @Animosity022!

After reading through the entire thread, I'm seriously looking at implementing part of this setup. Looking at this post:

A photo is provided in that post that shows a diagram of how your setup works.

Since I have a completely separate machine in charge of uploading to my GDrive, the only portion that would interest me for my PLEX setup would be the "rclone Crypt Layer (mounted to /GD)". To be clear, I don't want any of the VPN stuff, any of the proxy stuff... just the PLEX crypt access. My library is uploaded overnight, and it's only for home access.

Following the documentation on your github page, am I correct in assuming that the only scripts I need are:

  • the gmedia.service systemd script, and;
  • the rclone.service systemd script

...along with all required setups for mergerfs?

Even though the only contents of the mergerfs mount would be the rclone crypt mount, I think this would be the easiest way to use your script(s) without massive changes.
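
(For illustration only, a sketch of what that single-branch mergerfs mount could look like; the paths are placeholders and the options are not necessarily the exact ones from the linked setup:)

# /GD is the rclone crypt mount; a local branch could be added in front of it later
/usr/bin/mergerfs /GD /gmedia -o use_ino,allow_other,cache.files=partial,dropcacheonclose=true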

Thanks again for the hard work and continued support in this thread. It's very helpful!

Edit: to clarify, I've created client IDs / API keys on all devices that use rclone to interact with the gdrive crypt mount I'm currently using. However, only one device would be using the rclone mount to stream content. All other devices are upload nodes.

That seems correct to me. Let me know if you have any questions!

Will do! If it ends up working, I'll probably provide a how-to guide that I'll cross-post to your github wiki content. That way, you may be able to direct people there for some type of support.

Again, awesome work!

How do you manage files that are seeding (and have been hard-linked), with your upload script?

I don't upload from my seed area, only from my 'completed' area.

@Animosity022, I've been following your guide for a while now, and I will try to check back periodically to see if any changes have been made. I noticed that (somewhat) recently you added some additional flags to your rclone settings specific to the VFS cache. I believe I understand the purpose behind using that, but I wanted to ask one clarifying question. Having that enabled would really only be beneficial if the system running rclone is on your local network, correct? I'm running everything on a remotely hosted seedbox, so the only difference for me would be whether it's cached on the seedbox drive vs. accessed through my Google Drive. Unless this could somehow help with API limit bans, I don't think I would need it for my use case. Just wanted to make sure I wasn't missing something.

Which setting are you asking about? I think the majority are beneficial regardless of location, as they handle specific situations.

If you have internet issues between client <-> server, nothing can really fix that.

Sorry, the settings I’m referring to are below.

# The local disk used for caching
--cache-dir=/cache \
# This is used for caching files to local disk for streaming
--vfs-cache-mode full \
# This limits the cache size to the value below
--vfs-cache-max-size 800G \
# This adds a little buffer for read ahead
--vfs-read-ahead 256M \
# This limits the age in the cache if the size is reached and it removes the oldest files first
--vfs-cache-max-age 1000h \
# This sets a per file bandwidth control and I limit this to a little bigger than my largest bitrate I'd want to play

From trying out the settings, it looks like it caches files from my gdrive to my seedbox drive. After thinking about it, I wasn't sure what advantage this gives me, since my seedbox is also a remote drive. The only potential upside I could see would be if it somehow helped with API bans, but I'm not sure how it would do that. I've never had issues with connection speed either from my home network to my seedbox or from my seedbox to my gdrive (other than the previously mentioned occasional API ban).

I don't see any reason not to use the cache regardless of where it is. I'd give it a try and see how it works. By reading ahead and caching, you remove a lot of variables and keep things generally smoother.

Is there a reason you’re still using mergerfs and your upload cloud script instead of the new VFS mode options (and the writeback delay option)? I definitely trust your expertise, but I’m just trying to understand some of these new (to me) settings.
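
(For context, the writeback delay being asked about is presumably rclone's --vfs-write-back option, which controls how long a file sits in the cache after being closed before it is uploaded; a sketch with an illustrative value:)

rclone mount gcrypt: /GD \
  --vfs-cache-mode full \
  --vfs-write-back 1h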

I use hard links with Sonarr and Radarr.

Sorry for my ignorance, but do hard links not work with the vfs cache?

No, they don't.

felix@gemini:/GD$ touch test1
felix@gemini:/GD$ ln test1 test2
ln: failed to create hard link 'test2' => 'test1': Function not implemented
felix@gemini:/GD$
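
(By contrast, hard links work fine within a single local filesystem, which is why the local mergerfs branch plus the upload script is kept; a sketch with placeholder paths on the same local disk:)

touch /local/downloads/test1
ln /local/downloads/test1 /local/media/test2   # succeeds: same filesystem, no data copied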