Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

If you are using Google Drive, set up your own client ID/API key - this is critical!

Google Drive

My use case:

  • I use a local disk called data/local for temporary storage
  • I use /GD for my Google Drive encrypted storage
  • I write everything to a mergerfs mount called /gmedia, which contains Sonarr/Radarr/torrents/all my movies and TV shows. I do it that way because it supports hard linking for anything I download, as long as it's all on the same file system (see the sketch after this list)
  • I do not sync my torrent folder to the cloud
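As a rough sketch of what that mergerfs piece can look like - the paths and policy options below are illustrative placeholders, not necessarily the exact config (see the GitHub repo below for the real thing):

    # Illustrative fstab entry only - paths and options are placeholders.
    # Pools the local disk and the rclone mount into one /gmedia file system, so
    # Sonarr/Radarr/torrents and the media library all see a single mount, and hard
    # links work while files still live on the local branch.
    /data/local:/GD  /gmedia  fuse.mergerfs  allow_other,use_ino,category.create=ff  0  0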

I moved my scripts/commands/etc. over to a GitHub repo to keep things a bit cleaner and keep all my stuff there.

GitHub - animosity22/homescripts: My Scripts for Plex / Emby with Dropbox and rclone - this has all my current commands, and I'm working on a readme to better explain my thought process.

Please see the github for updates so I only maintain it in one place.


This looks very interesting @Animosity022. Thanks for testing it out. Any suggestion for handling failed uploads in this setup?

@ncw Why is the cmount option not included by default in the Linux builds? Any specific reason, or issues with using it?

I’ve honestly never hit an issue with failed uploads to this point.

If it became a problem, I guess you could just rclone move it after it finished the download to get past that.
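Something along these lines, as a sketch - the path and remote name are placeholders:

    # Hypothetical sweep of the local staging folder up to the remote.
    # /data/local and gcrypt: are placeholders; --min-age avoids grabbing files still being written.
    rclone move /data/local gcrypt: --min-age 15m --log-level INFO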

If my internet was down and it couldn’t upload, odds are, I’d never download the file anyway.

Very interesting. Where did you read about auto_cache? From looking at the man page it doesn’t look that useful

auto_cache
          This option enables automatic flushing of the data cache on open(2). The cache will
          only be flushed if the modification time or the size of the file has changed.
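For reference, if you did want to experiment with it, the option can presumably be passed straight through to libfuse on a cmount build - the remote name and mount point below are just placeholders:

    # Illustrative only - needs an rclone build with cmount; gcrypt: and /gmedia are placeholders.
    rclone cmount gcrypt: /gmedia --allow-other -o auto_cache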

Using -tags cmount means that rclone will link to a C library, and that means I’d need to set up a cross-compile toolchain for each supported OS :frowning:

Maybe I should make a linux/amd64 build with cmount - that would be relatively easy to fit in the build process. It can’t be the default though, as it needs the libfuse library and I don’t want to break the “no dependencies” part of rclone.
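For anyone who wants to try it themselves, building with the tag is roughly this - a sketch assuming Go and the libfuse development headers are already installed, run from inside a checkout of the rclone source:

    # Assumes Go and libfuse-dev (or your distro's equivalent) are installed.
    go build -tags cmount
    # The resulting binary then exposes the cmount command:
    ./rclone cmount --help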

It is probably possible to add auto_cache to mount using the https://github.com/bazil/fuse/ library rclone uses.

Yeah, it seems to deal more with the flushing aspect than the actual caching. From looking at the documentation, kernel_cache looks like a better option.

From what I can see in the API docs, there seems to be some sort of cache but no explicit documentation exists regarding the options.

I avoided kernel_cache as my thought was that if something changed a file via an rclone upload or similar, serving stale cached data would not be good for my use case.

auto_cache would seem to flush it if the mod time changed.

It seems like it uses the OS filesystem cache, which keeps the file in memory a bit longer depending on how you have your OS set up.

I did a bit of digging through the code…

It looks like kernel_cache is the same as auto_cache except auto_cache flushes stuff if it changes.

It looks like kernel_cache is implemented like this in bazil FUSE

OpenKeepCache   OpenResponseFlags = 1 << 1 // don't invalidate the data cache on open

That would be really easy to try out…

I did that here

https://beta.rclone.org/branch/v1.42-031-g87d64e7f-fuse-auto_cache/

Let me know what you think!

I think auto_cache is kernel_cache applied selectively when the file hasn’t changed (but I need to dig a bit more into the fuse source)

I fixed my mount, as I was having some things not working right with buffer and VFS under cmount, so I removed it for now.

I run a similar setup, and if, say, some uploads fail due to the daily limit being reached, they’ve stayed in the temp writes folder and retried. But a few have failed due to, I guess, a lost connection or certain chunks failing, and those didn’t resume - they had to start over. It only happens sometimes, with really large files. I’ve noticed that by forcing IPv4 I get fewer timeouts, which may work better with Google’s servers.

Is that your plex IP?

I have a Linux box that has a bunch of stuff on it.

The .30 interface is my non-VPN’d interface. I have a .31 interface that I route all my torrent traffic through via a VPN #paranoid.

Excellent - I squeezed out a few more seconds for large files with:

rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G --buffer-size 512M --umask 002 --bind 172.30.12.2 --cache-dir=/mnt/user/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs

I couldn’t get these to work (Unraid rclone plugin user), maybe because I’m still using a unionfs mount? I think I’m going to stay with my offline rclone upload job anyway, as the one failed/lost upload would probably be my most vital file!

To get those to work, you have to compile from source. I’m not sure how that would work on Unraid, as I’m a plain Linux user.

What was your thinking behind this? Also, the 16MB chunk size, which I think is new.

Thanks

Yeah, I made a few changes as I was updating based on the better clarification.

I have a gigabit pipe, so I was doing a bit more testing, and 16MB chunks seemed to be a better all-around starting number for folks who might not be lucky enough to have gigabit FIOS to their house :slight_smile:

Since I have plenty of memory, I raised the caps on the max chunk size and buffer to match.
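To make that concrete, the knobs being discussed are roughly these - the exact values below are illustrative, not a one-size-fits-all recommendation, and the remote and mount point are placeholders:

    # Illustrative values only - tune chunk size and buffer to your own bandwidth and RAM.
    # Reads start at 16M chunks and double as a file keeps streaming, up to the limit.
    rclone mount gcrypt: /gmedia \
      --vfs-read-chunk-size 16M \
      --vfs-read-chunk-size-limit off \
      --buffer-size 1G \
      --allow-other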

@Animosity022
Sorry, but I am confused - are you mounting the crypt through the cache backend, or using VFS?


VFS uses a similar chunked download for files, so you should not get banned, but it allows the chunk size to scale so it can grow as a file keeps being read.

dir-cache gives you directory/file name caching for however long you configure. It basically removes the need to use the cache backend.

Just want to share my experience with the setup from Animosity022.

I was running plexdrive with no encryption. Start times took approx. 5-10 secs.

My new setup is rclone with all data encrypted on Google Drive - tested this morning after my Plex libraries were done.
A normal movie started instantly :slight_smile:

So nice @Animosity022 . Keep up the good work.

Cheers

Morphy

Happy to hear! Thanks for sharing.

With my setup, I keep files younger than 30 days local until a disk usage threshold is met, then an rclone move script uploads the older files, so I don’t need to write to my gdrive mount. Can this still be used as read-only? If not, what needs to be changed?
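For reference, the kind of upload job being described looks roughly like this - a sketch only, with placeholder paths, remote name and threshold, not a tested script:

    #!/bin/bash
    # Sketch of the described workflow: only start uploading once local disk usage
    # crosses a threshold, and only move files older than 30 days. Placeholders throughout.
    USED=$(df --output=pcent /data/local | tail -1 | tr -dc '0-9')
    if [ "$USED" -ge 80 ]; then
        rclone move /data/local gcrypt: --min-age 30d --log-level INFO
    fi

As for the read-only question: rclone mount does have a --read-only flag, which would fit this pattern if nothing ever needs to write through the mount itself.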