I use a local disk at data/local for temporary storage
I use /GD for my Google Drive (GD) encrypted storage
I write everything to a mergerfs mount called /gmedia, which contains Sonarr/Radarr/torrents and all my movies/TV shows. I do it that way because mergerfs supports hard linking for anything I download, as long as it's all on the same file system (see the example mount command below)
I do not sync my torrent folder to the cloud
I moved my scripts/commands/etc over to a GitHub repo to make things a bit cleaner and keep all my stuff there.
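If it helps, here is a rough sketch of that kind of mergerfs mount. The paths match the ones above (assuming the data/local disk sits at /data/local); the mergerfs options are just the ones I'd reach for first, not a tested recommendation:

# pool the local disk and the Google Drive mount into one file system,
# so Sonarr/Radarr/torrents can hard link and move within it
# (category.create=ff makes new files land on the first branch, i.e. the local disk)
mergerfs -o defaults,allow_other,use_ino,category.create=ff /data/local:/GD /gmedia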
Very interesting. Where did you read about auto_cache? From looking at the man page it doesn't look that useful
auto_cache
This option enables automatic flushing of the data cache on open(2). The cache will only be flushed if the modification time or the size of the file has changed.
Using -tags cmount means that rclone will link against a C library, which means I'd need to set up a cross-compile toolchain for each supported OS.
Maybe I should make a linux/amd64 build with cmount - that would be relatively easy to fit into the build process. It can't be the default though, as it needs the libfuse library and I don't want to break the "no dependencies" part of rclone.
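For anyone who wants to try it in the meantime, a cmount-enabled binary can be built locally. This is just a sketch, assuming Go and the libfuse development headers are already installed:

# build rclone with the cmount tag so it links against libfuse
git clone https://github.com/rclone/rclone.git
cd rclone
go build -tags cmount

On Linux that should add an rclone cmount command alongside the normal mount, if I've understood the build tags correctly.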
Yeah, it seems to deal more with the flushing aspect than the actual caching. From looking at the documentation, the kernel_cache option looks like a better choice.
From what I can see in the API docs, there seems to be some sort of cache but no explicit documentation exists regarding the options.
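If a cmount build does become available, I'd expect the option to be passed straight through to libfuse, something along these lines (a sketch only, I haven't verified the exact invocation; gcrypt: is a stand-in for the encrypted remote):

# pass the raw libfuse kernel_cache option through on a cmount-based mount
rclone cmount gcrypt: /GD -o kernel_cache --allow-other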
I run a similar setup. If, say, some uploads fail because the daily limit has been reached, they stay in the temp writes folder and retry. But a few have failed due to (I'm guessing) a lost connection or certain chunks failing, and those didn't resume - they had to start over. It only happens occasionally, and only with really large files. I've also noticed that forcing IPv4 gives me fewer timeouts, which may play better with Google's servers.
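For what it's worth, the way I force IPv4 is with rclone's --bind flag, roughly like this (the source path and remote name are placeholders for my own setup):

# bind outgoing connections to 0.0.0.0 so rclone sticks to IPv4
rclone move /data/local gcrypt: --bind 0.0.0.0 --transfers 4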
I couldn't get these to work (unraid rclone plugin user), maybe because I'm still using a unionfs mount??? I think I'm going to stay with my offline rclone upload job anyway, as the one failed/lost upload would probably be my most vital file!
Yeah, I made a few changes as I was updating based on the better clarification.
I have a gigabit pipe, so I did a bit more testing, and 16MB chunks seem to be the better all-around number for folks who might not be lucky enough to have gigabit FiOS to their house.
Since I have plenty of memory, I raised the caps on max chunk size and buffer size to match.
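To make that concrete, the knobs I mean are roughly these (the values are illustrative, not a recommendation - tune them to your own memory and connection):

# upload chunk size for Google Drive; 16M as the middle-ground default
--drive-chunk-size 16M
# how much of each open file rclone buffers in memory
--buffer-size 256M
# read-ahead chunking; 'off' means no upper limit on chunk growth
--vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off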
I was running plexdrive with no encryption. Start times took approx. 5-10 secs.
My new setup is rclone with all data encrypted on Google Drive - I tested this morning after my Plex libraries were done.
A normal movie started instantly.
With my setup, I keep files younger than 30 days local; once a disk-usage threshold is met, an rclone move script uploads the older files, so I don't need to write to my gdrive mount. Can this still be used as read-only? If not, what needs to be changed?
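For context, the move script boils down to something like this (paths and remote name are placeholders for my setup; the disk-usage check lives in a small wrapper around it):

# upload anything older than 30 days from the local disk to the encrypted remote
rclone move /data/local gcrypt: --min-age 30d --transfers 4 --checkers 8 --log-level INFO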