What is the problem you are having with rclone?
Well, first I'd like to say thanks for the quick response and solution to my problem yesterday. As a result, I've decided to lay out a bigger problem I'm having and see how you guys do with that. I don't doubt you're up for the challenge. I apologize in advance for the lengthy read.
I have a VPS. Currently, I am using mergerFS and some of Animosity's homescripts (specifically a customized version of `/scripts/upload_cloud`) to offload completed downloads to the appropriate place in my encrypted cloud storage. However, I still have the problem that stagnant incomplete downloads, as well as those that are seeding, take up my limited disk space (1200 GB, around 200 GB of which is used by the OS, an unnecessarily large swap file [26 GB], and Plex/Sonarr/Radarr metadata).
qbittorrent-nox v126.96.36.199 (if I'm not mistaken, this seems to be the newest version for xenial). When I torrent with files on the cloud (torrenting local files works fine), seeding and incomplete torrents are problematic:
- Incomplete downloads will error out and require me to move all the files to a local location and do a forced re-check.
- Re-checking torrents is abysmally slow. (This was expected, but it's horrendous at the moment.)
- Seeding torrents start briefly, send a tiny bit of data, then stall out.
- My API hits seem very high: between 1 and 2 million a day. (Also expected, as there is a lot of content.) [Is that even that high? What's the temp-ban limit on API hits?]
After some research, it's my understanding that this should be workable either using a cache remote or using the VFS cache (which I thought I was already doing properly, but I believe I'm mistaken). I am uploading everything to the cloud to free up space for testing these methods. In the meantime, let's figure out my testing setup. I want to:
- Have a mounted remote for both a cache remote and a VFS cache, for testing purposes.
- Use the same encryption passwords on both, so I can server-side move folders between them.
- Stream media well. It already does this via Plex with my current settings, so I might just keep that mount as it is. It is cache-less (serving from RAM/swap, I assume?).
- Seed a good number of torrents simultaneously. Let's say 200, the majority of which are inactive or small (10-100 MB).
- Download a reasonable number of torrents at a time, perhaps 10 active downloads. I could possibly use qBittorrent settings so that only a couple of torrents are considered active above a certain download speed, though that might allow too many 'inactive' torrents to compete for cache space.
- Keep an unlimited amount of incomplete torrent data stored on the cloud when stagnant (please push your stagnancy off my poor little drive, thank you very much), probably after a certain amount of inactivity. It could then be pulled back down and stored locally if the torrent becomes active again.
- Use `upload_cloud` to push completed content to the cloud, as I use a team drive and service accounts in it rather than always using the quota of my main account.
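For the stagnancy rule, something like this is what I have in mind (a hypothetical dry-run sketch, not anything I'm running yet; the `tcrypt00:` destination, the `incomplete_stale` subfolder, and the 7-day threshold are all guesses on my part):

```shell
# Hypothetical helper: print the `rclone move` that would offload each
# incomplete-torrent directory untouched for more than $2 days.
# It only echoes the command so the sketch is safe to run; drop the
# `echo` to actually move data. Destination remote/folder are assumed.
offload_stale() {
  incomplete_dir="$1"
  stale_days="$2"
  # -mtime +N matches entries whose mtime is strictly older than N days
  find "$incomplete_dir" -mindepth 1 -maxdepth 1 -mtime +"$stale_days" -print0 2>/dev/null |
  while IFS= read -r -d '' dir; do
    echo rclone move "$dir" "tcrypt00:.downloads/incomplete_stale/${dir##*/}" \
      --delete-empty-src-dirs -v
  done
}

# Dry run against my incomplete directory (skipped if it doesn't exist):
INCOMPLETE=/mnt/gmedia/.downloads/qbittorrent/incomplete
[ -d "$INCOMPLETE" ] && offload_stale "$INCOMPLETE" 7 || true
```

The reverse direction (pulling a torrent back down when it goes active again) would just be the same move with source and destination swapped.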
`gdrive:/.cache` < `gcache:` < `ccrypt:`
```
[gcache]
type = cache
remote = gdrive:/.cache
plex_url = http://127.0.0.1:32400
plex_username = aethaeran
plex_password = REDACTED
chunk_size = 10M
info_age = 2d
chunk_total_size = 300G
plex_token = REDACTED
```
If I'm correct, my errors have to do with my rclone mount settings. Specifically, I think it's because I have not been using `--vfs-cache-mode` or the following related settings:
```
--cache-dir string                  Directory rclone will use for caching.
--vfs-cache-mode CacheMode          Cache mode off|minimal|writes|full (default off)
--vfs-cache-max-age duration        Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix     Max total size of objects in the cache. (default off)
--vfs-cache-poll-interval duration  Interval to poll the cache for stale objects. (default 1m0s)
--vfs-write-back duration           Time to writeback files after last use when using cache. (default 5s)
```
What I think I should add to my service:
```
--cache-dir /mnt/vfs_cache \
--vfs-cache-mode full \
--vfs-cache-max-age 336h \
--vfs-cache-max-size 300G \
```
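Folded into the existing gcrypt mount unit, the `ExecStart` would then end like this (a sketch of my plan, only the relevant tail shown):

```
ExecStart=/usr/bin/rclone mount \
  ...existing flags as in the unit below... \
  --vfs-read-chunk-size 32M \
  --cache-dir /mnt/vfs_cache \
  --vfs-cache-mode full \
  --vfs-cache-max-age 336h \
  --vfs-cache-max-size 300G \
  gcrypt:/ /mnt/gcrypt
```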
Overlay of my directories
- /mnt/gmedia/.downloads/qbittorrent/incomplete # Torrents in process, which I want to stay in the cache
- /mnt/gmedia/.downloads/qbittorrent/complete # Torrents completed. Need seeding, but would rather offload with service accounts.
Plex > /mnt/gmedia
- Sonarr >HardLinks> /mnt/gmedia/Videos/Television
- Radarr >HardLinks> /mnt/gmedia/Videos/Movies
Yes, due to the hard-linking, this means I'm uploading any Plex content to the cloud at least twice. This is intended, in order to retain a copy with the original filenames and structure.
There is a bit in Animosity's upload_cloud about not pointing it at cloud folders because that can create a loop. I'm assuming this is avoided if we are moving from a local mount to a remote, or between two separate remotes. I certainly don't see it happening.
Here is a snippet from my upload_cloud:
```
/usr/bin/rclone move \
  /mnt/gcrypt/.downloads/ tcrypt00: \
  -vP \
  --exclude-from /opt/scripts/encrypt_excludes \
  --delete-empty-src-dirs \
  --drive-stop-on-upload-limit \
  --max-transfer 749G
```
It's important that I point out `/opt/scripts/upload_excludes` at this time:
```
$RECYCLE.BIN/**
System Volume Information/**
$AV_ASW/**
msdownld.tmp/**
*partial~
.downloads/qbittorrent/incomplete/**
.downloads/nzbget/intermediate/**
```
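Since the script rotates through service accounts, the core of my customization amounts to something like this (a hypothetical echo-only sketch, not the actual script; the `tcrypt00`/`tcrypt01` remote names come from my config below, and the account count is just illustrative):

```shell
# Hypothetical sketch of the service-account rotation: build the same
# `rclone move` for each crypt remote in turn, so no single account
# exhausts Google's daily upload quota. Echo-only dry run -- remove the
# `echo` to execute for real.
upload_via() {
  remote="$1"
  echo rclone move /mnt/gcrypt/.downloads/ "${remote}:" \
    -vP \
    --exclude-from /opt/scripts/encrypt_excludes \
    --delete-empty-src-dirs \
    --drive-stop-on-upload-limit \
    --max-transfer 749G
}

# Try each service-account crypt remote in order:
for r in tcrypt00 tcrypt01; do
  upload_via "$r"
done
```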
Then I just server-side move all team drive contents back:
```
/usr/bin/rclone move \
  tdrive:/.obfuscated gdrive:/.obfuscated \
  --drive-server-side-across-configs \
  --log-file /logs/transfer.log \
  --verbose 3 \
  -P \
  --drive-stop-on-upload-limit \
  --max-transfer 749G \
  --delete-empty-src-dirs
```
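As a sanity check after that move, comparing sizes on both sides should show the team drive emptying out and the main drive growing (plain `rclone size`, with the same paths as above):

```
rclone size tdrive:/.obfuscated
rclone size gdrive:/.obfuscated
```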
Which OS you are using and how many bits (eg Windows 7, 64 bit)
```
Description:  Ubuntu 16.04.6 LTS
Release:      16.04
Codename:     xenial
```
What is your rclone version (output from `rclone version`)
```
rclone v1.54.0
- os/arch: linux/amd64
- go version: go1.15.7
```
Which cloud storage system are you using? (eg Google Drive)
```
 7 / Cache a remote
   \ "cache"
11 / Encrypt/Decrypt a remote
   \ "crypt"
15 / Google Drive
   \ "drive"
```
The rclone config contents with secrets removed.
```
[gdrive]
type = drive
scope = drive
token = REDACTED
client_id = REDACTED
client_secret = REDACTED
root_folder_id = ROOT

[gcrypt]
type = crypt
remote = gdrive:/.obfuscated
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

[tdrive]
type = drive
token = REDACTED
team_drive = TEAM_DRIVE_ID

[tcrypt]
type = crypt
remote = tdrive:.obfuscated
password = REDACTED
password2 = REDACTED

[gcache]
type = cache
remote = gdrive:/.cache
plex_url = http://127.0.0.1:32400
plex_username = aethaeran
plex_password = REDACTED
chunk_size = 10M
info_age = 2d
chunk_total_size = 300G
plex_token = REDACTED

[ccrypt]
type = crypt
remote = gcache:
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

[tdrive01]
type = drive
service_account_file = /opt/rclone/service_accounts/user01
team_drive = TEAM_DRIVE_ID

[tcrypt01]
type = crypt
remote = tdrive01:.obfuscated
password = REDACTED
password2 = REDACTED
```
```
[Unit]
Description=RClone Service (gcrypt)
Wants=network-online.target
After=network-online.target
AssertPathIsDirectory=/mnt/gcrypt

[Service]
Type=notify
Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount \
  --config=/root/.config/rclone/rclone.conf \
  --allow-other \
  --dir-cache-time 1460h \
  --log-level INFO \
  --log-file /logs/gcrypt.log \
  --poll-interval 15s \
  --uid 1000 \
  --gid 1000 \
  --umask 000 \
  --dir-perms 0777 \
  --file-perms 0777 \
  --user-agent aethaeran \
  --rc \
  --rc-addr :5573 \
  --vfs-read-chunk-size 32M \
  gcrypt:/ /mnt/gcrypt
ExecStop=/bin/fusermount -uz /mnt/gcrypt
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5573 _async=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
```
[Unit]
Description=RClone Service (ccrypt)
Wants=network-online.target
After=network-online.target
AssertPathIsDirectory=/mnt/ccrypt

[Service]
Type=notify
Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount \
  --config=/root/.config/rclone/rclone.conf \
  --allow-other \
  --dir-cache-time 1460h \
  --log-level INFO \
  --log-file /logs/ccrypt.log \
  --poll-interval 15s \
  --uid 1000 \
  --gid 1000 \
  --umask 000 \
  --dir-perms 0777 \
  --file-perms 0777 \
  --user-agent aethaeran \
  --rc \
  --rc-addr :5577 \
  ccrypt:/ /mnt/ccrypt
ExecStop=/bin/fusermount -uz /mnt/ccrypt
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5577 _async=true
Restart=on-failure
```
```
[Unit]
Description=MergerFS Mount (gmedia)
Wants=network-online.target gdrive.service ccrypt.service gcrypt.service tcrypt.service
After=network-online.target gdrive.service ccrypt.service gcrypt.service tcrypt.service

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs \
  /mnt/ccrypt:/mnt/gcrypt:/mnt/tcrypt:/mnt/gdrive \
  /mnt/gmedia \
  -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
(I will not pretend to have memorized all of this, but at least I skimmed)
https://rclone.org/docs/
https://rclone.org/commands/rclone_mount/
https://rclone.org/cache/
https://github.com/animosity22/homescripts
https://github.com/animosity22/homescripts/blob/master/systemd/rclone.service
https://github.com/animosity22/homescripts/blob/master/systemd/gmedia.service
https://github.com/animosity22/homescripts/blob/master/scripts/upload_cloud
https://forum.rclone.org/t/asking-about-seeding-from-google-drive-storage/17551
https://forum.rclone.org/t/help-setting-up-rclone-gdrive-gcatche-gcrypt-with-seedbox/13117
https://forum.rclone.org/t/permaseed-with-rclone/8316
TL;DR - My Questions:
Why does a cache remote need information about my Plex server?
Where does a cache remote store files if no cache location is defined?
You can set up a VFS cache and a cache remote to function arguably the same way: load up to X GB of data, then start evicting the least recently accessed first. Correct? If yes, my settings should have both of them capped at 300 GB, correct?
Is my variation of `upload_cloud` going to interfere with anything? I feel like it will, but my brain can't take any more today.
Does `gdrive:/.obfuscated` even NEED to differ from `gdrive:/.cache`? Could both of those point to the same spot?
I can't believe I'm going to open this can of worms, but which is better for my ideal set-up: the cache remote or the VFS cache?
Hope this all makes sense to someone. Don't scratch your head too hard, and feel free to ask if I've left anything out.