What is the problem you are having with rclone?
No problem with rclone - the actual problem, if anything, is that it's got new features I'm not familiar with and haven't had to learn about, because it's worked so well lol.
Preface: A few years back, I set up a box using a 'docker setup' service many of you have probably heard of called PG. It didn't work well enough, and I stumbled upon this forum and the guy that's super popular - Animosity022. Using his scripts, I set up what I needed and it worked FLAWLESSLY for years. Until now...enter synchronous gigabit fiber to the home. lol.
Now I started running into two issues (neither to do with rclone). One is that my bandwidth was now too much for the small hardware box to handle while my media server was transcoding and a VPN connection was in use for downloading ISOs.
The other was a certain limit of 750GB per 24 hours, which is now easy to hit.
Enter solutions.
I came back to the forums to research and found that Animosity has made quite a few tweaks to his stuff. After digging around, I still like his idea the best: not live-uploading (the downloader copying directly into the rclone mount - some friends and colleagues use this, and it works well enough for them), but rather uploading on a script cycle. I have a few reasons for this:
- This will allow a single 'mount' to serve the data to my media server, without issues on that rclone remote.
- This will allow use of several rclone remotes, each configured to run the 'move' command at different times of day, cron style (see the sketch just below).
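Roughly what I have in mind for the schedule - the remote names, script path, and times here are just placeholders:

```
# crontab on the downloader box (illustrative only)
0 1  * * * /opt/rclone/scripts/upload.sh remote1:crypt
0 7  * * * /opt/rclone/scripts/upload.sh remote2:crypt
0 12 * * * /opt/rclone/scripts/upload.sh remote3:crypt
```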
As I started trying to architect this solution, I realized there are features similar to this 'pause before uploading' built into rclone that may be new since I last researched, back in 2019. For example, I see some people using vfs-cache and related vfs flags that will automatically hold files, storing them in a cache directory and then uploading after a set time following the last edit. This was the approach I was going to go with, until (I think? I was testing last night) it caused my downloader to crash several times, my remote/rclone service to kill itself, and so on...lol
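For reference, these are the vfs flags I understand to implement that 'hold locally, then upload after a delay' behaviour (values here are just examples; the same flags appear in my mount command further down):

```
--vfs-cache-mode full      # cache reads and writes on local disk
--cache-dir /path/to/cache # where the cached files live
--vfs-write-back 300s      # upload a cached file ~5 minutes after it is closed
--vfs-cache-max-size 10G   # evict cached data once the cache grows past this
```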
So my troubleshooting today leads me to this idea:
One box running docker. In it, an rclone mount to /gdrive. Additionally, a mergerfs between it and a location for staged files waiting to be uploaded. This mergerfs will be something like /glocal:/gdrive -> /gmedia. This is how I started with Animosity's stuff 3 years ago.
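Something along these lines for the mergerfs layer - the exact option set is just a sketch, with /glocal listed first so new writes land on local disk (category.create=ff means 'first found'):

```
mergerfs /glocal:/gdrive /gmedia \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true,minfreespace=0
```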
This is where I hit a 'hopefully some people have tried this stuff before and run into the issues, so I'm not reinventing the wheel' moment. My ORIGINAL thought process was to serve the merged /gmedia over NFS to another box, also running docker, which hosts my media server application (starts with a P :)). This is EXACTLY how I have things set up now, except with no NFS share - it's all handled on one box, so the mergerfs is a 'local' directory passed through to docker from within itself.
I don't know if presenting this over NFS is going to be a serious bottleneck, or if there are certain settings to give to the rclone mount to mitigate this...or even to the individual applications (transmission, the *arr's, and pl...).
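If I go the NFS route, I'm assuming something like this - the subnet, hostname, and fsid are placeholders, and as far as I know a FUSE-backed filesystem (mergerfs/rclone) needs an explicit fsid= in the export to be exportable at all:

```
# /etc/exports on the downloader box
/gmedia 192.168.1.0/24(rw,sync,no_subtree_check,fsid=1)

# on the media server box
mount -t nfs downloader:/gmedia /gmedia
```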
This 'downloader docker' box will then run a series of scripts, probably via cron, utilizing rclone move with a bandwidth limit on the command: remote1 at 0100 to move ~600GB, then exit; remote2 at 0700 to move ~600GB, then exit; remote3 at 1200 to move ~600GB, then exit. Each of these scripts will probably have a bit of logic built in so it can run a little more frequently, check whether rclone is already running, and hold off until the next run if it is. I doubt I'll ever hit more than 2TB in a day.
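A sketch of the wrapper each cron entry would call (the upload.sh from the crontab above) - the remote name, paths, and limits are all placeholders, and the pgrep check is just one way to skip a run if a previous move is still going:

```bash
#!/usr/bin/env bash
# upload.sh <remote:path> - illustrative only
REMOTE="$1"

# Hold off if a previous rclone move is still running
if pgrep -f "rclone move" > /dev/null; then
    echo "$(date): rclone move already running, skipping" >> /opt/rclone/logs/upload.log
    exit 0
fi

/usr/bin/rclone move /glocal "$REMOTE" \
  --config /opt/rclone/rclone.conf \
  --log-file /opt/rclone/logs/upload.log \
  --log-level INFO \
  --min-age 15m \
  --bwlimit 70M \
  --max-transfer 600G \
  --drive-stop-on-upload-limit \
  --delete-empty-src-dirs
```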
It seems, at least from my perspective, that I have a really good handle on the move scripts and the scheduling around them, but not so much on the fact that one box will be serving the media to another box over NFS. The biggest problem I think I have is that the *arr's need to move files locally, and for them to do that, pl3x needs to be made aware of where they get moved to.
Either I mount /glocal and /gdrive both as NFS on the other box and mergerfs the NFS mounts there AS WELL AS on the host box...or I just serve /gmedia over NFS...but that might have some serious performance issues?
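For the first option, the media box would mount the two branches over NFS and run its own mergerfs on top of them; something like this in its fstab (hostname and mount points are assumed):

```
# /etc/fstab on the media server box
downloader:/glocal   /mnt/glocal   nfs   defaults,_netdev   0 0
downloader:/gdrive   /mnt/gdrive   nfs   defaults,_netdev   0 0

# local mergerfs over the two NFS mounts
/mnt/glocal:/mnt/gdrive   /gmedia   fuse.mergerfs   allow_other,use_ino,category.create=ff,minfreespace=0   0 0
```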
Solutions? Thoughts? Call me a crazy person?
Run the command 'rclone version' and share the full output of the command.
rclone v1.59.0-beta.6078.bab91e440
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-107-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.1
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Google Drive - Team Drives (Shared Drives) now.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
This is not directly applicable; however, this is the mount command I was testing last night.
```
/usr/bin/rclone mount tcrypt: /gdrive \
  --allow-other \
  --allow-root \
  --dir-cache-time 48h \
  --log-level INFO \
  --log-file /opt/rclone/logs/rclone.log \
  --config /opt/rclone/rclone.conf \
  --poll-interval 15s \
  --fast-list \
  --umask 002 \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 336h \
  --vfs-write-back 300s \
  --vfs-read-chunk-size-limit 500M \
  --cache-dir /p1r4t3/.cache/ \
  --allow-non-empty \
  --rc \
  --rc-addr 127.0.0.1:5572
```