I have a conceptual question:
I was playing with the official image and I don't understand why it works like that.
Rclone is a perfect fit to use with crontab, but as far as I understand, you can only run the container when you want to use it.
For example, if I want to copy files that I received on an FTP server to a Google Drive folder, with the Docker image I would have to add a crontab line on the host server running something like:
docker run -ti --name rclone --rm --volume ~/.config/rclone:/config/rclone --volume ~/data:/data:shared --user $(id -u):$(id -g) rclone/rclone:latest copy ftp:/folder1/file.txt gdrive:/newfolder/
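For instance, the host crontab entry might look like this (just a sketch; the schedule is an assumption, and `-ti` is dropped because cron jobs have no TTY):

```crontab
# Hypothetical: run the rclone container every 15 minutes from the host crontab
*/15 * * * * docker run --rm --volume ~/.config/rclone:/config/rclone --volume ~/data:/data --user $(id -u):$(id -g) rclone/rclone:latest copy ftp:/folder1/file.txt gdrive:/newfolder/
```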
So, what's the difference, or the benefit, of doing this instead of installing rclone directly on the host server to do the same thing as above?
I think a container should run 24/7, be host agnostic, and run on its own with its own config, cron, etc. to make it fully portable, using the host's storage only if necessary.
Am I missing something?
Am I misunderstanding some concepts?
I want to understand why.
You can do either. Some people prefer the unified approach offered by Docker containers, with a standard way of installing, upgrading, etc. It makes sense for running the rclone servers or mount.
However installing rclone on the server directly is fine too and that is what I would do in this case.
Thanks for your answer!
So now I understand how it works. I'm at peace with myself.
Do you know if you will create an official rclone Docker image that is fully agnostic, as I described above?
Do you have it on the roadmap or similar?
Containers are not meant to be fully fledged VMs. Being just wrappers around Linux namespaces, it's instead preferred that they run only one thing and run it well (as is the Unix philosophy). Adding duplicate functionality which is already present in the host just adds complexity and increases the size of the final image.
It's also generally preferred to have mountable storage, especially for an application's config and data folders, so they can be modified without needing to access the container directly and without needing basic editor programs inside it.
Containers are not meant to be fully fledged VMs.
I totally agree with you. In this case, the only service I want to run is rclone.
In my case, the servers that Docker/rclone will interact with are full of crontab jobs (AIX with 300+ jobs each). We don't want to add more jobs because it's a mess and very difficult to maintain.
We created a virtualized production Docker server to host all the containers, and we are gradually converting the current services that live in different VMs into containers too.
In my job, the production crontab is our best friend, and we decided to pull the info from the containers rather than push it to them. This is because of the crontab situation I mentioned before, and for simple backup reasons: by backing up only the container, we have everything covered.
So running crontab inside the container alongside one principal service (rclone in this case) is crucial.
Can you explain?
I mean an image that creates a container with rclone inside running 24/7, with its own crond.
With this, you would be able to make isolated rclone transfers (like in my case) between different services and destinations that have nothing to do with each other. You would also be able to create easy container backups and have them running in minutes on another server in case of disaster.
You would back up one thing (the container), instead of the container, rclone.conf, the rclone install on the server, the crontab rules, etc.
I don't know if I'm explaining myself well.
EDIT: for my needs, I created a Dockerfile and rclone.conf to recreate this. They are simple, but I can share them with you if you wish. Maybe they'll give you some fresh ideas.
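A minimal sketch of what I mean, assuming the Alpine-based official image (which ships busybox crond); the schedule, remotes, and paths are just examples:

```dockerfile
# Sketch: rclone + its own crond in one container (example schedule/remotes)
FROM rclone/rclone:latest

# One example job: copy from FTP to Google Drive every hour
RUN echo '0 * * * * rclone copy ftp:/folder1/ gdrive:/newfolder/ --config /config/rclone/rclone.conf' \
    > /etc/crontabs/root

# The config is still mounted from the host: -v ~/.config/rclone:/config/rclone
# Reset the image's rclone entrypoint and run crond in the foreground
ENTRYPOINT []
CMD ["crond", "-f", "-l", "2"]
```

You could then run it detached, something like `docker run -d --name rclone-cron -v ~/.config/rclone:/config/rclone -v ~/data:/data <image>`, so the container stays up 24/7 and cron fires the transfers inside it.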
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.