i love rclone. it is our primary backup tool at our company, for both cloud and local backups. i love how the directories look exactly the same in the cloud as they do locally, except they're now an encrypted goulash that a data breach couldn't make any sense of.
my question is, how would one go about automating this a little? right now i'm storing my scripts in a notebook, and at the end of the day i go in, copy each one into the cli, let it execute, and move on to the next one. we have several directories with 300k plus files, so this is rather time consuming.
i understand that i could create a task scheduler entry and have it execute the scripts at will. but what if one fails? if the scripts keep executing, the error and the reason for it will be 100's of miles up the cli interface. any suggestions? can it be output to a log? can it be emailed? i feel like rclone is intended as a base for one to build their own backup tool, and i'm just using it in its raw format. not sure if that's recommended, but i sure love the way it works.
that is what you need to do. rclone is not a backup tool, really just a file copy tool plus some other secondary features.
i wrote a 460+ line python script that creates a VSS snapshot, then runs rclone, fastcopy, and 7zip against that snapshot, and also backs up veeam files.
each program creates its own log files.
with rclone, after it has finished, the script will check a few things:
no log file?
is the length of the log file zero?
read the log line by line and look for certain strings such as 'ERROR'.
since a debug log can be 40MB+, the script will create an rclone.short.log and filter out everything but those interesting lines.
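those checks could be sketched roughly like this (a minimal sketch, not my actual script; the function name and the idea of keying on 'ERROR' are just illustrative):

```python
import os

def check_rclone_log(log_path, short_log_path):
    """Sanity-check an rclone log after a run; write a filtered short log."""
    # no log file at all? rclone probably never ran
    if not os.path.exists(log_path):
        return "missing"
    # log file exists but is empty? something went wrong very early
    if os.path.getsize(log_path) == 0:
        return "empty"
    errors = []
    with open(log_path, "r", errors="replace") as f:
        for line in f:
            # keep only the interesting lines out of a 40MB+ debug log
            if "ERROR" in line:
                errors.append(line)
    # rclone.short.log holds just the filtered lines for a quick look-over
    with open(short_log_path, "w") as f:
        f.writelines(errors)
    return "errors" if errors else "ok"
```

the return value is what you'd branch on to decide whether to email, alert, or just move on to the next job.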
once rclone no longer needs to scan large remotes for file changes (according to @ncw, that's coming up pretty soon), it will be almost a full featured backup tool. scanning remotes with several hundred thousand files gets you throttled, because it exceeds the IO operations they let you have at a time.
for now, is there an option, like a flag, i can use to just output the result of an rclone operation? rclone gives a summary at the end of each operation of how many files were checked, how many were transferred, and how many were deleted. if i could have that logged to a single file after every operation, so that i can open it, give the last 10 runs a quick look over, and make sure none failed, it would be real smooth. is that an option or would that require some coding?
as i have multiple independent backups using different applications to different destinations, i am not too worried about each and every single file. tho i do use an rclone debug log and scan it for errors each time.
about scanning remotes, i use filters to minimize rclone accessing the remote.
for example, i run an rclone sync on a daily schedule and i use this flag on the source.
only if a source file's modification timestamp is less than 25 hours old will rclone check the dest: --max-age=25h.
then as needed, i run the sync without that flag which would trigger a full scan of the remote.
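a minimal sketch of that daily-vs-full-scan toggle, wrapping the rclone cli from python (the remote and path names are made up; --max-age, --log-file, and --log-level are real rclone flags):

```python
import subprocess

def build_sync_cmd(src, dest, log_file, full_scan=False):
    """Build an rclone sync command; daily runs filter by age, full runs don't."""
    cmd = ["rclone", "sync", src, dest,
           "--log-file", log_file, "--log-level", "DEBUG"]
    if not full_scan:
        # daily run: only source files modified in the last 25 hours are
        # considered, so rclone barely has to touch the remote
        cmd.append("--max-age=25h")
    return cmd

# daily scheduled run
daily = build_sync_cmd("C:/data", "wasabi:bucket/data", "rclone.log")
# occasional full scan of the remote, as needed
full = build_sync_cmd("C:/data", "wasabi:bucket/data", "rclone.log",
                      full_scan=True)
# subprocess.run(daily, check=True)  # uncomment to actually execute
```

the same wrapper is where you'd bolt on the log-checking afterwards, since you already know which log file each run wrote to.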
in addition, i use wasabi, an s3 clone known for hot storage.
with wasabi, with my 1Gbps fiber optic connection, i do not get throttled and there are no api limits.
i keep the most recent backups there and over time move stuff to aws s3 deep glacier.