it is not recommended to install vbar on the same machine that you are backing up.
you should install the agent instead.
Yeah. That's where I think I funked up.
So, can the backup and recovery software be installed on a regular Windows 10 Enterprise machine that is domain joined, and still function properly as a "backup server"?
Otherwise, I'm just going to do the agent install on the VM and see where I get with that.
yes, you can install vbar on w10.ent.
but to keep it simple, you can install the agent on the ws.2019 guest and back up to the synology.
quick and easy.
Good deal. I've got a VMware update to install... just released today, and then I'll reinstall the agent. Thanks for the advice and help.
since you plan to use thestigma's archivesync script to get the veeam backups to the cloud:
- do not enable 'de-fragment and compact full backup file', as that will trigger a change to the latest full backup and archivesync will have to upload that full backup again.
- do not set a schedule inside the agent. use task scheduler to create a task instead (a schtasks example is sketched below).
have that task run a .cmd batch file like so:
"C:\Program Files\Veeam\Endpoint Backup\Veeam.EndPoint.Manager.exe" /backup
C:\data\rclone\scripts\archivesync\archivesync.cmd
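if you want a starting point for the scheduled task itself, something like this from an elevated command prompt should do it. the task name, start time and .cmd path are only placeholders, so change them to match your setup:
schtasks /Create /TN "Veeam Nightly Backup" /TR "C:\data\rclone\scripts\nightly-backup.cmd" /SC DAILY /ST 01:00 /RU SYSTEM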
if you are feeling adventurous, you can enable VSS support for rclone.exe as per my wiki
Greetings all, and I hate to bother you again, thestigma, but does the --exclude #recycle have to be placed in any particular place within the code for it to work properly?
For example...
%rclonepath%\rclone sync "%sourcepath%" "%destpath%" %flags% --backup-dir="%archivepath%%date%" --create-empty-src-dirs --log-file="%logfilepath%%date%.log" --log-level=%loglevel%
echo
Does the --exclude # go before the sourcepath variable, or after the flags variable?
Thanks in advance for your help.
add it to the flags variable.
also, you should move --create-empty-src-dirs into the flags variable as well:
set "flags=--fast-list --progress --drive-chunk-size 64M --create-empty-src-dirs --exclude #recycle"
Thanks for the quick reply.
I've done as you've said, and I'm going to see if it makes any difference with the backup. I'm sure it will.
A nice clean log file, with no errors, thanks to your help.
2019/12/07 12:52:17 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 54171 / 54171, 100%
Transferred: 0 / 0, -
Elapsed time: 200ms
sure, glad to help
Generally regarding flags, the order does not matter. Not relative to other flags, and not relative to the main parameters. They just need to be somewhere in there, so it's perfectly fine to put any and all flags in the flags variable here.
The only exception is if you use multiples of the same flag, which can apply to things like excludes and includes. Then the order may matter when there is overlap between the two.
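For example (made-up patterns, just to illustrate the point), these two are not equivalent, because rclone processes filter rules in the order they are given and the first rule that matches a file wins:
%rclonepath%\rclone sync "%sourcepath%" "%destpath%" --include "*.txt" --exclude "#recycle/**"
%rclonepath%\rclone sync "%sourcepath%" "%destpath%" --exclude "#recycle/**" --include "*.txt"
In the first case a .txt file sitting inside a #recycle folder would still be transferred; in the second it would be skipped.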
Just a quick question, before the thread dies or gets archived.
Can I use an IP address for the synology, say \\10.0.0.245\stuff, instead of using a mapped drive letter, and get the same results?
I had a weird DNS issue with my synology where it would prompt me to log in unless I went straight to the IP of the device instead of \\synology\stuff. I had to remap it to \\10.0.0.245\stuff, and then it worked without fail. It's probably something in Pi-hole that's causing the issue, but for now I can work around it.
- you can modify the hosts file on your computer to override the lame pi hole (example below).
- about the remap question, the answer is yes: a UNC path with the IP works just the same as a mapped drive letter.
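for the hosts file route, an entry like this in C:\Windows\System32\drivers\etc\hosts (edit it as administrator) pins the name to the ip, assuming your box answers to the name 'synology':
10.0.0.245    synology
after that, \\synology\stuff resolves locally and never touches the pi hole.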
Thanks for the reply.
I found a youtube video on how to create a "local host" entry in pihole, to resolve my statically assigned devices on the local lan.
Okay, the last, and probably most important question.
How can I / we restore point-in-time backups via script?
Like, say, restore files from three days ago back to the source, from rclone.
All good backup programs need a good way to easily restore, right?
The best way to test a backup, is to do a file restore, right?
it depends on the script you used to back up the files.
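with the archivesync-style script above, the current state of the files lives in the dest path, and anything that later got changed or deleted is moved into the dated --backup-dir folders. so a rough, untested sketch of a point-in-time restore would be: copy the current backup down, then copy the dated archive folders on top of it, newest first, back to the date you want, so the oldest copy of each file wins. the restore target and the date folder name below are placeholders:
rclone copy "%destpath%" "C:\restore" --progress
rclone copy "%archivepath%<date-folder>" "C:\restore" --progress
for a single file from three days ago, just copy it straight out of the archive folder for the run that overwrote it.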
I'm going to jump in super late to add that I have used borgbackup with rclone to significant success. The solution you are trying to build sounds exactly like borg. Borg is essentially a giant Python script (it is written in Python) that does exactly what you're looking for and is maintained.
Both borg and rclone take great care to preserve data integrity, so even interrupted transfers have resumed quite well for me. I know there are other backup utilities that even use rclone as a backend, but I have not tried these, as Borg is simple, fast, and does what I want it to do (which sounds like basically the same thing you are doing). It will work via an ssh tunnel directly to a server (such as your Synology), and it will work with rclone mount. You can even combine the two and rclone mount a volume on your server, haha.
Based on your original question, I recommend against going Local->Server->Cloud, because any error will propagate through the chain. Borg will allow you to run two parallel backups: Local->Server and then Local->Cloud (via rclone mount).
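As a rough sketch (the repo paths, user name, and mount point are just examples, and both repos would need a borg init first), the two parallel backups could look like this from WSL or any Linux machine:
borg create ssh://backupuser@10.0.0.245/volume1/borg-repo::'{hostname}-{now}' /mnt/c/Users/you/Documents
borg create /mnt/gdrive/borg-repo::'{hostname}-{now}' /mnt/c/Users/you/Documents
where /mnt/gdrive is an rclone mount of your cloud remote.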
Borg will handle versioning and deduplication for you. It even includes the ability to borg mount [backup date], just like rclone mount. You are presented with a local volume that looks like a regular disk, and you can drag and drop if you want to restore single files or directories that way. Or you can simply say borg extract [backup date] and it will unpack and copy the entire backup for you, to whatever destination you desire (if you are doing a full restore, for example).
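For what it's worth, the restore side looks roughly like this (the repo path and archive name are placeholders; borg list shows you the real archive names):
borg list ssh://backupuser@10.0.0.245/volume1/borg-repo
borg mount ssh://backupuser@10.0.0.245/volume1/borg-repo::laptop-2019-12-04T02:00:00 /mnt/restore
borg umount /mnt/restore
borg extract ssh://backupuser@10.0.0.245/volume1/borg-repo::laptop-2019-12-04T02:00:00
Note that borg extract unpacks into the current working directory, so cd to wherever you want the files first.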
I'm not going to take a position on whether or not Borg will work better than the script mentioned above for your particular use case. I do think, however, that the ability to do borg mount will greatly simplify restores for you from incremental backups, vis-a-vis your last question.
To run through your initial questions--
Regarding mounting a remote drive on your Synology: borg will back up over ssh, and there is a pre-made package for borg at synocommunity.com.
At whatever interval you desire, borg will establish an ssh tunnel, create a new incremental backup, check for integrity, and then close the tunnel.
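The integrity check can also be run on its own whenever you like, for example right after a backup (repo path is a placeholder again):
borg check --last 1 ssh://backupuser@10.0.0.245/volume1/borg-repo
which verifies the repository and the most recent archive.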
You mentioned time to run a full backup -- backups are de-duplicated, which means that your machine essentially does an rclone check against all the files locally and on the server. For everything with hashes that match locally and remotely, borg reuses the server-side copies of the files which haven't changed, then uploads only what is different. (It's a little more complicated than that, as it doesn't store the files twice -- it uses little pointers that say 'file x hasn't changed since [date]', but that is completely transparent to the user.)
After your original backup, most subsequent backups complete in a fraction of the time, because you are not re-uploading your entire user directory with every backup. This also makes it completely feasible to use on a mobile machine, because 1. the ssh tunnel and 2. tiny incremental backups will work over cellular hotspots, cafe wifi, even on an airplane if you must. Even so, running borg mount will display all of your files as they existed at the time of the backup, via the new uploads plus pointers to unchanged files.
There are probably several benefits to a much more developed backup solution for sure. This is just a pretty basic script.
That said, there are some performance and reliability benefits to not having to run the data through a mount.
it depends on what you want to back up.
if the OP wants to back up a windows server, veeam is the way to go, as it can do a bare metal recovery of the server operating system and files.
borg is not usable for windows computers.
if the OP wants to back up the synology box, the box itself has multiple ways to back up to the cloud, and that software is written and maintained by synology.
It runs under WSL
There are, but I think they are outweighed by de-duplication.
That said, I think the more rclone based backup solutions that are out there, the better. So thanks for all your work on this