I need help creating versioning with crypt to dated folders

Heya......

This sounds like the perfect job for Restic. I don't have access to my configs right at this moment, but you would be able to set it up pretty easily. It takes a snapshot of the data at the OS level, which, once the first backup is done, is very quick. I back up around 30 GB of data from my VPS to pcloud every few hours and it only takes minutes.

This would achieve what you are looking for (dated backups, etc.) and you could even increase the frequency. I use it to back up a folder that contains applications, configs, etc., so if I have a catastrophic failure of my VPS I can just download a copy! I keep multiple hourly copies for a short time, then dailies and also weeklies. My retention periods are shortish, as that is my requirement, but they are easily tweaked to hold data for weeks, months, or years!
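Since I don't have my configs handy, treat this as a minimal sketch of the idea rather than my exact setup - the repository name, the paths, and the retention numbers are just placeholders:

    # one-time setup: create the encrypted repository (here via an rclone remote)
    restic -r rclone:pcloud:backups init

    # each run: snapshot the folder, then trim old snapshots to the retention policy
    restic -r rclone:pcloud:backups backup /srv/apps
    restic -r rclone:pcloud:backups forget --keep-hourly 6 --keep-daily 7 --keep-weekly 4 --prune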

You can mount the backups as a local mount if you need to restore an individual file, or you can just stream it all out! The data lives in an encrypted repository, so don't lose that key!
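Again just a sketch with placeholder paths, to picture the restore side:

    # browse the snapshots as a filesystem to grab an individual file...
    restic -r rclone:pcloud:backups mount /mnt/restic

    # ...or stream a whole snapshot back out
    restic -r rclone:pcloud:backups restore latest --target /srv/restore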

I am happy to do a quick write-up on what I do if people are interested? It has saved me a couple of times from ID-10-T errors!

nice script.

  1. there is a possible bug: you do not set the log filename, only the folder path. you have
    set logfile="F:\logfiles"
    and in the sync command you have
    --log-file="%logfile%"
    so there is no actual filename for the log.

  2. add a variable for the flags, e.g. set flags=--fast-list -P --drive-chunk-size 64M, and use %flags% in the sync command.

  3. i format the date as yyyymmdd, which makes it easier to sort in a file manager.

  4. add the date and time to the filename of the log file, so each run has its own log file (a rough sketch covering all four points is below).
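not the author's script, just a rough sketch of how those four suggestions could fit together in a batch wrapper - the source path, remote name, and log folder are placeholders:

    @echo off
    rem locale-independent yyyymmdd / hhmm stamps via WMIC
    for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do set ldt=%%I
    set datestamp=%ldt:~0,8%
    set timestamp=%ldt:~8,4%

    rem points 1 + 4: point --log-file at an actual file, named per run
    set logdir=F:\logfiles
    set logfile=%logdir%\backup_%datestamp%-%timestamp%.log

    rem point 2: keep the common flags in one variable
    set flags=--fast-list -P --drive-chunk-size 64M

    rem point 3: a yyyymmdd archive folder sorts correctly in a file manager
    rclone sync "C:\source" "remote:archive/%datestamp%.%timestamp%" %flags% --log-file="%logfile%"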

hello,
it seems that restic is only on version 0.9.5, which is beta, correct?

there are many other options that are more mature.

imho, i would not trust backing up my critical file servers to such software.

Good eye, my man (or monkey, as it may be) :wink:

  1. is a straight-up mistake - oopsie :slight_smile:
  2. I guess... nice to have maybe, and easy to add.
  3. Yeah, I did consider this briefly. Rethinking it, I think you are right that it would sort a long archive much better that way. Shame that it also makes the date kind of unintuitive. Maybe I should make it display ****y.**m.**d---**h-**m to retain good human readability (i.e. add a letter identifier to each part, reducing confusion)? You and I would not be stumped by this, but it's not very intuitive for less technical users if the order is suddenly the opposite of the usual one. Adding those letter identifiers should not impact sorting, as they would be static across the board.
  4. Yes, this is a no-brainer improvement.

@Charles_Hayes I will see about making an improvement to this within a day or two - then I will ping you to let you know. If you are already using the script, your old archive timestamps won't match the new ones (which is not an issue, aside from being messier). Since you won't have that many yet, you could probably just rename them manually if that bothers you...

well, even the monkey in me can parse 20191104.1231

the main thing is to have the folders and log files sorted consistently.
otherwise
03.01.2019 would be listed next to 03.01.2020 instead of next to the other 2019 folders, and that is confusing and not good human readability.

perhaps 2019.11.04_12.31
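a minimal sketch of building that stamp in a batch file, assuming wmic is available (the variable names are just examples):

    @echo off
    rem grab a locale-independent timestamp, then slice it into 2019.11.04_12.31 form
    for /f "tokens=2 delims==" %%I in ('wmic os get localdatetime /value') do set ldt=%%I
    set stamp=%ldt:~0,4%.%ldt:~4,2%.%ldt:~6,2%_%ldt:~8,2%.%ldt:~10,2%
    echo %stamp%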

Agreed - and that last format is probably a good compromise.

@Charles_Hayes As the testing end-user here, feel free to chime in with your opinion. Do you think that is easily readable?

Greetings all,

I just got a chance to "play" with this script.

I could not figure out how to get it to work without mounting the drive.
Maybe I wasn't holding my mouth right?

Anyway, with some light tweaking and some cutting and pasting, I got it to back up.

As far as human readability goes, the directory structure forming in the archive folder on my backup looks like this:

Th 11 ---> 07--->2019.20.0.0.1

Yeah, I can make out what it is and what it is doing.

I'm using a Windows server, not a Linux server.
That's the reason I didn't devote a lot of time or attention to restic to begin with, to be honest.
I've had some weird issues with Linux inside of Windows using the subsystem... and the machine I am backing up is relatively low on RAM... 4 GB total. So I don't want to toss a gig or so at a VM / subsystem on this machine; it just isn't the ideal solution in my case.

I honestly have 50 4 TB drives with "crap" I've accumulated over the years that I'd like to push up. I just know that, if Backblaze was any indication, it would take 3 years to transfer 20 TB... let alone 200 TB.

yeah, i agree about restic and the linux subsystem.

as a windows user, did i mention how to use VSS with rclone? the rough shape of it is below.
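i don't have my script in front of me, so this is just the shape of the technique, not a drop-in script - the folder names and the shadow-copy number are made up, and the exact commands may need tweaking for your windows edition:

    rem 1. create a shadow copy of the volume (elevated prompt required)
    wmic shadowcopy call create Volume='C:\'

    rem 2. note the shadow copy ID and device path that vssadmin reports,
    rem    e.g. \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12
    vssadmin list shadows

    rem 3. expose the frozen view as a normal folder (the trailing backslash matters)
    mklink /d C:\vss_c \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12\

    rem 4. back up from the shadow, so open/locked files come along too
    rclone sync C:\vss_c\data remote:backup

    rem 5. clean up
    rmdir C:\vss_c
    vssadmin delete shadows /shadow={ID-from-step-2}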

if you have any questions about veeam and rclone, let me know.
i use veeam agent and veeam backup and replication, free community edition.
the quickest way to wisdom is to learn from the mistakes of others!
save time - learn from my mistakes...

if you want fast uploads for a good price, check out wasabi - get a free trial and make sure to use their new us-east-2 location.
i have a fios gigabit connection and i get great upload speeds.


I had a "nightmare" scenario with Veeam, ages ago, where the hard drive you were restoring to, had to be equal to, or greater than, the storage of the original drive, not the original data.
I had 4, 1 TB drives, in raid 10, so 2TB total storage, 150 gigs of data... that was originally on a 4TB hard drive. Needless to say, the backup refused to push back to the "smaller" drive, even though the space was more than adequate. This was at like 3 am, after a malware infection at a business client, and my first taste of "Veeam" as I was doing this as a favor for a friend. Odd thing is, there's nobody with 4 or 8 tb hard drives laying around at 3am when you need them.

So, before we go into tutorials, does it still have this limitation?

well, i understand - veeam let you down.
in the past, i did pay for veeam and ran into a nightmare where i could not restore a server because of issues with gpt and mbr, but at that time i got great support and a workaround.

i deal mostly with virtual servers, and as such, your problem does not apply to me.

but if you have moved on to another rock-solid backup solution that has proved itself, please do share.

i have customers i have set up using cloudberry and acronis.

one killer feature of veeam is instant recovery, and i have been able to do instant recovery with rclone, which is a testament to rclone and its mount feature.
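roughly what i mean, with a placeholder remote name and drive letter - mount the cloud copy of the backup repository read-only and let veeam restore straight from it:

    rclone mount remote:veeam-repo V: --read-only --vfs-cache-mode full --vfs-read-chunk-size 64M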


I took a quick look at Wasabi.
6 bucks a month per TB. Not too bad.
I have a Spectrum connection.
20 Mbps up.
With network congestion, it's more like 13.
180 bucks a month just for internet, for 400/20.
The problem is on my end, as far as the small data pipe up goes.
It's a problem I don't really want to throw more money at, truth be told.
Spectrum is the only provider over 1 Mbps in my area, so it's a like-it-or-lump-it situation.

If they'd fix that ONE issue, I'd be good with it.
Macrium and Acronis will let you recover to any size drive, as long as the drive can hold the data. I wish / hope that one day Veeam will do the same.
I have yet to find a "perfect" solution for anything - thus the reason I came to rclone. Every other backup product failed to do what I needed it to do straight out of the box, which was to make a backup, using Google File Sync, to a Google Team Drive.
So, here we are. :smiley:

geez, that is a bummer. computational ghetto.
i need a beer.

at work and at home i have both optimum internet and verizon fios, with dual-WAN failover.

i know that some cloud providers let you mail them a physical hard drive, then they upload it to their cloud.
perhaps that might be an option for you.

Does Veeam Community Edition support doing "instant recovery" on an ESXi server?
I can do backups of the VMs' operating systems - say, Windows Server, Pi-hole, etc. - using Macrium or Acronis. Can you do a "bare metal" restore with Veeam to restore the whole ESXi server, VMs and all?

well, if you restore to a virtual machine, you can size that virtual hard drive as large as you want.

have you checked out the free, awesome, super-duper microsoft windows hyper-v server?
run your servers as virtual machines!

free as in beer!

i know nothing about vmware and see no reason for it.
i got the free hyper-v server from microsoft; not sure what vmware could do for me, except cost me a lot of money.

i do instant recovery on hyper-v, and i am sure you can do that with vmware, for free.

I've played with Hyper-V.
I've been using VMware on my home systems, and VirtualBox.
I like VMs... and have Pi-hole in a Hyper-V VM on my gaming machine.
I've played with the Windows Subsystem for Linux, mainly using it for wget, and then I found that, because of the way it "locks" the Linux file system, you can't "easily" move or delete the files it saves in your home folder. Nothing like an immovable clump of 500 GB worth of PDFs from the eye to slow the heck out of my 1 TB SSD. lol.
Live and learn. :smiley: