Rclone sync C:\ drive I/O errors and "Access is denied"

Okay, so I went back and looked over it again, along with the commands you listed. Correct me if I'm wrong, but you're technically using a shadow drive/mount for the source/dest, which are the same files, and then only using this command:
--backup-dir=wasabi_en07datacrypt:en07data/kdbx/rclone/archive/20210522.122911
as a "hey, merge me only, don't worry about the destination", since you remove the shadow mount afterwards (DeleteVShadowAfter=cmd /c rmdir b:). So it doesn't have to copy/sync the old files, only the new ones, destined for the --backup-dir? After it runs through a checker on a second pass to make sure everything looks right.

It's showing "will sync /path/to/local to remote:current, but for any files which would have been updated or deleted will be stored in remote:old." So would those be the only ones you would tell it to grab, to then be allowed to move over to hot storage? Or do I have it backwards?

for each sync, remote:current will be a mirror of the source, older files will be in timestamped subfolders of remote:archive


this is a simplified version of the commands

this creates the vss snapshot
vshadow.exe -nw -script=setvar-vshadow.cmd -exec=exec.cmd c:

and this is the command that creates the mount point
mklink /D b:\mount \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy797\

so a dir b:\mount would display the same output as dir c:\

and since i want to back up c:\data\keepass\database
the correct location inside the vss mount point is
b:\mount + \data\keepass\database
which inside the vss mountpoint is
b:\mount\data\keepass\database

thus this command
rclone.exe sync "b:\mount\data\keepass\database" "remote:backup" --backup-dir=remote:archive


and these are the full commands

C:\data\rclone\scripts\vshadow.exe -nw -script=C:\data\rclone\logs\kdbx_files_wasabi_en07\20210522.122911\setvar-vshadow.cmd -exec=C:\data\rclone\logs\kdbx_files_wasabi_en07\20210522.122911\exec.cmd c:

mklink /D b:\mount\rcloner\kdbx_files_wasabi_en07_20210522.122911 \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy797\

rclone.exe sync "b:\mount\rcloner\kdbx_files_wasabi_en07_20210522.122911\data\keepass\database" "wasabi_en07datacrypt:en07data/kdbx/rclone/backup" --backup-dir=wasabi_en07datacrypt:en07data/kdbx/rclone/archive/20210522.122911 --stats=0 --fast-list --log-level=DEBUG --log-file=C:\data\rclone\logs\kdbx_files_wasabi_en07\20210522.122911\rclone.log --config=C:\data\rclone\scripts\wasabi.conf

Right. So after looking into this a bit more, the only thing I'm confused about is the --backup-dir=remote:archive. Is that so they can be 7z'ed and then chunk-uploaded later instead of dealing with the small files? To set them aside as $old and time-stamped (incremental backup)? That's really the only purpose I can think of for having it there, at least for a simple use case. So tell me if I'm wrong about that. But if I'm half right, then would this be a valid use case? Minus the cold storage for now.

For the Veeam, I looked it up and it's similar to the plucking feature of Synology, with the same integration as a SmartDeploy server. Not sure if this is out of date or not, but it explains a decent use case, similar to what I see people use Synology Drive for: Video

Veeam has more of a use case for me though, as you don't need a second box to run the recovery/restore image from. You can strip the drivers and mount it on any new hardware (from what I saw at least), so you can go from Intel to AMD with no problem. I'd just have to run my preinstall script once for applications from winget, Ninite, etc., install the base files to save time, back that up as a main Veeam restore point, and throw it in my non-crypt mount so I wouldn't need rclone to decrypt it, since it would already be encrypted by Veeam. Then I'd just deploy that for fresh installs, update it for new major Windows updates every few months, and run the file restore from the most recent shadow copy of C: that's in the rclone crypted mount (remote:backup). Or for my case it would be:

Pushd "F:\.Files\rclone\Users\PC\.rclone"
test&cls
ECHO Restoring From Backup Please Wait...
rclone copy "Backups:/Win10/Win10 Drive/" C:\ -L -P --ignore-case --log-info INFO --log-file "F:\.Files\rclone\Log Files\Main-CloudRestore.log" --buffer-size 256m --log-info INFO --log-file "F:\.files\rclone\Log Files\LocalMainRestore.log"
TIMEOUT /T 5 /NOBREAK
rclone copy "Backups:/Win10/Win10 Users/" C:\Users\ --include-from "F:\.Files\rclone\Users\PC\.rclone\Directory\C-Drive\Users-CloudRestore.txt" -L -P --ignore-case --buffer-size 256m --log-info INFO --log-file "F:\.files\rclone\Log Files\LocalMediaRestore.log"

(usually I let it loop twice to make sure nothing got missed/mismatched)

That's about as simple as it would be for a reasonable use case (at least for me); it's probably a lot simpler, but I'm just overexplaining. I understand you use the VSS to make a working copy/snapshot of the local disk to avoid any "working/locked" files, so rclone can successfully back them up to the backup directory (the main issue I was having, causing the I/O errors in the OP). And then it dumps the deleted/overwritten files into the --backup-dir, most likely as a labeled restore point of pre-backup changes, just in case the backup fails to copy over, so you can move them back to fix any corruption or API-caused errors to the backup itself? (I know you mentioned the Wasabi API isn't rate-limited as harshly, or even at all.)

this is for incremental backups. let's use this command, without a vss snapshot, as an example
rclone.exe sync c:\source remote:backup --backup-dir=remote:archive

now, c:\source\file.txt has been modified and needs to be copied to remote:backup; however, file.txt already exists in remote:backup

  • without --backup-dir:
    rclone will copy and overwrite remote:backup/file.txt
  • with --backup-dir:
    1. move remote:backup/file.txt to remote:archive/file.txt
    2. copy c:\source\file.txt to remote:backup/file.txt

and if you add a timestamp, e.g. remote:archive/20210522.122911

  1. move remote:backup/file.txt to remote:archive/20210522.122911/file.txt
  2. copy c:\source\file.txt to remote:backup/file.txt

in effect, forever forward incremental backups.
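for example, a sketch of generating that timestamp in a batch script; i am using powershell for the date formatting to avoid locale issues with %DATE%, and this is an assumption, not my exact script

rem build a yyyymmdd.hhmmss timestamp, then sync with a timestamped --backup-dir
for /f %%i in ('powershell -command "Get-Date -Format yyyyMMdd.HHmmss"') do set TIMESTAMP=%%i
rclone.exe sync c:\source remote:backup --backup-dir=remote:archive/%TIMESTAMP%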

the problem with a synbox is poor tech support, and i had to get replacement parts.
in a business environment, that is a no-go for me.
so i buy servers from dell, with next-day on-site tech support to replace any parts,
and use veeam, which has the best tech support i have ever used.
for my personal usage, i use the free version of veeam backup and replication on my home server, which runs the free edition of windows 2019 server, and the free veeam agent on the windows computers.

veeam agent can restore to dissimilar hardware, including injecting hardware drivers.
and the boot disc uses windows PE, a stripped-down version of the windows os.

since each time i run rclone, there is a timestamped log file, my script scans the log file for error messages and then sends an email to me
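something like this, as a rough sketch; the findstr pattern, paths, smtp server, and addresses are placeholders, not my actual script

rem scan the latest rclone log for errors and email me if any are found
findstr /i /c:"ERROR" "C:\data\rclone\logs\20210522.122911\rclone.log" >nul
if %errorlevel% equ 0 (
    powershell -command "Send-MailMessage -From 'backup@example.com' -To 'me@example.com' -Subject 'rclone errors' -Body 'check the log' -SmtpServer 'smtp.example.com'"
)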

For Veeam, you're using Agent + Backup & Replication Community Edition?

I looked into them both; however, sadly Veeam doesn't support gdrive/s3/wasabi/dropbox natively. Did you mount it as a network share? Or just put it in c:/data and let rclone copy them over, with a local backup of it on your home server in case of restoring? My bare-metal backup would be far above what I can throw on a USB drive, as it's around 500-600 GB for my C drive alone. I know that because it uses Windows PE, I can just grab it off a local hard drive that's mounted, as cloud storage wouldn't be able to be mounted in that environment (to the best of my knowledge at least; haven't tried it yet).

I've used Windows PE before, but I usually use an answer file to automate the actual install setup later on.

I see. So say, for example (more of a theory/rant), below:
rclone sync c:\data\brave backups:backup/data --backup-dir=backups:archive
data\brave\profile.ini was just updated because you changed a setting. The old profile.ini in backups:backup/data would then be moved to backups:archive/timestamp/data/brave/profile.ini, and only after that move would it be overwritten by the c:\data\brave\profile.ini in backups:backup/data? To keep a "history", but where only the files that changed end up in that one timestamp folder.
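A way to sanity-check that without touching anything (assuming those same remote names; the timestamp here is just an example) would presumably be a dry run:

rclone sync c:\data\brave backups:backup/data --backup-dir=backups:archive/20210522.122911 --dry-run -v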

On to the theory, well, more of a rant I guess; trying to pick your brain on it for a more... business-environment solution.

Do you keep a "full backup" of that week for retention in that same archive, or is there no need for it? Just to make sure it only goes back a week, so you're not restoring more than is needed for that one-week retention if you have some data loss? So for me it would be: Sunday = full, Mon-Fri = incremental, Sat = differential of Mon-Sat.

It would hold 4 "full" backups per month. Just in case your full backup is 2 months old, which would require you to restore 60 days of incremental backups, couldn't you do it as differentials, so you would only have to restore the full + 1 differential (the latest) on top of it? IF you can set it up that way, by adjusting the script to take from timestamp start to timestamp end in 6-day blocks and merge those into a differential (copying Mon-Fri first, replacing each new file over top for the most recent version, with unchanged files from Monday staying).
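Purely hypothetical, but that merge could presumably look something like this, copying each day's archive folder in date order so newer files replace older ones (folder names are made up):

rem hypothetical differential merge from timestamped incremental folders
for %%d in (20210517 20210518 20210519 20210520 20210521) do (
    rclone copy backups:archive/%%d backups:differential/20210522
)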

correct

the paid-for versions of vbar do support cloud, but i do not use that.
vbar will use a local folder to store the backup files.
after veeam has finished the backup, it will run my script.
my script will

  1. create a vss mountpoint
  2. rclone copy --immutable from the mountpoint to the cloud.
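roughly like this, as a sketch; the drive letter, paths, and remote name are placeholders, not my exact script

rem post-backup script: snapshot the drive holding the veeam files, then upload
vshadow.exe -nw -script=setvar.cmd -exec=upload.cmd d:

rem upload.cmd, run while the snapshot exists:
call setvar.cmd
mklink /D b:\mount %SHADOW_DEVICE_1%\
rclone copy "b:\mount\veeam" "remote:veeam" --immutable --log-level=INFO --log-file=rclone.log
rmdir b:\mount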

correct

in my case, there are two corporate sites connected over a vpn.
each site has its own vbar server.
each vbar server will back up to its own repository and then back up to the other repository over the vpn.
latest set of full and incrementals goes to wasabi.
those files get deleted after 30 days, the minimum retention period at wasabi.

older backup files get archived to aws s3 deep glacier.

as for the .zip files, some of them get burned to blu-ray discs and taken off-site.

i do not use differential backups.
