Best setup to upload large Veeam files to a cloud

What is the problem you are having with rclone?

I am trying to figure out the best method to encrypt and upload >100 GB Veeam backup files. These files will change over time. I can't find a way to split them up without extra tools in between.

The 1st issue I have is that rclone will compute MD5 before uploading. This essentially doubles my upload time, because it will have to read the source file twice and that's my bottleneck.

The 2nd issue is syncing only changes. As far as I understand this is simply not a feature. Is that correct?

I tried PC->chunker->crypt->remote and PC->crypt->remote chains so far; both behave the same regarding MD5. I just don't understand this well enough.
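
For reference, the chunker->crypt chain I experimented with looks roughly like this in the config; the remote names, chunk size and hash setting here are placeholders rather than my exact setup:

[jotta]
type = jottacloud
# token etc. as created by rclone config

[secret]
type = crypt
remote = jotta:veeam
password = XXX
password2 = XXX

[chunked]
type = chunker
remote = secret:
chunk_size = 2G
hash_type = md5

Uploads then go through the outermost remote, e.g. rclone copy D:\VeeamBackups chunked:backup.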

I also tried mount, but found that it requires a big cache and has known issues when mounted by the SYSTEM user.

I could encrypt the backup with Veeam and then only encrypt file and folder names with rclone, if that would help.
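
If it helps, rclone versions newer than the one listed below have a crypt option called no_data_encryption that leaves file contents alone and only encrypts names; a names-only remote would then look roughly like this (remote and folder names are placeholders):

[namesonly]
type = crypt
remote = jotta:veeam
filename_encryption = standard
directory_name_encryption = true
no_data_encryption = true
password = XXX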

I will gladly provide exact commands that I tried, config and logs, if necessary.

What is your rclone version (output from rclone version)

rclone: Version "v1.53.4"

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10 and/or Windows Server 2019

Which cloud storage system are you using? (eg Google Drive)

Jottacloud.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste here

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste log here

That's how rclone works, as it needs to calculate the MD5SUM. If you want to skip it, that's always an option.

I think you mean block-level changes? Rclone doesn't work at the block level and only transfers complete files.
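
To illustrate, these are the whole-file comparison knobs (paths and remote name are just examples); none of them does delta transfers, they only change how rclone decides whether a file needs to be transferred or verified:

:: default: compare size + modification time
rclone copy D:\VeeamBackups jotta:veeam
:: compare size only
rclone copy D:\VeeamBackups jotta:veeam --size-only
:: compare checksums instead of modtime
rclone copy D:\VeeamBackups jotta:veeam --checksum
:: skip the post-copy checksum verification
rclone copy D:\VeeamBackups jotta:veeam --ignore-checksum

Keep in mind that a backend which requires a hash at upload time may still read the file to calculate it regardless of these flags.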

hello and welcome to the forum,

rclone and veeam are a great combination.

i use rclone to upload veeam backup files to aws and wasabi.

here is how i do that.

  • with aws, i upload to deep glacier at $1.00/TB/month
  • with wasabi, an s3 hot storage clone, i keep the most recent full backup and its corresponding incrementals.
  • i enable versioning for both locations.

i have a local backup server that has all the veeam backup files.
i use VSS snapshots and rclone to upload those files to the cloud.
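
here is a rough sketch of the vss part - my real script is python, and the volume, alias, paths and remote below are made up, not my actual setup.

snap.dsh (run with: diskshadow /s snap.dsh)
set context persistent nowriters
add volume D: alias veeamsnap
create
expose %veeamsnap% X:

then upload from the snapshot and clean up:
rclone copy X:\VeeamBackups wasabi01:veeam-backups --log-level=DEBUG --log-file=C:\data\rclone\logs\vss\rclone.log
diskshadow /s cleanup.dsh

cleanup.dsh
delete shadows exposed X: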

if you enable veeam encryption, then the files are already encrypted, so i do not use a rclone crypt remote.

rclone uploads files, not blocks, so if a veeam file changed, rclone would have to re-upload the entire file.
so i set up veeam to never modify existing backup files. i use the forever forward full/incremental method:
one weekly full and daily incrementals.

yes, that is critical: to ensure that veeam backup files are uploaded correctly and, more importantly, that when a disaster hits and you need to download the veeam files, they download without corruption.
my backup server uses the awesome, FREE windows server 2019 hyper-v edition, using REFS, the windows version of ZFS.
the storage is the equivalent of soft raid5 using slow spinning disks; add to that that REFS checksums its own files,
and now add rclone check-summing, and it does take a long time. but it runs in the background, so it really does not matter.
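
to verify, rclone check can compare the local files against what is in the cloud, something like this (paths and remote are just an example):

rclone check X:\VeeamBackups wasabi01:veeam-backups --one-way --log-level=INFO --log-file=C:\data\rclone\logs\check\rclone.log

by default it compares sizes and hashes; add --download if you want the data actually re-read from the remote instead of trusting the stored hashes.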

the chunker is beta; i would never use it for critical veeam backup files.

i would not use a mount; there are good reasons not to.
but if you want to use a mount, there is no need to use a cache at all.

i mount as system user on a daily basis on multiple servers and computers.
what known issues?
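
for what it is worth, one simple way to run a mount as SYSTEM is a scheduled task that starts at boot, something like this (task name, remote and drive letter are just examples, and winfsp has to be installed):

schtasks /Create /TN rclone_mount /RU SYSTEM /SC ONSTART /TR "C:\data\rclone\scripts\rclone.exe mount jotta:veeam X: --log-file=C:\data\rclone\logs\mount\rclone.log"

keep in mind that, as far as i know, a drive letter mounted by SYSTEM is not visible to other logged-in users.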

i use two rclone commands.
one to upload the .vbm and one to upload the .vib/.vbk

C:\data\rclone\scripts\rclone.exe  copy  "b:\mount\rcloner\en07_aws01_iam_vserver03_veeam_br_en07_vbm_20210122.134541\BR_EN07"     "aws01_iam_vserver03_veeam_br_en07:vserver03.veeam.br.en07/en07/rclone/backup"  --stats=0 --fast-list --bind=192.168.62.233 --include=*.vbm   --log-level=DEBUG --log-file=C:\data\rclone\logs\en07_aws01_iam_vserver03_veeam_br_en07_vbm\20210122.134541\rclone.log
C:\data\rclone\scripts\rclone.exe  copy  "b:\mount\rcloner\en07_aws01_iam_vserver03_veeam_br_en07_20210122.134441\BR_EN07"     "aws01_iam_vserver03_veeam_br_en07:vserver03.veeam.br.en07/en07/rclone/backup"  --immutable --stats=0 --fast-list --bind=192.168.62.233 --s3-chunk-size=256M --s3-upload-concurrency=8 --exclude=*.vbm   --log-level=DEBUG --log-file=C:\data\rclone\logs\en07_aws01_iam_vserver03_veeam_br_en07\20210122.134441\rclone.log

let me know if you have questions,


hi asdffdsa,
Thank you, that's a great write-up of how you do it. Thanks for sharing.
The MD5 + upload duration becomes a problem if I try to merge the oldest increment into the full backup, because then I have a huge file that changes often. Doing a weekly full + increments would increase my backup size a lot, but it looks like there might not be any way around it.

I am experimenting anyway, so why not try chunker while I'm at it? :slight_smile:

Regarding mount, as far as I understand, cache requirements depend on the remote's capabilities, but I see no benefit in it anyway.

Regarding issues with SYSTEM and mount, there is this one: https://github.com/rclone/rclone/issues/2187. When I tried creating a Veeam backup repository on a mounted disk, 4 New Folders would appear and Veeam would throw weird errors. Also, for some reason, large files were not uploaded. Probably I did something wrong, but I did not look deeper into it.

Interesting approach with the two commands. The .vbm is chain metadata that changes with every increment, and the .vbk/.vib are backup files that do not change and are instead created/deleted. Correct?

Does your setup protect you from ransomware?

with some servers i also do a monthly full and daily incrementals.
it all depends on your use-case.
for one server, i back up the operating system, then do a file copy of shared folders.

thanks

i tried to have a veeam backup repository on a rclone mount.
it did not work well, and it was too fragile to trust for critical backup files.

as for ransomware, sure, i would think that my overall solution protects against that.

  • i wrote 400+ lines of python code to run all my backup needs.
    it uses a combination of the Volume Snapshot Service (VSS), rclone, fastcopy, and 7zip.
  • using rclone copy --immutable
  • using rclone sync with --backup-dir with timestamps to get forever forward incrementals (there is a stripped-down batch sketch after this list), for example:
C:\data\rclone\scripts\rclone.exe  sync  "b:\mount\rcloner\data_wasabi_sync+check_20210117.085703\data"     "wasabicrypten07data:en07data/data/rclone/backup"  --stats=0 --fast-list --progress --exclude-from=c:\data\rclone\scripts\rr\rr_data_wasabi\exclude.txt --backup-dir=wasabicrypten07data:en07data/data/rclone/archive/20210117.085703   --log-level=DEBUG --log-file=C:\data\rclone\logs\data_wasabi_sync+check\20210117.085703\rclone.log
  • using aws deep glacier for cheap storage, using service files, not id/secrets, and those service files are very limited: just uploading new files, with no way to delete.
  • requiring MFA delete and using versioning.
  • using wasabi hot storage, for most recent veeam backup files.
  • i have a blu-ray burner with 100GB discs, and cheap 4TB hard drives, to be taken off-site.
  • there is no end to it all.
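
here is the stripped-down batch sketch of the timestamped --backup-dir idea mentioned in the list above - my real script is python, and the local source path and log path are made up:

:: build a timestamp like 20210117.085703 and pass it to --backup-dir
for /f %%i in ('powershell -NoProfile -Command Get-Date -Format yyyyMMdd.HHmmss') do set STAMP=%%i
C:\data\rclone\scripts\rclone.exe sync "D:\data" "wasabicrypten07data:en07data/data/rclone/backup" --backup-dir=wasabicrypten07data:en07data/data/rclone/archive/%STAMP% --log-level=DEBUG --log-file=C:\data\rclone\logs\data_wasabi_sync_%STAMP%.log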
