Backup directly to gdrive mount using Veeam Agent

What is the problem you are having with rclone?

Not really a problem; it seems to work in the first couple of tests. One of the backup files, however, initially failed to upload due to "context canceled". My thought was that it was a metadata file that changed after the upload was already queued. After a while it uploaded successfully, though. I would just like some input on whether this is a bad idea or whether there is a better way of doing it.

I am afraid I risk breaking the backup chain in case of network errors. Or maybe the cache will prevent that? My other idea was to back up to local disk and then do an rclone copy to Google Drive, but I would like to back up directly to the mount if possible.
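For reference, the local-then-copy alternative would look something like this (just a sketch; the local path is a placeholder, and gcrypt_direct: is the same crypt remote used in the mount below):

rclone copy /path/to/local/backups gcrypt_direct:Backup --log-file /opt/rclone/logs/copy.log -v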

Thanks

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.19.0-43-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Not a specific command, but I will paste my systemd mount service:

[Unit]
Description=RClone Service
Wants=network-online.target
After=network-online.target

[Service]
#Type=notify
#Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf
#KillMode=none
#RestartSec=5
#ExecStart=/usr/bin/rclone mount gcrypt_direct: /data1/Allt --log-file /opt/rclone/logs/gcrypt_direct.log --allow-other
#ExecStop=/bin/fusermount -uz /data1/Allt
#Restart=on-failure
#User=olympen
#Group=olympen
ExecStart=/usr/bin/rclone mount \
  --config=/opt/rclone/rclone.conf \
  --log-level=INFO \
  --bwlimit 25M:off \
  --log-file=/opt/rclone/logs/rclone-mount.log \
  --umask=002 \
  --gid=1000 \
  --uid=1000 \
  --allow-other \
  --timeout=1h \
  --poll-interval=15s \
  --dir-cache-time=1000h \
  --cache-dir=/media/ssd/rclone_cache \
  --vfs-cache-mode=full \
  --vfs-cache-max-size=1500G \
  --vfs-cache-max-age=12h \
  --vfs-read-ahead=2G \
  gcrypt_direct: /data1/Allt
ExecStop=/bin/fusermount -uz /data1/Allt
Restart=on-abort
RestartSec=5
StartLimitInterval=60s
StartLimitBurst=3

[Install]
WantedBy=multi-user.target
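For completeness: assuming the unit file is saved as /etc/systemd/system/rclone-mount.service (the filename is my own choice), I enable and start it with the standard systemd commands:

sudo systemctl daemon-reload
sudo systemctl enable --now rclone-mount.service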

The rclone config contents with secrets removed.

[gdrive]
type = drive
token = 
root_folder_id = root
client_id = 
client_secret = 
team_drive = 

[gcrypt_direct]
type = crypt
remote = gdrive:/gdrive/crypt
filename_encryption = obfuscate
directory_name_encryption = true
password = 
password2 = 

A log from the command with the -vv flag

2023/06/11 01:41:56 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: queuing for upload in 5s
2023/06/11 01:41:56 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: renamed in cache to "Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm"
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup_2023-06-11T014202.vbk: vfs cache: queuing for upload in 5s
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: queuing for upload in 5s
2023/06/11 01:42:02 ERROR : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm: Failed to copy: context canceled
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm: vfs cache: upload canceled
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: renamed in cache to "Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm"
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: queuing for upload in 5s
2023/06/11 01:42:02 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm.tmp: vfs cache: renamed in cache to "Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm"
2023/06/11 01:42:09 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup_2023-06-11T014202.vbk: Copied (new)
2023/06/11 01:42:09 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup_2023-06-11T014202.vbk: vfs cache: upload succeeded try #1
2023/06/11 01:42:09 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm: Copied (new)
2023/06/11 01:42:09 INFO  : Backup/Plexserver_veeambackup/olympen Daily_volume_backup/Daily_volume_backup.vbm: vfs cache: upload succeeded try #1

hello and welcome to the forum,

that is what i do with vagent. i prefer to have a local copy and not deal with an rclone mount in the middle,
though each location that i manage has a veeam backup and replication server.

not sure what the .tmp file is about, as i would expect only .vbm, .vib, .vbk

Hi, thanks! Long-time lurker, figured it's about time to join the community :slight_smile:

I actually didn't notice the .tmp extension in the log until now. I guess vagent saves backup metadata to a temporary file during the backup and then appends the new info to the actual .vbm file afterwards. In my mount there are only the .vbk and .vbm files.

I have very little experience with vagent, but a lot of experience with Backup & Replication, since the company I work for is a reseller. Vagent is extremely limited compared to B&R. Unfortunately I am currently too broke to invest in a VMware host, and I run my Plex server on bare metal, so I have no Windows server to install Backup & Replication on.

Even if it works now, I don't trust this setup, so I might just go with local storage and copy the files.

agreed, i was going to mention that.....

can you run a windows vm on that bare metal host?
i run VBAR on the FREE, awesome, windows server 2019 hyper-v edition.
free to install, free to run as many vms as you want, no licensing issues.

another option that i once tested:
--- run VBAR inside a windows vm on hetzner and use storage box as a storage repository.
--- run vagent on your local machine.
--- connect the two machines using tailscale or openvpn

for a backup job, vagent needs access to all the files in the backup chain.
if you move each backup file to the cloud, rclone would have to read from the rclone vfs file cache.
is that your plan?

I am running Linux on bare metal, so I would have to use KVM in that case. I need to run bare metal in order for Plex to be able to do hardware transcoding. But I am running 30+ docker containers on what is just an old PC, so I don't currently have enough CPU/memory to run a Windows VM.

My other plan was to back up to local disk and then copy the backup files to Dropbox, so I always have the backup files in two locations. Probably using a script; I'm not sure if backup copy is an option in Veeam agent without a license.
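Something like this is what I have in mind (only a sketch; the paths, remote name, and job name are placeholders, and the veeamconfig invocation is my assumption of how the Linux agent would be driven from a script):

#!/bin/bash
set -e
REPO=/data1/backups
# trigger the Veeam Agent for Linux job (job name is hypothetical)
veeamconfig job start --name "Daily_volume_backup"
# if job start returns before the session finishes, the session state
# would need to be polled before copying
rclone copy "$REPO" dropbox:veeam-backups --log-file /opt/rclone/logs/copy.log -v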

not sure that you need that. i do not use backup copy jobs with VBAR.

after the backup job has finished, my script runs two rclone commands:
rclone copy /path/to/repository remote: --immutable --include="*.{vib,vbk}"
rclone copy /path/to/repository remote: --include="*.vbm"
note there is no --immutable on the .vbm command, since that metadata file changes after every run.

as i understand it, based on a question i once asked in the veeam forum:
with vagent on its own there is no import-backup feature, as there is with VBAR, that will stitch together the full backup chain of .vib + .vbk,
so you can only recover from the .vbk.

It must be able to restore from incrementals (.vib), otherwise it wouldn't be an option to configure incremental backups in the first place. The .vbm file contains the information on how the restore points are linked. What about your --immutable option, doesn't that break the backup chain? When an incremental backup becomes older than the retention period, that restore point (.vib) is merged into the full backup file, and then the .vib is deleted.

If --immutable in rclone works the way I think it does, it will never make any changes to the already-uploaded .vbk when running the copy command. Wouldn't this break the chain?
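To illustrate my understanding of --immutable (a sketch, based on how I read the rclone docs):

# first run uploads the full backup as normal
rclone copy /path/to/repository remote: --immutable
# if veeam later merges an incremental into the .vbk, the source changes;
# a second run with --immutable then refuses to update the destination and
# reports an error along the lines of:
#   Source and destination exist but do not match: immutable file modified
rclone copy /path/to/repository remote: --immutable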

agreed

this is my overall setup

  1. once VBAR writes a backup file to local repository, veeam will never touch that file again.
    if it does change, that would be bit-rot, ransomware, etc...
  2. create a vss snapshot of the local repository.
  3. using that snapshot as the source, rclone copy --immutable to wasabi
  4. using a cheap aws ec2 vm, rclone copy --immutable from wasabi to aws s3 deep glacier (a rough sketch of steps 3 and 4 is below).
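a rough sketch of steps 3 and 4 (remote names, bucket paths and the snapshot path are all placeholders):

# step 3: on the local machine, copy from the vss snapshot to wasabi
rclone copy /path/to/vss_snapshot/repository wasabi:backups --immutable --include="*.{vib,vbk}"
rclone copy /path/to/vss_snapshot/repository wasabi:backups --include="*.vbm"
# step 4: on the cheap ec2 vm, replicate from wasabi to aws s3 deep glacier
rclone copy wasabi:backups aws:backup-archive --immutable --s3-storage-class=DEEP_ARCHIVE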

If you are doing incremental backups, the first backup will be a full one, and an incremental backup will be taken on every run after that. When the number of incrementals exceeds the retention count, the data from the oldest incremental file is merged into the full backup file, and the incremental is then deleted. If these backup files are copied to the cloud but the full backup file never changes after the first copy, won't the data from each deleted incremental be lost every time the backup runs? Or do I understand it incorrectly?

The full backup file is modified every time the chain exceeds the retention.

as i mentioned, i set up VBAR so that once a backup file is written, it is never modified again; that applies to both .vbk and .vib.
i set the retention to the max value of 730 restore points, which is 365*2.

my script triggers VBAR to run a particular backup job on a schedule.
sometimes the backup is incremental, sometimes the backup is full.
in either case, once written, veeam will never modify the backup file.

that would be the same basic behavior as writing to external tape or burning to DVD.

hey, i was over at the veeam forum and came across an old post where i mention how i use vagent, and the difference between vagent on its own and vagent+vbar.

https://forums.veeam.com/veeam-agent-for-windows-f33/how-do-i-manually-remove-old-backups-from-the-veeam-free-edition-t69034.html#p490204

Hi,

Thanks. I actually ended up installing VBAR on my PC and the licensed agent on my other computers and my server. My company is a reseller of Veeam and I had a spare NFR license, so I figured why not? :smiley:

I was thinking of maybe getting a subscription for Wasabi S3 storage. In the newer versions Veeam has native support for S3 as a backup repository, with an immutability option. $6 a month for 1 TB seems reasonable; I don't need more than that, at least not for offsite.
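If I end up pointing rclone at Wasabi too, the remote config should look roughly like this (a sketch based on rclone's s3 backend; the endpoint is the generic one and may differ per region, keys removed):

[wasabi]
type = s3
provider = Wasabi
access_key_id = 
secret_access_key = 
endpoint = s3.wasabisys.com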

nice! i used that some years ago.

it is reasonable, but keep in mind there is a 90-day minimum retention period.

Apparently you can contact Wasabi support to get the retention lowered to 30 days if you are using Veeam.

Wasabi: Minimum storage duration policy | Veeam Community Resource Hub

doh, sorry, i should have mentioned that, as per my post

you can contact wasabi and they will reduce the retention period to 30 days, for all files, not just veeam files.

and from one of my wasabi accounts, from a billing invoice:
the 30 days applies to the entire account, not just veeam files, not just certain file extensions.

and maybe something helpful in the following:

TMP files are temporary files created by various software applications for different purposes. The file extension ".tmp" typically stands for "temporary" and can be used by multiple programs. However, since the purpose and format of TMP files can vary, there is no universal method to open them. Here are a few tips you can try:

  1. Rename the file: Change the file extension from ".tmp" to a format that is compatible with the software you think might have created it. For example, if it's an image file, you can try renaming it to ".jpg" or ".png" and then attempt to open it with an image viewer.
  2. Open with a text editor: TMP files are often plain text files. You can try opening the file using a simple text editor like Notepad (Windows) or TextEdit (Mac). While the content may appear as gibberish or system-related information, you might be able to glean some information from it.
  3. Use file recovery software: If the TMP file was accidentally deleted or the original program crashed before completing its task, you can try using file recovery software to restore the file. Tools like Recuva, EaseUS Data Recovery Wizard, or Disk Drill might be helpful in recovering deleted or lost TMP files.
  4. Contact the software/application developer: If you know which software or application created the TMP file, consider reaching out to the developer's support team. They may be able to provide guidance on how to open or recover the file, or they might have specific tools or instructions for dealing with TMP files created by their software.

Remember to exercise caution when dealing with TMP files, as they might contain sensitive information or be part of a temporary system process. Always ensure you have a backup of the original file before attempting any modifications or recovery efforts.
