What is the problem you are having with rclone?
I am easily able to mount an S3 bucket as a drive on an EC2 instance with
.\rclone.exe mount us-east-1-is-dev-database:us-east-1-is-dev-database T: --vfs-cache-mode full
All is good when copying files from the local drive to the T: drive and vice versa.
When I try to use the T: drive for a SQL Server backup, it doesn't recognize the mounted drive:
backup database xxxx to disk = 'T:\xxx.bak'
Run the command 'rclone version' and share the full output of the command.
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg
rclone copy /tmp remote:tmp)
The rclone config contents with secrets removed.
type = s3
provider = AWS
env_auth = false
access_key_id = xxxxxxxxx
secret_access_key = yyyyyyyyyyy
region = us-east-1
acl = private
storage_class = INTELLIGENT_TIERING
A log from the command that you were trying to run with the -vv flag.
hello and welcome to the forum,
i see that the ec2 instance is running on windows.
make sure the username running the backup is the same as the username running the mount.
if both are the same username, you need to run both commands together, either both with or both without elevated privileges.
other options:
--- running the rclone mount as the system user, using psexec, windows task scheduler, nssm, etc...
--- mounting to a folder, not a drive letter. windows handles that a bit differently.
We don't log in using a user account but role-based.
And how do we run both commands together?
sure, in a batch file, dos or powershell.
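a rough sketch of such a batch file — the remote name comes from the config above, but the database name, wait time, and paths are placeholders:

```bat
rem start the mount in the background, under the same user that runs the backup
start "" rclone mount us-east-1-is-dev-database:us-east-1-is-dev-database T: --vfs-cache-mode full
rem give the mount a few seconds to come up before using it
timeout /t 10
rem run the backup against the mounted drive (hypothetical database name)
sqlcmd -Q "BACKUP DATABASE xxxx TO DISK = 'T:\xxxx.bak'"
```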
just curious, what is the size of the .bak file?
About 400GB. It could be as high as 1TB
fwiw, as far as i know, there is always going to be a local temp copy of the .bak file:
mssql -> T: -> rclone local vfs file cache -> moved to cloud
if correct, then why not
--- backup to local
rclone move the .bak
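that workflow would look roughly like this — the remote name is from the config above, the local path and database name are placeholders:

```bat
rem back up to a fast local disk first (hypothetical path and database name)
sqlcmd -Q "BACKUP DATABASE xxxx TO DISK = 'D:\backups\xxxx.bak'"
rem then move the .bak into the bucket, deleting the local copy on success
rclone move D:\backups\xxxx.bak us-east-1-is-dev-database:us-east-1-is-dev-database -P
```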
side note: mssql server has an option to backup to azure.
no idea if that goes direct to cloud, or if a local .bak is created and then moved to azure.
My goal is to back up straight to S3 without creating a local copy. There are tools I have tried, like TNTdrive. Trying to see if we can do the same with rclone.
so tntdrive does not locally cache the .bak file before upload starts?
how does tntdrive verify the file transfer, md5 hash or what?
as a test, try without the vfs file cache, in effect --vfs-cache-mode off.
if a process requests something rclone cannot do, the debug log would show that.
ERROR : file.ext: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
ERROR : file.ext: WriteFileHandle: ReadAt: Can't read and write to file without --vfs-cache-mode >= minimal
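for example, mount with no file cache and a debug log — the remote name is from the config above, the log path is a placeholder:

```bat
rem no --vfs-cache-mode flag, so the cache defaults to off
rclone mount us-east-1-is-dev-database:us-east-1-is-dev-database T: -vv --log-file=C:\rclone\rclone.log
```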
what is the exact full command you are using?
BACKUP DATABASE xxxx TO DISK = 'T:\XXXX.BAK'
any comment on this please?
With TNTdrive, we simply give the bucket name and credentials and assign a drive letter.
That's it. The drive looks and behaves like any other drive.
You can download the TNTdrive trial version from TNTdrive.com and see how it works.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.