Rclone fails to copy to GCS-bucket with retention policy

I use rclone (within a FreeNAS nightly) to copy to a Google Cloud Storage bucket with a retention policy. The policy keeps some older, big files for a long time; those files must not be deleted under any circumstances.

The copying process runs smoothly, but it ends with a "forbidden" error: "Failed to copy: googleapi: Error 403: Object '...' is subject to bucket's retention policy and cannot be deleted, overwritten or archived until ..., forbidden".

Isn't this the behavior of a sync? Shouldn't a copy simply skip the old files that are under the retention policy? Or am I misunderstanding how this works?

Sync makes the source and destination the same, so if a folder was deleted from the source, it would get deleted from the destination.

If you want to copy, it wouldn't delete at the end and you'd have to delete things based on your policies.
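As a sketch of the difference (the local path and remote name below are placeholders, not from this thread), both commands can be previewed safely with rclone's --dry-run flag, which only logs what would happen:

```shell
# Hypothetical source path and remote name.
# sync: makes the destination match the source, deleting extra files there.
rclone sync /mnt/tank/data gcs-bucket: --dry-run -v

# copy: only adds or updates files on the destination, never deletes there.
rclone copy /mnt/tank/data gcs-bucket: --dry-run -v
```

With --dry-run neither command touches the bucket, so this is a safe way to check which deletions a sync would attempt.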

Maybe I phrased it wrong: I used copy, not sync. The behavior of sync was clear to me.

What is the full command you ran?

As far as I can see from the FreeNAS logs:

'/usr/local/bin/rclone', '--config', '/tmp/tmp5opwemu4', '-v', '--stats', '1s', 'copy', '/path-to-be-copied', 'encrypted:/

So if you run that with -vv and share the output, we can figure it out.

If I had to guess, I'd assume the copy is replacing a file and you get the error because of that.

Sorry, but I have no idea how to start the command manually with the -vv parameter in FreeNAS; the "cloud sync" task should still run from the FreeNAS web GUI (and not as a cron job set up via the command line).

So without anything else to go on, I would guess it's a new file with the same name trying to overwrite the old file, and it's failing because of the policy.

So it seems to be doing the right thing and throwing an error based on your storage policy.

Ok, so we agree that rclone's copy command does not try to change files that already exist in GCS and have an active retention policy?

If so, the flaw seems to lie with the FreeNAS rclone implementation.

Sorry, that is not what I was saying.

If you copy a file up and then change it after it's been copied, rclone will re-copy it (overwriting the original file):

[felix@gemini ~]$ rclone copy blah GD: -vv
2019/08/12 16:07:08 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "copy" "blah" "GD:" "-vv"]
2019/08/12 16:07:08 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/12 16:07:09 DEBUG : blah: Couldn't find file - need to transfer
2019/08/12 16:07:10 INFO  : blah: Copied (new)
2019/08/12 16:07:10 INFO  :
Transferred:   	        15 / 15 Bytes, 100%, 9 Bytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:        1.6s

2019/08/12 16:07:10 DEBUG : 5 go routines active
2019/08/12 16:07:10 DEBUG : rclone: Version "v1.48.0" finishing with parameters ["rclone" "copy" "blah" "GD:" "-vv"]
[felix@gemini ~]$ cat /etc/hosts >>blah
[felix@gemini ~]$ rclone copy blah GD: -vv
2019/08/12 16:07:18 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "copy" "blah" "GD:" "-vv"]
2019/08/12 16:07:18 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/12 16:07:19 DEBUG : blah: Sizes differ (src 258 vs dst 15)
2019/08/12 16:07:19 INFO  : blah: Copied (replaced existing)
2019/08/12 16:07:19 INFO  :
Transferred:   	       258 / 258 Bytes, 100%, 217 Bytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:        1.1s

2019/08/12 16:07:19 DEBUG : 5 go routines active
2019/08/12 16:07:19 DEBUG : rclone: Version "v1.48.0" finishing with parameters ["rclone" "copy" "blah" "GD:" "-vv"]

Here is an example of it in action.

If not changed, it'll skip the file:

2019/08/12 16:08:20 DEBUG : rclone: Version "v1.48.0" starting with parameters ["rclone" "copy" "blah" "GD:" "-vv"]
2019/08/12 16:08:20 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2019/08/12 16:08:20 DEBUG : blah: Size and modification time the same (differ by -914.875µs, within tolerance 1ms)
2019/08/12 16:08:20 DEBUG : blah: Unchanged skipping
2019/08/12 16:08:20 INFO  :
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:                 1 / 1, 100%
Transferred:            0 / 0, -
Elapsed time:       300ms

2019/08/12 16:08:20 DEBUG : 5 go routines active
2019/08/12 16:08:20 DEBUG : rclone: Version "v1.48.0" finishing with parameters ["rclone" "copy" "blah" "GD:" "-vv"]

I already understood that: a file that has not been changed will be skipped, and a changed file will be copied. But that's not what rclone does in FreeNAS: it does not skip unmodified files, but tries to copy/upload them again.

Maybe the retention policy in GCS prevents rclone from getting feedback about the "correct" file content.

Unfortunately, we're back to needing a log with -vv, as that should not be the case.

Are you not able to run a command from a CLI or something?

The -vv will show specifically why a file is being copied. A file can be re-uploaded based on a few things, like size, modification time, etc.
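For example, reusing the source path and remote name from the logged command earlier in the thread (both are placeholders there too), and adding --dry-run so the retention-locked bucket stays untouched while debugging:

```shell
# -vv prints, for every file, the reason it is transferred or skipped,
# e.g. "Sizes differ (src 258 vs dst 15)" or "Unchanged skipping".
# --dry-run means nothing is actually uploaded or overwritten.
rclone copy '/path-to-be-copied' encrypted: -vv --dry-run
```

The per-file DEBUG lines will show whether rclone thinks the size or modification time differs, which is exactly what's needed to explain the unexpected re-uploads.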

Ok, I'll try that soon. Thank you so far for your quick help.

I tried to set up rclone in the FreeNAS CLI, but when using auto config to GCS I got an error message in the browser:

Sign in with Google temporarily disabled for this app

This app has not been verified yet by Google in order to use Google Sign In.

The mistake was in front of the computer :wink:

I now found out that the error was caused by a single file, a Veeam backup chain metadata file, which is overwritten after every backup Veeam makes.

So that was the file that obviously changes daily and therefore never reaches the end of its retention time.

Thanks again for your kind help.

You'll have to use the latest beta to work around this. See https://github.com/rclone/rclone/issues/3372 for why!

Maybe this is a bit OT, but how do I copy backups that have only one changing file with a constant name (i.e. the backup chain file above; the rest of the files change names every day) to online storage with a retention policy, without rclone raising errors?

Essentially, I'm looking for a parameter that instructs rclone to upload a copy of this file with the same file name.

Or does someone think of another possibility?

The goal is to prevent the daily backups in the online storage from being deleted by anyone (even the owner).

I think you are probably looking for either --copy-dest or --compare-dest - they will allow you to make a non-sparse/sparse backup of only the changed files without changing the current backup.

You'll need the beta for both of those.

I probably do not quite understand the parameters yet, but I do not think they solve the problem of the Veeam backup chain metadata file. The file has the same name after each backup, but its contents change.

How can rclone back up these changed files to immutable storage?

By backing them up to a different directory. So if you use --compare-dest like this

rclone copy /path/to/backup remote:new-backup --compare-dest remote:original-backup

rclone will compare any files it is copying with original-backup and if they are different store them in new-backup. So original-backup is immutable and any differences are stored in new-backup.
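Put together, a sketch of both variants (the bucket and directory names are placeholders, and both flags need the beta mentioned earlier):

```shell
# Sparse backup: only files that differ from original-backup are uploaded
# to new-backup; identical files are not copied at all.
rclone copy /path/to/backup remote:new-backup --compare-dest remote:original-backup

# Non-sparse backup: files identical to original-backup are server-side
# copied into new-backup, so new-backup ends up as a complete backup set.
rclone copy /path/to/backup remote:new-backup --copy-dest remote:original-backup
```

Either way the retention-locked original-backup is never written to, so the daily changing metadata file no longer triggers 403 errors.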