Metadata update Local -> S3

What is the problem you are having with rclone?

I am doing rclone sync -M from local to S3 (Oracle Cloud OSS). I can see the metadata added to the object on the initial sync. But when I edit the file or change its ownership/permissions, a subsequent rclone sync updates the file just fine yet never updates the metadata - it is "stuck". For example I see the headers below even though I changed both the owner and the mode. The file content is correctly updated, but the owner and group ID should have changed to 1000 and the mode to 775.

opc-meta-atime: 2022-08-01T21:54:47.459292723Z
opc-meta-btime: 1970-01-01T00:00:00Z
opc-meta-gid: 0
opc-meta-mode: 100644
opc-meta-mtime: 1659716114.399951775
opc-meta-uid: 0

Run the command 'rclone version' and share the full output of the command.

Which cloud storage system are you using? (eg Google Drive)

OCI OSS with versioned bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -v --metadata --max-backlog 999999 --links --transfers=8 --checkers=16 /mnt/temp-backup/.snapshot/FSS-daily-Backup oci-oss:fss-filesystem1_backup/FSS-daily-Backup

The rclone config contents with secrets removed.

[oci-oss]
type = s3
provider = Other
access_key_id = xxx
secret_access_key = yyy=
endpoint = https://zzz.compat.objectstorage.us-ashburn-1.oraclecloud.com

A log from the command with the -vv flag

2022/08/05 17:54:08 DEBUG : file10m-8: Size and modification time the same (differ by 0s, within tolerance 1ns)
2022/08/05 17:54:08 DEBUG : file10m-8: Unchanged skipping

Metadata will only be synced if the file needs reuploading. Is that what you are seeing?

Yes - you are right - the file gets new metadata when uploaded again. I was not properly reloading the view I had, my apologies.

But in general, a flag to allow metadata updates even when the file itself is unchanged would be useful - for example when we chmod/chown files that are part of an ongoing backup. This would cause additional checks against the local files, but would keep the metadata in sync when ACL changes occur.

Another topic: how to use the metadata from S3 to drive file restoration with timestamps, ownership, and ACLs - all of that is in the metadata. So when we do rclone sync FROM S3 to local, can we apply the metadata as file ACL / owner / group? The data is there, but I'm not sure we can use it.

This is possible of course, but I didn't want to add it for the initial metadata support.

It won't be possible in all backends to update metadata without reuploading the file.

Not sure I understand you here.

Metadata should round trip from local to S3 to local just fine if you use the -M flag.

Maybe what I need isn't clear. I have NFSv3 mounts (many of them) that I am using rclone to back up to object storage, and -M puts metadata like mode, uid, and gid there. Perfect. When copying back from S3 to the local NFS mount, I thought rclone would translate that metadata back into chown/chmod when restoring. It copies the files fine, but the goal is to restore the original owners and permissions. Let me know if that can happen.

Maybe there is an issue with metadata coming back. Here is my setup: /mnt/filesystem1 is NFSv3. Note the files with different ownership and permissions:

[root@stuff filesystem1]# ll
total 33
-rwxrwxrwx. 1 root root  4 Aug  6 13:16 file777
-rw-r--r--. 2 root root  8 Aug  6 13:18 filedefault
-rw-r--r--. 1 opc  opc   9 Aug  6 13:17 filenonroot
-rw-r--r--. 2 root root  8 Aug  6 13:18 hardlink
lrwxrwxrwx. 1 root root 11 Aug  6 13:22 softlink -> filenonroot

Copy out to OCI OSS via S3 and look at headers:

[root@stuff oss]#  rclone sync -M --links /mnt/filesystem1 oci-oss:fss-filesystem1_backup/FSS-daily-Backup

[root@stuff oss]# oci os object head --bucket-name fss-filesystem1_backup --name FSS-daily-Backup/file777
{
  "accept-ranges": "bytes",
  "access-control-allow-credentials": "true",
  "access-control-allow-methods": "POST,PUT,GET,HEAD,DELETE,OPTIONS",
  "access-control-allow-origin": "*",
  "access-control-expose-headers": "accept-ranges,access-control-allow-credentials,access-control-allow-methods,access-control-allow-origin,content-length,content-md5,content-type,date,etag,last-modified,opc-client-info,opc-client-request-id,opc-meta-atime,opc-meta-btime,opc-meta-gid,opc-meta-mode,opc-meta-mtime,opc-meta-uid,opc-request-id,storage-tier,version-id,x-api-id",
  "content-length": "4",
  "content-md5": "pOdWCa/m8efuptIaFOLypg==",
  "content-type": "application/octet-stream",
  "date": "Sat, 06 Aug 2022 13:23:09 GMT",
  "etag": "be275502-63dd-4932-b21e-360f643ac96f",
  "last-modified": "Sat, 06 Aug 2022 13:22:23 GMT",
  "opc-client-request-id": "43738C11C87841E7AE5EC8BA896F3B1C",
  "opc-meta-atime": "2022-08-06T13:16:54.240223953Z",
  "opc-meta-btime": "1970-01-01T00:00:00Z",
  "opc-meta-gid": "0",
  "opc-meta-mode": "100777",
  "opc-meta-mtime": "1659791814.240223953",
  "opc-meta-uid": "0",
  "opc-request-id": "iad-1:uf-ATPQ9Tdz9j_-ILq_kR76p2Udg3bB_D2NCy_LvIV8ahMei6nsQhAWvSPamOK8k",
  "storage-tier": "Standard",
  "version-id": "6faa6528-fa70-47e6-a127-b072cd56f2c2",
  "x-api-id": "native"
}
[root@stuff oss]# oci os object head --bucket-name fss-filesystem1_backup --name FSS-daily-Backup/filenonroot
{
  "accept-ranges": "bytes",
  "access-control-allow-credentials": "true",
  "access-control-allow-methods": "POST,PUT,GET,HEAD,DELETE,OPTIONS",
  "access-control-allow-origin": "*",
  "access-control-expose-headers": "accept-ranges,access-control-allow-credentials,access-control-allow-methods,access-control-allow-origin,content-length,content-md5,content-type,date,etag,last-modified,opc-client-info,opc-client-request-id,opc-meta-atime,opc-meta-btime,opc-meta-gid,opc-meta-mode,opc-meta-mtime,opc-meta-uid,opc-request-id,storage-tier,version-id,x-api-id",
  "content-length": "9",
  "content-md5": "armNUk1yjiDZYocgzLbPLw==",
  "content-type": "application/octet-stream",
  "date": "Sat, 06 Aug 2022 13:23:36 GMT",
  "etag": "90441f0a-6a1e-4700-abe0-7b994efec762",
  "last-modified": "Sat, 06 Aug 2022 13:22:23 GMT",
  "opc-client-request-id": "56DE5084FC584DF895BE35A354203415",
  "opc-meta-atime": "2022-08-06T13:18:41.864397836Z",
  "opc-meta-btime": "1970-01-01T00:00:00Z",
  "opc-meta-gid": "1000",
  "opc-meta-mode": "100644",
  "opc-meta-mtime": "1659791846.073213546",
  "opc-meta-uid": "1000",
  "opc-request-id": "iad-1:keYSRC1wBNVW-ULOpWWcQGRtqICNMK-_EdcyV0-6EcZZOz51X72aaIGO9KEZ76yV",
  "storage-tier": "Standard",
  "version-id": "bccc4bcc-e5f7-4af6-bd8a-107781d7374a",
  "x-api-id": "native"
}

Remove the files and sync back:

[root@stuff oss]# rm -rf /mnt/filesystem1/*
[root@stuff oss]# rclone sync -M -v --links  oci-oss:fss-filesystem1_backup/FSS-daily-Backup /mnt/filesystem1
2022/08/06 13:24:21 ERROR : filenonroot: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/filenonroot user.content-type: operation not supported
2022/08/06 13:24:21 ERROR : hardlink: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/hardlink user.content-type: operation not supported
2022/08/06 13:24:21 ERROR : softlink.rclonelink: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/softlink user.content-type: operation not permitted
2022/08/06 13:24:21 ERROR : file777: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/file777 user.content-type: operation not supported
2022/08/06 13:24:21 ERROR : filedefault: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/filedefault user.content-type: operation not supported
2022/08/06 13:24:21 ERROR : Local file system at /mnt/filesystem1: not deleting files as there were IO errors
2022/08/06 13:24:21 ERROR : Local file system at /mnt/filesystem1: not deleting directories as there were IO errors
2022/08/06 13:24:21 ERROR : Attempt 1/3 failed with 5 errors and: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/filedefault user.content-type: operation not supported
2022/08/06 13:24:21 INFO  : .snapshot: Removing directory
2022/08/06 13:24:21 INFO  : There was nothing to transfer
2022/08/06 13:24:21 ERROR : Attempt 2/3 succeeded
2022/08/06 13:24:21 INFO  :
Transferred:   	         40 B / 40 B, 100%, 0 B/s, ETA -
Checks:                 7 / 7, 100%
Deleted:                0 (files), 1 (dirs)
Elapsed time:         0.6s

The files are there OK, but without the permissions and ownership that were in the metadata. Perhaps xattrs aren't working over NFSv3?

[root@stuff oss]# ls -al /mnt/filesystem1
total 225
drwxr-xr-x. 2 root root      6 Aug  6 13:24 .
drwxr-xr-x. 5 root root     63 Aug  3 15:55 ..
-rw-r--r--. 1 root root      4 Aug  6 13:16 file777
-rw-r--r--. 1 root root      8 Aug  6 13:18 filedefault
-rw-r--r--. 1 root root      9 Aug  6 13:17 filenonroot
-rw-r--r--. 1 root root 177822 Aug  5 16:10 .FSS-daily-Backup-permissions.facl
-rw-r--r--. 1 root root      8 Aug  6 13:18 hardlink
drwxr-xr-x. 2 root root      0 Aug  6 13:24 .snapshot
lrwxrwxrwx. 1 root root     11 Aug  6 13:22 softlink -> filenonroot

Debug with -vvv

2022/08/06 13:25:00 DEBUG : fs cache: renaming cache item "/mnt/filesystem1" to be canonical "local{b6816}:/mnt/filesystem1"
2022/08/06 13:25:00 DEBUG : Local file system at /mnt/filesystem1: Waiting for checks to finish
2022/08/06 13:25:00 DEBUG : preAllocate: got error on fallocate, trying combination 1/2: operation not supported
2022/08/06 13:25:00 DEBUG : preAllocate: got error on fallocate, trying combination 2/2: operation not supported
2022/08/06 13:25:00 ERROR : file777: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/file777 user.content-type: operation not supported
2022/08/06 13:25:00 ERROR : softlink.rclonelink: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/softlink user.content-type: operation not permitted
2022/08/06 13:25:01 ERROR : filenonroot: Failed to copy: failed to set metadata: failed to set xattr key "user.content-type": xattr.LSet /mnt/filesystem1/filenonroot user.content-type: operation not supported

It certainly should do...

That is the goal of the implementation :slight_smile:

So it looks like setting the xattrs doesn't work on NFS and because those failed, rclone gave up on everything else.

I could fix this with a flag on the local backend, say --no-xattrs, which would stop rclone trying to write the xattrs.

Or I could detect that the xattr write isn't supported - operation not supported (ENOTSUP) is a standard unix error I could detect - and just ignore it.

What do you think?

I haven't thought too much about maintaining different metadata across backends. I get that you might want to do that. But selfishly, since I love the metadata preservation (it saves me from doing getfacl/setfacl, which is single-threaded and slow for huge file systems), I would go for a warning in the debug output and ignore the errors. If you can detect that the destination doesn't support xattrs, say so in a warning up front and continue with what you can do with the metadata, not attempting xattrs for the duration of the operation. That would make our restores much easier.

Give this a go - it turns off xattr support if the filesystem ever returns ENOTSUP and prints one ERROR in the log about it.

v1.60.0-beta.6410.5feea6518.fix-xattr-notsup on branch fix-xattr-notsup (uploaded in 15-30 mins)

It appears to set the permissions and ownership, but I still see an ERROR for each file printed with just a single "-v":

2022/08/08 13:29:06 ERROR : Local file system at /mnt/filesystem1: xattrs not supported - disabling: xattr.LSet /mnt/filesystem1/10000files/file-1017 user.content-type: operation not supported

Ah, messed up the atomic operations...

Try this

v1.60.0-beta.6410.fdfa07be1.fix-xattr-notsup on branch fix-xattr-notsup (uploaded in 15-30 mins)

Looks good - I get a single ERROR and then everything works as designed. Thank you for the attention. Since I am not familiar with the release schedule and am dealing with a production customer, I would prefer to have them wait for a new release with this fix.
Thank you again.

Thanks for testing.

I've merged this to master now which means it will be in the latest beta in 15-30 minutes and released in v1.60