Invalid metadata key names result in a failure to transfer: xattr upload fails with net/http: invalid header field value for "X-Amz-Meta-Samba_pai"

What is the problem you are having with rclone?

When copying files from local to AWS S3 with metadata, files that likely originated on Windows may have the xattr user.SAMBA_PAI set.

EDIT: It turns out this has nothing to do with the metadata key name; rather, if the xattr has a base64 value, the transfer will fail. Please see my reply below.

When rclone encounters this, it attempts to create an Amazon S3 metadata key called "X-Amz-Meta-Samba_pai", which appears not to be an acceptable key name. If rclone is really attempting to use the upper-case X-Amz-Meta- prefix as seen in the error output, this would not conform to the S3 naming scheme, as S3 requires it to be lowercase, i.e. x-amz-meta-....

ex:

2022/10/06 15:35:40 ERROR : varidata/research/projects/a/w/c.pptx: Failed to copy: Put "https://**.s3.us-east-2.amazonaws.com/varidata/research/projects/a/w/c.pptx": net/http: invalid header field value for "X-Amz-Meta-Samba_pai"
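
As a side note on the capitalization: HTTP header names are case-insensitive, and the upper-cased form in the error message is Go's canonical header spelling rather than what S3 stores. A minimal Go sketch showing where "X-Amz-Meta-Samba_pai" comes from:

package main

import (
    "fmt"
    "net/textproto"
)

func main() {
    // net/http canonicalizes header keys via net/textproto before sending,
    // which is why the error shows "X-Amz-Meta-Samba_pai" even though S3
    // stores user metadata keys in lowercase.
    fmt.Println(textproto.CanonicalMIMEHeaderKey("x-amz-meta-samba_pai"))
    // prints: X-Amz-Meta-Samba_pai
}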

we can see the xattr is set on the file:

[root@login01 usageReport]# getfattr -d "/varidata/research/projects/a/w/c.pptx"
getfattr: Removing leading '/' from absolute path names
# file: varidata/research/projects/a/w/c.pptx
user.SAMBA_PAI=0sAgSQCQAAAAABwdaSUAAA6CCTUAAA6CCTUAABiiOTUAABiSOTUAABFQaTUAABJwSTUAABwdaSUAAC/////w==

If the root cause is as simple as fixing the capitalization, that would hopefully be straightforward.

Run the command 'rclone version' and share the full output of the command.

[root@almatest bwmonitor]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone --version
rclone v1.60.0-beta.6462.7e547822d
- os/version: almalinux 9.0 (64 bit)
- os/kernel: 5.14.0-70.26.1.el9_0.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone copyto -v -M --buffer-size 16M --transfers 4 --s3-chunk-size 128M --s3-upload-cutoff 4096M --s3-upload-concurrency 8 --s3-storage-class DEEP_ARCHIVE --files-from-raw newlist.gz.addedOrModifiedFiles / aws:vai-___-backup

The rclone config contents with secrets removed.

[aws]
type = s3
provider = AWS
access_key_id = *
secret_access_key = *
region = us-east-2
location_constraint = us-east-2
acl = private

A log from the command with the -vv flag

2022/10/06 15:35:40 ERROR : varidata/research/projects/a/w/c.pptx: Failed to copy: Put "https://**.s3.us-east-2.amazonaws.com/varidata/research/projects/a/w/c.pptx": net/http: invalid header field value for "X-Amz-Meta-Samba_pai"

I was able to spend more time troubleshooting and can recreate the error.
Per the man page:

There are three methods available for encoding the value. If the given string is enclosed in double quotes, the inner string is treated as text. In that case, backslashes and double quotes have special meanings and need to be escaped by a preceding backslash. Any control characters can be encoded as a backslash followed by three digits as its ASCII code in octal. If the given string begins with 0x or 0X, it expresses a hexadecimal number. If the given string begins with 0s or 0S, base64 encoding is expected.

rclone fails when the xattr has a base64 (i.e. binary) value.

ex:

#set a base64 xattr
[root@almatest xattrTest]# echo hello > test1.txt
[root@almatest xattrTest]# setfattr -n user.myattr -v 0sAgSQCwAAAAABwdaSUAAA6CCTUAAA6CCTUAAApBuTUAABwdaSUAABiiOTUAABiSOTUAABFQaTUAABJwSTUAAB6wGTUAAC/////w== test1.txt

#set a string xattr
[root@almatest xattrTest]# echo hello > test2.txt
[root@almatest xattrTest]# setfattr -n user.myattr -v sometextvalue test2.txt
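
For reference, the 0s prefix marks a base64-encoded value; decoding it shows the value is raw binary rather than text. A quick Go sketch using the value set above:

package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    // Strip the "0s" prefix that setfattr/getfattr use to mark base64 and
    // decode to see the raw bytes.
    const v = "AgSQCwAAAAABwdaSUAAA6CCTUAAA6CCTUAAApBuTUAABwdaSUAABiiOTUAABiSOTUAABFQaTUAABJwSTUAAB6wGTUAAC/////w=="
    b, err := base64.StdEncoding.DecodeString(v)
    if err != nil {
        panic(err)
    }
    fmt.Printf("%q\n", b) // begins "\x02\x04\x90\v\x00\x00..." (control bytes, not text)
}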

Now when we upload, the string attr works, but we get an error on the file that had the base64 attr.

[root@almatest research]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone copyto -M -vv --retries 1 xattrTest --dump bodies  aws:vai-fmdf-backup/
2022/10/06 22:19:04 DEBUG : rclone: Version "v1.60.0-beta.6462.7e547822d" starting with parameters ["/varidata/research/software/rclone/rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone" "copyto" "-M" "-vv" "--retries" "1" "xattrTest" "--dump" "bodies" "aws:vai-fmdf-backup/"]
2022/10/06 22:19:04 DEBUG : Creating backend with remote "xattrTest"
2022/10/06 22:19:04 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/10/06 22:19:04 DEBUG : fs cache: renaming cache item "xattrTest" to be canonical "/varidata/research/xattrTest"
2022/10/06 22:19:04 DEBUG : Creating backend with remote "aws:vai-fmdf-backup/"
2022/10/06 22:19:04 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/10/06 22:19:04 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/10/06 22:19:04 DEBUG : fs cache: renaming cache item "aws:vai-fmdf-backup/" to be canonical "aws:vai-fmdf-backup"
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : HTTP REQUEST (req 0xc0007a4200)
2022/10/06 22:19:04 DEBUG : GET /?delimiter=%2F&encoding-type=url&list-type=2&max-keys=1000&prefix= HTTP/1.1
Host: vai-fmdf-backup.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.60.0-beta.6462.7e547822d
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20221007T021904Z
Accept-Encoding: gzip

2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : HTTP RESPONSE (req 0xc0007a4200)
2022/10/06 22:19:04 DEBUG : HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 07 Oct 2022 02:19:05 GMT
Server: AmazonS3
X-Amz-Bucket-Region: us-east-2
X-Amz-Id-2: S4OYpBdOSag84VJJwTptKJdgupbj1lesmKCERAsWIaEgQd5lbwDUgfCjqG2fpNBXjsqIsOhORbE=
X-Amz-Request-Id: Z4NW7VBVC5EWVJEP

27e
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Name>vai-fmdf-backup</Name><Prefix></Prefix><KeyCount>3</KeyCount><MaxKeys>1000</MaxKeys><Delimiter>/</Delimiter><EncodingType>url</EncodingType><IsTruncated>false</IsTruncated><Contents><Key>test4.txt</Key><LastModified>2022-10-07T02:09:54.000Z</LastModified><ETag>&quot;b1946ac92492d2347c6235b4d2611184&quot;</ETag><Size>6</Size><StorageClass>STANDARD</StorageClass></Contents><CommonPrefixes><Prefix>backup_logs/</Prefix></CommonPrefixes><CommonPrefixes><Prefix>varidata/</Prefix></CommonPrefixes></ListBucketResult>
0

2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : S3 bucket vai-fmdf-backup: Waiting for checks to finish
2022/10/06 22:19:04 DEBUG : S3 bucket vai-fmdf-backup: Waiting for transfers to finish
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : HTTP REQUEST (req 0xc0007a4400)
2022/10/06 22:19:04 DEBUG :
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : HTTP RESPONSE (req 0xc0007a4400)
2022/10/06 22:19:04 DEBUG : Error: net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : HTTP REQUEST (req 0xc000399700)
2022/10/06 22:19:04 ERROR : test1.txt: Failed to copy: Put "https://vai-fmdf-backup.s3.us-east-2.amazonaws.com/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/06 22:19:04 DEBUG : PUT /test2.txt HTTP/1.1
Host: vai-fmdf-backup.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.60.0-beta.6462.7e547822d
Content-Length: 6
Authorization: XXXX
Content-Md5: sZRqySSS0jR8YjW00mERhA==
Content-Type: text/plain; charset=utf-8
X-Amz-Acl: private
X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
X-Amz-Date: 20221007T021904Z
X-Amz-Meta-Atime: 2022-10-06T22:12:35.829898-04:00
X-Amz-Meta-Btime: 1969-12-31T19:00:00-05:00
X-Amz-Meta-Gid: 0
X-Amz-Meta-Mode: 100644
X-Amz-Meta-Mtime: 1665108755.830135
X-Amz-Meta-Myattr: sometextvalue
X-Amz-Meta-Uid: 0
Accept-Encoding: gzip

hello
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : HTTP RESPONSE (req 0xc000399700)
2022/10/06 22:19:04 DEBUG : HTTP/1.1 200 OK
Content-Length: 0
Date: Fri, 07 Oct 2022 02:19:05 GMT
Etag: "b1946ac92492d2347c6235b4d2611184"
Server: AmazonS3
X-Amz-Id-2: Hp3fjSyUGiYDurOILtnFCxdgS1i+scD/nhJvhGXKvJWn/HC93p+nmKWA650loXXI0wyAgfTYTgg=
X-Amz-Request-Id: Z4NWEE415CXGHR36
X-Amz-Server-Side-Encryption: AES256
X-Amz-Version-Id: tV9X5U31LWGIXwikpuDJFNtfUNoRwCMq

2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : HTTP REQUEST (req 0xc0002b6f00)
2022/10/06 22:19:04 DEBUG : HEAD /test2.txt?versionId=tV9X5U31LWGIXwikpuDJFNtfUNoRwCMq HTTP/1.1
Host: vai-fmdf-backup.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.60.0-beta.6462.7e547822d
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20221007T021904Z

2022/10/06 22:19:04 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : HTTP RESPONSE (req 0xc0002b6f00)
2022/10/06 22:19:04 DEBUG : HTTP/1.1 200 OK
Content-Length: 6
Accept-Ranges: bytes
Content-Type: text/plain; charset=utf-8
Date: Fri, 07 Oct 2022 02:19:05 GMT
Etag: "b1946ac92492d2347c6235b4d2611184"
Last-Modified: Fri, 07 Oct 2022 02:19:05 GMT
Server: AmazonS3
X-Amz-Id-2: +NKki3fAw3RUZ3xEf/C3SebowlNgIMA1oa4va6ITy/WWQpLQTDg99JyJtUQtrJuZso8DQDlKpio=
X-Amz-Meta-Atime: 2022-10-06T22:12:35.829898-04:00
X-Amz-Meta-Btime: 1969-12-31T19:00:00-05:00
X-Amz-Meta-Gid: 0
X-Amz-Meta-Mode: 100644
X-Amz-Meta-Mtime: 1665108755.830135
X-Amz-Meta-Myattr: sometextvalue
X-Amz-Meta-Uid: 0
X-Amz-Request-Id: Z4NVMDTDFBXM0AG0
X-Amz-Server-Side-Encryption: AES256
X-Amz-Version-Id: tV9X5U31LWGIXwikpuDJFNtfUNoRwCMq

2022/10/06 22:19:04 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/10/06 22:19:04 DEBUG : test2.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/06 22:19:04 INFO  : test2.txt: Copied (new)
2022/10/06 22:19:04 ERROR : Attempt 1/1 failed with 1 errors and: Put "https://vai-fmdf-backup.s3.us-east-2.amazonaws.com/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/06 22:19:04 INFO  :
Transferred:   	         12 B / 12 B, 100%, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Transferred:            1 / 1, 100%
Elapsed time:         0.3s

2022/10/06 22:19:04 DEBUG : 5 go routines active
2022/10/06 22:19:04 Failed to copyto: Put "https://vai-fmdf-backup.s3.us-east-2.amazonaws.com/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"

What is happening here is that the attributes are being set with binary data in them, and when rclone reads them and tries to send them to AWS, the net/http library complains that header values with control characters aren't valid.
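
The rule net/http applies is roughly: a header value byte must be a horizontal tab or any byte from 0x20 upward other than DEL. A sketch of the check, mirroring golang.org/x/net/http/httpguts.ValidHeaderFieldValue (which is what net/http calls before sending):

package main

import "fmt"

// validHeaderValue mirrors net/http's header value check: tab is allowed,
// as is any byte >= 0x20 except DEL (0x7f). The SAMBA_PAI bytes
// (\x02, \x04, ...) fail this immediately.
func validHeaderValue(v string) bool {
    for i := 0; i < len(v); i++ {
        b := v[i]
        if b == '\t' {
            continue
        }
        if b < 0x20 || b == 0x7f {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(validHeaderValue("sometextvalue"))      // true
    fmt.Println(validHeaderValue("\x02\x04\x90binary")) // false
}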

You can see that the attribute is stored as binary here

$ strace getfattr test1.txt -n user.myattr
...
getxattr("test1.txt", "user.myattr", "\2\4\220\v\0\0\0\0\1\301\326\222P\0\0\350 \223P\0\0\350 \223P\0\0\244\33\223P", 256) = 73

The AWS docs say:

User-defined metadata is a set of key-value pairs. Amazon S3 stores user-defined metadata keys in lowercase.

Amazon S3 allows arbitrary Unicode characters in your metadata values.

To avoid issues around the presentation of these metadata values, you should conform to using US-ASCII characters when using REST and UTF-8 when using SOAP or browser-based uploads via POST.

When using non US-ASCII characters in your metadata values, the provided unicode string is examined for non US-ASCII characters. Values of such headers are character decoded as per RFC 2047 before storing and encoded as per RFC 2047 to make them mail-safe before returning. If the string contains only US-ASCII characters, it is presented as is.

The following is an example.

PUT /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: ÄMÄZÕÑ S3

HEAD /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-nonascii: =?UTF-8?B?w4PChE3Dg8KEWsODwpXDg8KRIFMz?=

PUT /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3

HEAD /Key HTTP/1.1
Host: awsexamplebucket1.s3.amazonaws.com
x-amz-meta-ascii: AMAZONS3
		

So rclone could detect that the value contains non-ASCII characters and encode it. I would have thought this is something the SDK should be doing for me.
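
For illustration, Go's standard mime package can already produce RFC 2047 encoded-words like the AWS example above; a quick sketch (not necessarily how rclone or the SDK would do it):

package main

import (
    "fmt"
    "mime"
)

func main() {
    // mime.BEncoding emits an RFC 2047 "B" (base64) encoded-word when the
    // input contains non-ASCII characters, and returns plain ASCII unchanged.
    fmt.Println(mime.BEncoding.Encode("UTF-8", "ÄMÄZÕÑ S3")) // =?UTF-8?b?w4RNw4Raw5XDkSBTMw==?=
    fmt.Println(mime.BEncoding.Encode("UTF-8", "AMAZONS3"))  // AMAZONS3
}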

I'm not sure that S3 really does accept control characters as metadata values; I haven't figured that out yet! And I need to read through the code of the SDK to see whether it is doing any decoding of metadata values for me.

So I see 3 possible fixes for this

  1. Encode the binary data so that S3 accepts it. This might not work, especially if the data is invalid UTF-8 (likely), though the docs say arbitrary Unicode, so potentially you could set an encoding of latin1 to pass arbitrary binary data.
  2. Get the attribute reading part of rclone to encode the attributes in the same way getfattr does so that they become printable values (see the sketch after this list). This is probably a reasonable idea, modulo what happens if someone has attributes starting with 0s. Maybe it needs to be an option.
  3. Drop the binary attributes with a log message (in the s3 backend maybe).
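
A minimal sketch of option 2, using a hypothetical helper that mimics the setfattr/getfattr text-vs-0s-base64 convention:

package main

import (
    "encoding/base64"
    "fmt"
)

// encodeXattrValue keeps printable ASCII values as-is and encodes anything
// else the way getfattr does, with a "0s" prefix and base64. Hypothetical
// helper; note the ambiguity mentioned above: a genuine text value that
// happens to start with "0s" would be indistinguishable from an encoded one.
func encodeXattrValue(v []byte) string {
    for _, b := range v {
        if b < 0x20 || b > 0x7e {
            return "0s" + base64.StdEncoding.EncodeToString(v)
        }
    }
    return string(v)
}

func main() {
    fmt.Println(encodeXattrValue([]byte("sometextvalue")))  // sometextvalue
    fmt.Println(encodeXattrValue([]byte{0x02, 0x04, 0x90})) // 0sAgSQ
}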

This needs more research, but I'd like your input as to which solution you think is best.

Yes, I would have thought the SDK would confirm/validate its data itself.

Fundamentally though, I think we'll have to concede that there will be cases where there is no exact 1-to-1 mapping of local metadata to remote metadata, and that not everything can be shoehorned. In this particular case, option #2 provides a workaround for now, but I think #3 may be a better long-term approach to non-compliant metadata.

As an example, we know that AWS S3 has a 2KB limit for the sum of all user-supplied metadata in the header, yet locally a single xattr value can happily be up to 64KB. In this scenario, I don't see any possibility aside from dropping the attribute and logging a warning. The worst outcome for me would be skipping the file altogether; I'd much rather lose a bit of metadata detail than lose the data itself.
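
(For reference, AWS measures that 2KB limit as the sum of the UTF-8 byte lengths of each user-defined key and value; a trivial sketch of the arithmetic:)

package main

import "fmt"

// userMetadataSize computes the size AWS counts against the 2 KB
// user-defined metadata limit: the sum of the UTF-8 byte lengths of
// each key and each value.
func userMetadataSize(m map[string]string) int {
    n := 0
    for k, v := range m {
        n += len(k) + len(v) // len() on a Go string is its byte length
    }
    return n
}

func main() {
    m := map[string]string{"mtime": "1665108755.830135", "uid": "0"}
    fmt.Println(userMetadataSize(m), "of 2048 bytes used")
}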

So basically, my vote would be for #3 as a long term approach for this and future metadata non-portability situations.

I agree with your philosophy of getting the data backed up and the metadata if possible.

How would you want to configure dropping binary attributes?

This could be

  • automatically detected by the s3 backend and dropped with a warning (sketched after this list)
  • configured explicitly to say --metadata-delete attribute-name
  • configured with --metadata-no-binary or similar

We don't currently do anything about the S3 size limits for metadata. We probably should, but S3 will give an error if they are exceeded.
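
For concreteness, a sketch of the automatic variant (hypothetical helper, reusing the same validity rule net/http enforces):

package main

import (
    "fmt"

    "golang.org/x/net/http/httpguts"
)

// filterMetadata drops any metadata value that net/http would reject as a
// header value, logging a warning instead of failing the whole upload.
// Hypothetical helper, not actual rclone code.
func filterMetadata(m map[string]string) map[string]string {
    out := make(map[string]string, len(m))
    for k, v := range m {
        if !httpguts.ValidHeaderFieldValue(v) {
            fmt.Printf("Dropping invalid metadata value %q for key %q\n", v, k)
            continue
        }
        out[k] = v
    }
    return out
}

func main() {
    m := map[string]string{"myattr": "\x02\x04\x90binary", "uid": "0"}
    fmt.Println(filterMetadata(m)) // map[uid:0], after logging the drop
}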

I think handling it in the backend would be the ideal, since every backend provider may have its own quirks that need handling. In this case I would drop with warning.

I had a go at this here - please give it a go!

v1.60.0-beta.6481.5d2c36222.fix-s3-metadata on branch fix-s3-metadata (uploaded in 15-30 mins)

My tests show the fix works as intended: it does indeed drop the offending metadata while still uploading the object and the other metadata.

#TEST CASE

[root@almatest xattrtest]# echo hello > test1.txt
[root@almatest xattrtest]# echo hello > test2.txt
[root@almatest xattrtest]# echo hello > test3.txt
[root@almatest xattrtest]# setfattr -n user.myattr -v 0sAgSQCwAAAAABwdaSUAAA6CCTUAAA6CCTUAAApBuTUAABwdaSUAABiiOTUAABiSOTUAABFQaTUAABJwSTUAAB6wGTUAAC/////w== test1.txt
[root@almatest xattrtest]# setfattr -n user.myattr -v sometextvalue test2.txt

#ORIGINAL VERSION


[root@almatest zack.ramjan]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone -vv copyto -M  xattrtest aws:vai-gggg-backup/xattrtest
2022/10/12 13:57:20 DEBUG : rclone: Version "v1.60.0-beta.6462.7e547822d" starting with parameters ["/varidata/research/software/rclone/rclone-v1.60.0-beta.6462.7e547822d-linux-amd64/rclone" "-vv" "copyto" "-M" "xattrtest" "aws:vai-gggg-backup/xattrtest"]
2022/10/12 13:57:20 DEBUG : Creating backend with remote "xattrtest"
2022/10/12 13:57:20 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/10/12 13:57:20 DEBUG : fs cache: renaming cache item "xattrtest" to be canonical "/varidata/researchtemp/hpctmp/zack.ramjan/xattrtest"
2022/10/12 13:57:20 DEBUG : Creating backend with remote "aws:vai-gggg-backup/xattrtest"
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for checks to finish
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for transfers to finish
2022/10/12 13:57:20 ERROR : test1.txt: Failed to copy: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 DEBUG : test2.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/12 13:57:20 INFO  : test2.txt: Copied (new)
2022/10/12 13:57:20 DEBUG : test3.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/12 13:57:20 INFO  : test3.txt: Copied (new)
2022/10/12 13:57:20 ERROR : Attempt 1/3 failed with 1 errors and: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for checks to finish
2022/10/12 13:57:20 ERROR : test1.txt: Failed to copy: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 DEBUG : test3.txt: Size and modification time the same (differ by 0s, within tolerance 1ns)
2022/10/12 13:57:20 DEBUG : test3.txt: Unchanged skipping
2022/10/12 13:57:20 DEBUG : test2.txt: Size and modification time the same (differ by 0s, within tolerance 1ns)
2022/10/12 13:57:20 DEBUG : test2.txt: Unchanged skipping
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for transfers to finish
2022/10/12 13:57:20 ERROR : Attempt 2/3 failed with 1 errors and: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for checks to finish
2022/10/12 13:57:20 ERROR : test1.txt: Failed to copy: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 DEBUG : test2.txt: Size and modification time the same (differ by 0s, within tolerance 1ns)
2022/10/12 13:57:20 DEBUG : test2.txt: Unchanged skipping
2022/10/12 13:57:20 DEBUG : test3.txt: Size and modification time the same (differ by 0s, within tolerance 1ns)
2022/10/12 13:57:20 DEBUG : test3.txt: Unchanged skipping
2022/10/12 13:57:20 DEBUG : S3 bucket vai-gggg-backup path xattrtest: Waiting for transfers to finish
2022/10/12 13:57:20 ERROR : Attempt 3/3 failed with 1 errors and: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"
2022/10/12 13:57:20 INFO  :
Transferred:   	         12 B / 12 B, 100%, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            2 / 2, 100%
Elapsed time:         0.5s

2022/10/12 13:57:20 DEBUG : 7 go routines active
2022/10/12 13:57:20 Failed to copyto: Put "https://vai-gggg-backup.s3.us-east-2.amazonaws.com/xattrtest/test1.txt": net/http: invalid header field value for "X-Amz-Meta-Myattr"


#METADATA XATTR FIXED VERSION

[root@almatest zack.ramjan]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone -vv copyto -M  xattrtest aws:vai-gggg-backup/xattrtest2
2022/10/12 14:00:10 DEBUG : rclone: Version "v1.60.0-beta.6481.5d2c36222.fix-s3-metadata" starting with parameters ["/varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone" "-vv" "copyto" "-M" "xattrtest" "aws:vai-gggg-backup/xattrtest2"]
2022/10/12 14:00:10 DEBUG : Creating backend with remote "xattrtest"
2022/10/12 14:00:10 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/10/12 14:00:10 DEBUG : fs cache: renaming cache item "xattrtest" to be canonical "/varidata/researchtemp/hpctmp/zack.ramjan/xattrtest"
2022/10/12 14:00:10 DEBUG : Creating backend with remote "aws:vai-gggg-backup/xattrtest2"
2022/10/12 14:00:10 DEBUG : S3 bucket vai-gggg-backup path xattrtest2: Waiting for checks to finish
2022/10/12 14:00:10 DEBUG : S3 bucket vai-gggg-backup path xattrtest2: Waiting for transfers to finish
2022/10/12 14:00:10 ERROR : test1.txt: Dropping invalid metadata value "\x02\x04\x90\v\x00\x00\x00\x00\x01\xc1֒P\x00\x00\xe8 \x93P\x00\x00\xe8 \x93P\x00\x00\xa4\x1b\x93P\x00\x01\xc1֒P\x00\x01\x8a#\x93P\x00\x01\x89#\x93P\x00\x01\x15\x06\x93P\x00\x01'\x04\x93P\x00\x01\xeb\x01\x93P\x00\x02\xff\xff\xff\xff" for key "myattr"
2022/10/12 14:00:10 DEBUG : test1.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/12 14:00:10 INFO  : test1.txt: Copied (new)
2022/10/12 14:00:10 DEBUG : test3.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/12 14:00:10 INFO  : test3.txt: Copied (new)
2022/10/12 14:00:10 DEBUG : test2.txt: md5 = b1946ac92492d2347c6235b4d2611184 OK
2022/10/12 14:00:10 INFO  : test2.txt: Copied (new)
2022/10/12 14:00:10 INFO  :
Transferred:   	         18 B / 18 B, 100%, 0 B/s, ETA -
Transferred:            3 / 3, 100%
Elapsed time:         0.3s

2022/10/12 14:00:10 DEBUG : 9 go routines active



#VALIDATION


[root@almatest zack.ramjan]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone -vv lsf --format M aws:vai-gggg-backup/xattrtest
2022/10/12 14:03:41 DEBUG : rclone: Version "v1.60.0-beta.6481.5d2c36222.fix-s3-metadata" starting with parameters ["/varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone" "-vv" "lsf" "--format" "M" "aws:vai-gggg-backup/xattrtest"]
2022/10/12 14:03:41 DEBUG : Creating backend with remote "aws:vai-gggg-backup/xattrtest"
2022/10/12 14:03:41 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
{"atime":"2022-10-12T13:36:33.563051-04:00","btime":"2022-10-12T17:57:21Z","content-type":"text/plain; charset=utf-8","gid":"0","mode":"100644","mtime":"2022-10-12T13:36:33.56321-04:00","myattr":"sometextvalue","uid":"0"}
{"atime":"2022-10-12T13:37:27.260352-04:00","btime":"2022-10-12T17:57:21Z","content-type":"text/plain; charset=utf-8","gid":"0","mode":"100644","mtime":"2022-10-12T13:37:27.260641-04:00","uid":"0"}
2022/10/12 14:03:42 DEBUG : 4 go routines active
[root@almatest zack.ramjan]# /varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone -vv lsf --format M aws:vai-gggg-backup/xattrtest2
2022/10/12 14:03:47 DEBUG : rclone: Version "v1.60.0-beta.6481.5d2c36222.fix-s3-metadata" starting with parameters ["/varidata/research/software/rclone/rclone-v1.60.0-beta.6481.5d2c36222.fix-s3-metadata-linux-amd64/rclone" "-vv" "lsf" "--format" "M" "aws:vai-gggg-backup/xattrtest2"]
2022/10/12 14:03:47 DEBUG : Creating backend with remote "aws:vai-gggg-backup/xattrtest2"
2022/10/12 14:03:47 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
{"atime":"2022-10-12T13:36:27.289772-04:00","btime":"2022-10-12T18:00:11Z","content-type":"text/plain; charset=utf-8","gid":"0","mode":"100644","mtime":"2022-10-12T13:36:27.289941-04:00","uid":"0"}
{"atime":"2022-10-12T13:36:33.563051-04:00","btime":"2022-10-12T18:00:11Z","content-type":"text/plain; charset=utf-8","gid":"0","mode":"100644","mtime":"2022-10-12T13:36:33.56321-04:00","myattr":"sometextvalue","uid":"0"}
{"atime":"2022-10-12T13:37:27.260352-04:00","btime":"2022-10-12T18:00:11Z","content-type":"text/plain; charset=utf-8","gid":"0","mode":"100644","mtime":"2022-10-12T13:37:27.260641-04:00","uid":"0"}
2022/10/12 14:03:47 DEBUG : 4 go routines active

Nice! Thank you for testing.

I've merged this to master now, which means it will be in the latest beta in 15-30 minutes and released in v1.60.
