I set up rclone on a headless CentOS 7 box to sync to S3-compatible storage. Running rclone sync with the -vv option, I noticed that most of the XML copying is failing with a SignatureDoesNotMatch error (see below)!
Any recommendations?
2018/12/27 14:03:41 INFO : 1080i 59.94/Mac Pro 4/WaveformCache/WaveformCache_262144x.awf: Copied (new)
2018/12/27 14:03:41 INFO : 1080i 59.94/NBCZ800MC892/WaveformCache/WaveformCache_65536x.awf: Copied (new)
2018/12/27 14:03:41 INFO : 1080i 59.94/Mac Pro 4/WaveformCache/WaveformCache_65536x.awf: Copied (new)
2018/12/27 14:03:41 INFO : 1080i 59.94/SG-Z840-BASE/SearchData/SearchDB: Copied (new)
2018/12/27 14:03:41 ERROR : 1080i 59.94/Z440-IMAGE/1080i 59.94 Settings.xml: Failed to copy: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details.
status code: 403, request id: be7e97cd-e163-15e9-8561-d8c49756f210, host id:
2018/12/27 14:03:41 INFO : 1080i 59.94/PROTOOLS001/WaveformCache/WaveformCache_16384x.awf: Copied (new)
That is odd indeed! v2 vs v4 auth can cause this problem but you’ve tried that.
You could also try --s3-force-path-style which may help.
I note there are spaces in the XML file names, which might also be the problem, so it might be worth uploading some other files with spaces in their names, e.g. “hello world.txt”.
I suspect it might be a bug in Cloudian (rclone has found a whole raft of bugs in s3 compatible interfaces!).
Can you make a log with -vv --dump bodies uploading a very small XML file - that would be very interesting and might shed some light on matters.
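For reference, both of those settings can also live in the remote’s section of rclone.conf instead of on the command line - a sketch assuming a Cloudian endpoint (the endpoint and keys below are placeholders, and the option names are from the S3 backend docs, so they may vary by rclone version):

```ini
[cloudian]
type = s3
provider = Other
access_key_id = XXXXXXXXXXXX
secret_access_key = XXXXXXXXXXXX
endpoint = https://s3.cloudian.example.com
# same as --s3-force-path-style: bucket goes in the path, not the hostname
force_path_style = true
# fall back to v2 signatures if the gateway rejects v4 ones
v2_auth = true
```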
Hi ncw
Forcing the path style didn’t help either!
It doesn’t have an issue copying files with spaces. I tried syncing/copying files with spaces, and all the non-XML files with spaces in their names copy with no issues.
I created a log file with -vv --dump bodies for more details on the issue.
I used S3 Browser and it uploaded with no issues!
I’ll enable debugging on Cloudian for more logs, but I created a small XML file and rclone failed to copy it as well!!
Hi ncw
After getting the debugger going on the Cloudian storage, I believe I know what’s causing the issue: rclone is specifying the character set, which is causing the authentication mismatch failures. I tried the same XML file with S3 Browser and s3cmd, and both uploads were successful; the logs didn’t show a character-set parameter.
I renamed the same XML file to an unknown type (.bk) and rclone uploaded it with no issues (the logs didn’t show a character set).
I’m not sure if this is correct S3 behavior, nor do I know if Amazon supports it, but Cloudian is modeled completely on Amazon (what works for Amazon works for Cloudian).
I started a case with Cloudian, but are there any flags in rclone to prevent it from setting the character set?
Text types have the charset parameter set to “utf-8” by default.
I made a version of rclone which strips “; charset=utf-8” from mime types. This isn’t something I want to merge, but you can give it a go to see if it fixes the problem!
My suspicion is that it’s either the space or the ; that Cloudian doesn’t like - I suspect it expects them URL-encoded, whereas rclone sends them in the clear (which is correct, I think).
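To see why a disagreement over the Content-Type value surfaces as SignatureDoesNotMatch rather than a clearer error, here is a toy sketch of the idea behind v4 signing (deliberately simplified - this is not rclone’s or AWS’s real signing code): the Content-Type header is part of the signed canonical request, so if the client signs “text/xml; charset=utf-8” but the server rebuilds the canonical request from a stripped value, the two HMACs can never agree.

```python
import hashlib
import hmac

def sign_request(secret_key: str, content_type: str) -> str:
    # Toy stand-in for SigV4: HMAC-SHA256 over a canonical request
    # that includes the content-type header. Real SigV4 has several
    # more key-derivation steps, but the principle is the same.
    canonical_request = "\n".join([
        "PUT",
        "/projects-bkup/1080i 59.94 Settings.xml",
        "content-type:" + content_type,
        "host:s3.example.com",
    ])
    return hmac.new(secret_key.encode(), canonical_request.encode(),
                    hashlib.sha256).hexdigest()

secret = "SECRETKEY"  # placeholder, not a real credential
client_sig = sign_request(secret, "text/xml; charset=utf-8")  # what rclone signs
server_sig = sign_request(secret, "text/xml")  # a server that drops the charset
print(client_sig == server_sig)  # -> False, i.e. SignatureDoesNotMatch
```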
I’m getting a 404 Not Found error on every upload (XML and non-XML)!! Looking at the --dump bodies output, the PUT request carries the auth options as query parameters, not headers!!
PUT /projects-bkup/SG05-CS_PROJECTS/Avid%20User%20Settings/Mick%208.6%20Settings.xml?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=009975676cca1515ffbb%2F20190108%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20190108T205247Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=f4a2ab1cf538cf980c67a0f1f166d6aa146126d765f882b5806deca91ae01516 HTTP/1.1
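For what it’s worth, all the X-Amz-* fields in that request are query-string (pre-signed) authentication parameters rather than an Authorization header. Decoding the query string of the URL (abridged from the log above) with just the standard library makes that easy to see:

```python
from urllib.parse import parse_qs, urlsplit

# Abridged from the PUT request in the log above.
url = ("/projects-bkup/SG05-CS_PROJECTS/Avid%20User%20Settings/"
       "Mick%208.6%20Settings.xml"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256"
       "&X-Amz-Expires=900"
       "&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost")

# parse_qs percent-decodes the values, revealing the signed header list
query = parse_qs(urlsplit(url).query)
print(query["X-Amz-Algorithm"])      # ['AWS4-HMAC-SHA256']
print(query["X-Amz-SignedHeaders"])  # ['content-md5;content-type;host']
```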
Thanks that worked. I really appreciate it.
The --s3-upload-cutoff 0 option isn’t documented - I figured you had changed something and looked for it, but couldn’t find any documentation about it.
Do you think you can roll the MIME-type fix into an official build?
Great. It is a slight worry that you needed to use it, though. I’ve changed the single-part upload method to be more efficient, which works well with S3/DigitalOcean/Ceph/Minio but maybe doesn’t work for all S3 “compatible” solutions.
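If anyone else hits this: upload_cutoff is the threshold above which rclone switches to multipart uploads, so setting it to 0 sends everything down the multipart path and avoids the new single-part PUT entirely. As a config-file sketch (same caveat as before - option names may vary by rclone version):

```ini
[cloudian]
type = s3
# 0 = every upload uses the multipart path, bypassing the single-part PUT
upload_cutoff = 0
```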