Unable to upload files to Terrahost's S3 service using rclone

What is the problem you are having with rclone?

Hi.

I am unable to upload files to Terrahost's S3 service using rclone.

rclone -vv lsd ths3: works:

Terminal output
2023/07/20 13:05:56 DEBUG : rclone: Version "v1.63.0" starting with parameters ["rclone" "-vv" "lsd" "ths3:"]
2023/07/20 13:05:56 DEBUG : Creating backend with remote "ths3:"
2023/07/20 13:05:56 DEBUG : Using config file from "/home/kartik/.config/rclone/rclone.conf"
2023/07/20 13:05:56 DEBUG : name = "ths3", root = "", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"accesskeyidaccesskeyid", SecretAccessKey:"secretaccesskeysecretaccesskey", Region:"no-south-1", Endpoint:"s3.terrahost.no", STSEndpoint:"", LocationConstraint:"", ACL:"", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/20 13:05:56 DEBUG : Resolving service "s3" region "no-south-1"
          -1 2023-07-19 12:29:56        -1 test-bucket
2023/07/20 13:05:57 DEBUG : 6 go routines active

But I cannot copy a file to it using rclone -vv copyto s3/test.txt ths3:test-bucket/test.txt. I have pasted the log under the appropriate subheading below.

The strange thing is that Terrahost's S3 service seems pretty barebones: I don't see any access-control options in the UI. Yet lsd works while copy and copyto don't, which makes me think I am not configuring the endpoint properly in rclone, but I can't see what I am doing wrong.
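In case it helps anyone spot the problem, rclone can dump the raw HTTP exchange for the failing request. This is just a debugging sketch (I have not included its output here); note that --dump headers prints the Authorization header, so redact credentials before posting:

rclone -vv --dump headers --retries 1 copyto s3/test.txt ths3:test-bucket/test.txt

The --retries 1 flag stops rclone from repeating the failing transfer three times.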

I have also opened a support ticket with Terrahost but have yet to receive a response from them. Meanwhile, I thought I would ask here as well just in case someone is able to spot what I am doing wrong.

Any advice is appreciated. Thanks very much for your time.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.0
- os/version: void (64 bit)
- os/kernel: 6.3.12_1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.5
- go/linking: dynamic
- go/tags: noselfupdate

I also tried updating to v1.63.1, but I still get the same error.

Which cloud storage system are you using? (eg Google Drive)

Terrahost's S3 Object Storage

The rclone config contents with secrets removed.

Upon creating a bucket, I received an email from Terrahost containing this information:

Hostname: s3.terrahost.no
Port: 443
Region: no-south-1

Bucket: test-bucket
Access Key: accesskeyidaccesskeyid
Secret Key: secretaccesskeysecretaccesskey
Use Path Style Endpoint: Yes

Using this information, I created a config as follows:

[ths3]
type = s3
provider = Other
access_key_id = accesskeyidaccesskeyid
secret_access_key = secretaccesskeysecretaccesskey
endpoint = s3.terrahost.no
region = no-south-1
force_path_style = true
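For context on that last option: force_path_style = true makes rclone put the bucket name in the URL path rather than in the hostname, matching the "Use Path Style Endpoint: Yes" line from the email. The two addressing styles look like this (illustrative URLs, not from an actual capture):

Path style:           https://s3.terrahost.no/test-bucket/test.txt
Virtual-hosted style: https://test-bucket.s3.terrahost.no/test.txt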

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv copyto s3/test.txt ths3:test-bucket/test.txt

A log from the command with the -vv flag

-vv output
2023/07/20 13:07:30 DEBUG : rclone: Version "v1.63.0" starting with parameters ["rclone" "-vv" "copyto" "s3/test.txt" "ths3:test-bucket/test.txt"]
2023/07/20 13:07:30 DEBUG : Creating backend with remote "s3/test.txt"
2023/07/20 13:07:30 DEBUG : Using config file from "/home/kartik/.config/rclone/rclone.conf"
2023/07/20 13:07:30 DEBUG : fs cache: adding new entry for parent of "s3/test.txt", "/home/kartik/s3"
2023/07/20 13:07:30 DEBUG : Creating backend with remote "ths3:test-bucket/"
2023/07/20 13:07:30 DEBUG : name = "ths3", root = "test-bucket/", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"accesskeyidaccesskeyid", SecretAccessKey:"secretaccesskeysecretaccesskey", Region:"no-south-1", Endpoint:"s3.terrahost.no", STSEndpoint:"", LocationConstraint:"", ACL:"", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/20 13:07:30 DEBUG : Resolving service "s3" region "no-south-1"
2023/07/20 13:07:30 DEBUG : fs cache: renaming cache item "ths3:test-bucket/" to be canonical "ths3:test-bucket"
2023/07/20 13:07:31 DEBUG : test.txt: Need to transfer - File not found at Destination
2023/07/20 13:07:31 ERROR : test.txt: Failed to copy: AccessDenied: Access Denied.
        status code: 403, request id: 1773834CE6E1851E, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:31 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied.
        status code: 403, request id: 1773834CE6E1851E, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:31 DEBUG : test.txt: Need to transfer - File not found at Destination
2023/07/20 13:07:31 ERROR : test.txt: Failed to copy: AccessDenied: Access Denied.
        status code: 403, request id: 1773834CFDB5A735, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:31 ERROR : Attempt 2/3 failed with 1 errors and: AccessDenied: Access Denied.
        status code: 403, request id: 1773834CFDB5A735, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:32 DEBUG : test.txt: Need to transfer - File not found at Destination
2023/07/20 13:07:32 ERROR : test.txt: Failed to copy: AccessDenied: Access Denied.
        status code: 403, request id: 1773834D1497BBAD, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:32 ERROR : Attempt 3/3 failed with 1 errors and: AccessDenied: Access Denied.
        status code: 403, request id: 1773834D1497BBAD, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2023/07/20 13:07:32 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         1.5s

2023/07/20 13:07:32 DEBUG : 7 go routines active
2023/07/20 13:07:32 Failed to copyto: AccessDenied: Access Denied.
        status code: 403, request id: 1773834D1497BBAD, host id: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855

At first glance this looks like a bucket permission problem: you have read access only.

I am certain it's not that, because the administration dashboard does not offer any access control. You only get one access key ID/secret key pair per bucket, and the only available option is to reset the secret key.

I have also been trying to use the bucket as an image store for something called pict-rs, and I can both upload and download images through it. It also looks like rclone ls is not giving the correct output: it shows the bucket as empty even though there are files on it.
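To rule out an rclone quirk, the same listing could be done with a second client. Here is a sketch using the AWS CLI, assuming it is configured with the same access key and secret (I have not actually run this against Terrahost):

aws --endpoint-url https://s3.terrahost.no --region no-south-1 s3 ls s3://test-bucket

If that also shows an empty bucket, the problem is on the server side rather than in rclone.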

This is the correct config for Terrahost:

[terrahost]
type = s3
provider = Minio
access_key_id = redacted
secret_access_key = redacted
endpoint = https://s3.terrahost.no
region = no-south-1
force_path_style = true
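The notable changes from the config in the question are provider = Minio, which tells rclone to apply its Minio-specific handling, and the explicit https:// scheme on the endpoint. A quick sanity check with this config could look like the following (a sketch; the remote name terrahost and the local path are just examples):

rclone -vv lsd terrahost:
rclone -vv copyto /tmp/test.txt terrahost:test-bucket/test.txt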

As for the 403 status code, this is from tech support; it looks like their entire S3 storage is down:
"ok, this may be due to an issue that's currently being investigated, you will be notified by ticket once it has been resolved"
"we're currently having an issue with s3 and you will notified once it's working."

Off topic: I like to check out new S3 providers, and so far this is the worst one I have ever tried to use. So I am curious: why use Terrahost?

--- Very expensive: "Minimum monthly charge across all buckets is $11.5 per month (500GB)."
--- Almost zero feature set: no IAM/bucket policies, no SSE, no session tokens, and so on.
--- Little to no documentation.
--- And maybe worst of all, they send secret credentials as plain-text emails.

Well, we wanted to try it because Terrahost is also our VPS provider. We didn't know it was this bad. We are using a different S3 provider right now (swissmade).

We got a response to our support ticket too. The rep says something is wrong on their end.

The problem turned out to be something completely unrelated to rclone: there is a problem with Terrahost's S3 backend, which Terrahost has acknowledged in a support ticket.

Thanks to everyone who responded. Sorry for wasting your time.


Did Terrahost fix the problem?

I honestly don't know, sorry. I cancelled the bucket and closed the ticket since I needed a working bucket ASAP, and I have moved to a different provider.
