Vultr S3 fails when crypt is selected

What is the problem you are having with rclone?

I have created a crypt config over a Vultr S3 remote.
Syncing to the plain remote works, but when I try to sync to the encrypted remote it gives me AccessDenied.

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.1
- os/version: linuxmint 21.1 (64 bit)
- os/kernel: 5.19.0-32-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.6
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

vultr s3 object storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync /tmp/teste/ vultrc:

The rclone config contents with secrets removed.

[vultr]
type = s3
provider = Other
access_key_id = 
secret_access_key = 
endpoint = 
acl = private

[vultrc]
type = crypt
remote = vultr:backup
password = 
password2 = 
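
For the record, the same two remotes can also be created non-interactively with `rclone config create`. A minimal sketch with placeholder values (the keys, endpoint, and passwords below are stand-ins, not the real ones):

```shell
# Base S3 remote; replace the placeholder credentials and endpoint
rclone config create vultr s3 \
    provider=Other \
    access_key_id=YOUR_KEY \
    secret_access_key=YOUR_SECRET \
    endpoint=sjc1.vultrobjects.com \
    acl=private

# Crypt remote layered on top; --obscure tells rclone the passwords
# are given in plain text and should be obscured before storing
rclone config create vultrc crypt \
    remote=vultr:backup \
    password=your-password \
    password2=your-salt \
    --obscure
```

You can verify the result with `rclone config show vultrc`.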

A log from the command with the -vv flag

Here

What about these commands, to test that the bucket exists and that you have permission to copy a file?

  • rclone mkdir vultr:backup -vv
  • rclone copy file.txt vultr:backup -vv
    note: file.txt is a plain text file; change the name to match your system
  • rclone ls vultr:backup -vv
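
If it helps, the three checks can be chained in a short script that stops at the first failure, so the failing layer is obvious. A minimal sketch using the remote name from this thread (adjust `file.txt` to a real local file):

```shell
#!/bin/sh
# Run the three checks in order; stop at the first one that fails.
REMOTE="vultr:backup"

check() {
    echo "== rclone $* =="
    if rclone "$@" -vv; then
        echo "OK: rclone $*"
    else
        echo "FAILED: rclone $*"
        return 1
    fi
}

check mkdir "$REMOTE" &&
    check copy file.txt "$REMOTE" &&
    check ls "$REMOTE" ||
    echo "stopped at first failure"
```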
# rclone mkdir vultr:backup -vv
2023/07/26 22:23:25 DEBUG : rclone: Version "v1.63.1" starting with parameters ["rclone" "mkdir" "vultr:backup" "-vv"]
2023/07/26 22:23:25 DEBUG : Creating backend with remote "vultr:backup"
2023/07/26 22:23:25 DEBUG : Using config file from "/home/salatiel/.config/rclone/rclone.conf"
2023/07/26 22:23:25 DEBUG : name = "vultr", root = "backup", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"XXX", SecretAccessKey:"XXX", Region:"", Endpoint:"sjc1.vultrobjects.com", STSEndpoint:"", LocationConstraint:"", ACL:"private", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/26 22:23:25 DEBUG : Resolving service "s3" region "us-east-1"
2023/07/26 22:23:25 DEBUG : S3 bucket backup: Making directory
2023/07/26 22:23:26 DEBUG : 5 go routines active
# 

After the first command, no bucket was created.

# rclone copy /tmp/data/file.txt  vultr:backup -vv
2023/07/26 22:25:22 DEBUG : rclone: Version "v1.63.1" starting with parameters ["rclone" "copy" "/tmp/data/file.txt" "vultr:backup" "-vv"]
2023/07/26 22:25:22 DEBUG : Creating backend with remote "/tmp/data/file.txt"
2023/07/26 22:25:22 DEBUG : Using config file from "/home/eu/.config/rclone/rclone.conf"
2023/07/26 22:25:22 DEBUG : fs cache: adding new entry for parent of "/tmp/data/file.txt", "/tmp/data"
2023/07/26 22:25:22 DEBUG : Creating backend with remote "vultr:backup"
2023/07/26 22:25:22 DEBUG : name = "vultr", root = "backup", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"", SecretAccessKey:"", Region:"", Endpoint:"sjc1.vultrobjects.com", STSEndpoint:"", LocationConstraint:"", ACL:"private", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/26 22:25:22 DEBUG : Resolving service "s3" region "us-east-1"
2023/07/26 22:25:22 ERROR : Attempt 1/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: tx000009ef8572b0866e3ae-0064c1c782-13ebd6fe-sjc1, host id: 
2023/07/26 22:25:22 ERROR : Attempt 2/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: tx000006906f7c6ecd0287f-0064c1c782-13f7a5c0-sjc1, host id: 
2023/07/26 22:25:23 ERROR : Attempt 3/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: tx000000a4869c0070c1db2-0064c1c783-13ebd6fe-sjc1, host id: 
2023/07/26 22:25:23 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.9s

2023/07/26 22:25:23 DEBUG : 5 go routines active
2023/07/26 22:25:23 Failed to copy: Forbidden: Forbidden
	status code: 403, request id: tx000000a4869c0070c1db2-0064c1c783-13ebd6fe-sjc1, host id: 
# 
# rclone ls vultr:backup -vv
2023/07/26 22:25:56 DEBUG : rclone: Version "v1.63.1" starting with parameters ["rclone" "ls" "vultr:backup" "-vv"]
2023/07/26 22:25:56 DEBUG : Creating backend with remote "vultr:backup"
2023/07/26 22:25:56 DEBUG : Using config file from "/home/eu/.config/rclone/rclone.conf"
2023/07/26 22:25:56 DEBUG : name = "vultr", root = "backup", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"", SecretAccessKey:"", Region:"", Endpoint:"sjc1.vultrobjects.com", STSEndpoint:"", LocationConstraint:"", ACL:"private", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/26 22:25:56 DEBUG : Resolving service "s3" region "us-east-1"
2023/07/26 22:25:57 DEBUG : 5 go routines active
2023/07/26 22:25:57 Failed to ls: AccessDenied: 
	status code: 403, request id: tx00000bc9dcef1468634f8-0064c1c7a5-13f4a24a-sjc1, host id: 

My guess would be that the bucket named backup is owned by someone else. S3 bucket names live in a global namespace, so you need to pick a name that no one else is using. I don't know whether this is true for Vultr or not, but it would be worth trying.
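
One way to sidestep name collisions is to append a unique suffix locally before creating the bucket. A small sketch (the `backup-` prefix and suffix scheme are just one possible convention):

```shell
# S3 bucket names share a global namespace, so collisions with other
# accounts' buckets are possible. Build a unique suffix from today's
# date plus 4 random bytes rendered as hex.
SUFFIX="$(date +%Y%m%d)-$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
BUCKET="backup-${SUFFIX}"
echo "$BUCKET"
# then recreate the bucket and point the crypt remote at it, e.g.:
#   rclone mkdir "vultr:${BUCKET}" -vv
```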

@ncw Nice catch! I had completely forgotten that the namespace is global.
It worked, thanks!


I have made this mistake more than once despite being an "s3 expert" :wink:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.