Cannot Read Anything from DigitalOcean (But Can Write)

What is the problem you are having with rclone?

I can create files and folders on a DigitalOcean Space, but I cannot read files (e.g. ls and tree always come up blank).

Here is how the issue looks. Uploading local files to a new folder on a remote:

me@mycomputer ~ % rclone copy /Users/me/Documents/Projects/MyProject DigitalOceanSpace:test_folder -vv
2020/07/15 07:01:54 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rclone" "copy" "/Users/me/Documents/Projects/MyProject" "DigitalOceanSpace:test_folder" "-vv"]
2020/07/15 07:01:54 DEBUG : Using config file from "/Users/me/.config/rclone/rclone.conf"
2020/07/15 07:01:54 DEBUG : S3 bucket test_folder: Waiting for checks to finish
2020/07/15 07:01:54 DEBUG : S3 bucket test_folder: Waiting for transfers to finish
2020/07/15 07:01:55 INFO  : S3 bucket test_folder: Bucket "test_folder" created with ACL "public-read"
2020/07/15 07:01:56 DEBUG : pythonFile.py: MD5 = 323c6c95a136cd7d6aca4d58bfae7b14 OK
2020/07/15 07:01:56 INFO  : pythonFile.py: Copied (new)
2020/07/15 07:01:56 DEBUG : output.apkg: MD5 = eb0b1b98f2285d76b782fc50b7fb90a8 OK
2020/07/15 07:01:56 INFO  : output.apkg: Copied (new)
2020/07/15 07:01:56 DEBUG : .DS_Store: MD5 = 194577a7e20bdcc7afbb718f502c134c OK
2020/07/15 07:01:56 INFO  : .DS_Store: Copied (new)
2020/07/15 07:01:56 INFO  :
Transferred:   	  602.215k / 602.215 kBytes, 100%, 287.788 kBytes/s, ETA 0s
Transferred:            3 / 3, 100%
Elapsed time:         2.0s

2020/07/15 07:01:56 DEBUG : 10 go routines active

Then when I try to read it:

me@mycomputer ~ % rclone tree DigitalOceanSpace:test_folder -vv
2020/07/15 07:02:11 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rclone" "tree" "DigitalOceanSpace:test_folder" "-vv"]
2020/07/15 07:02:11 DEBUG : Using config file from "/Users/me/.config/rclone/rclone.conf"
2020/07/15 07:02:12 DEBUG : Stat: filePath="/"
2020/07/15 07:02:12 DEBUG : >Stat: fi=, err=<nil>
2020/07/15 07:02:12 DEBUG : ReadDir: dir=/
2020/07/15 07:02:12 DEBUG : >ReadDir: names=[], err=<nil>
/

0 directories, 0 files
2020/07/15 07:02:12 DEBUG : 6 go routines active
me@mycomputer ~ % rclone tree DigitalOceanSpace: -vv
2020/07/15 07:07:13 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rclone" "tree" "DigitalOceanSpace:" "-vv"]
2020/07/15 07:07:13 DEBUG : Using config file from "/Users/me/.config/rclone/rclone.conf"
2020/07/15 07:07:13 DEBUG : Stat: filePath="/"
2020/07/15 07:07:13 DEBUG : >Stat: fi=, err=<nil>
2020/07/15 07:07:13 DEBUG : ReadDir: dir=/
2020/07/15 07:07:13 DEBUG : >ReadDir: names=[], err=<nil>
/

0 directories, 0 files
2020/07/15 07:07:13 DEBUG : 6 go routines active

What is your rclone version (output from rclone version)

me@mycomputer ~ % rclone version
rclone v1.52.2
- os/arch: darwin/amd64
- go version: go1.14.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

macOS 10.15.5, 64 bit

Which cloud storage system are you using? (eg Google Drive)

DigitalOcean Spaces

The command you were trying to run (eg rclone copy /tmp remote:tmp)

[detailed above]

The rclone config contents with secrets removed.

[DigitalOceanSpace]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = MY_ACCESS_KEY
secret_access_key = MY_SECRET_KEY
endpoint = [MY_SUBDOMAIN].sfo2.digitaloceanspaces.com
acl = public-read
region = sfo2

I've also tried this with acl = private in my config.

A log from the command with the -vv flag

[detailed above]

Are you using a lower-privileged user to do this? Do they have list-bucket permissions?

Can you do

rclone ls -vv --dump bodies DigitalOceanSpace:test_folder

And post the output? That will show whether there is anything actually being returned or not.
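(If the full bodies are too noisy, --dump headers is also an accepted value for the --dump flag and shows just the HTTP request and response headers.)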

Hi Nick - response here:

me@mycomputer ~ % rclone ls -vv --dump bodies DigitalOceanSpace:test_folder
2020/07/15 07:36:43 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rclone" "ls" "-vv" "--dump" "bodies" "DigitalOceanSpace:test_folder"]
2020/07/15 07:36:43 DEBUG : Using config file from "/Users/me/.config/rclone/rclone.conf"
2020/07/15 07:36:43 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2020/07/15 07:36:43 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/07/15 07:36:43 DEBUG : HTTP REQUEST (req 0xc0000ff300)
2020/07/15 07:36:43 DEBUG : GET /test_folder?delimiter=&max-keys=1000&prefix= HTTP/1.1
Host: [MY_SUBDOMAIN].sfo2.digitaloceanspaces.com
User-Agent: rclone/v1.52.2
Authorization: XXXX
X-Amz-Content-Sha256: [SECRET?]
X-Amz-Date: [SECRET?]
Accept-Encoding: gzip

2020/07/15 07:36:43 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/07/15 07:36:44 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/07/15 07:36:44 DEBUG : HTTP RESPONSE (req 0xc0000ff300)
2020/07/15 07:36:44 DEBUG : HTTP/1.1 200 OK
Content-Length: 148
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Date: Wed, 15 Jul 2020 14:36:44 GMT
Etag: "[SECRET?]"
Last-Modified: Wed, 15 Jul 2020 14:01:55 GMT
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
X-Amz-Request-Id: [SECRET?]

<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>sfo2</LocationConstraint></CreateBucketConfiguration>
2020/07/15 07:36:44 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/07/15 07:36:44 DEBUG : 6 go routines active

I'm not sure how to check which permissions my user has, to be honest. I'm just using the only user that I have in my DigitalOcean account.

Two more tidbits that might be useful:

  1. I can log in to my DigitalOcean account and see that the files are indeed being created in the Space
  2. When I run the exact same copy command again, it copies the same files as (new) each time (i.e. it doesn't see that they were just copied and are already there)

That is weird indeed! You sent a request asking to list the bucket, but you got back a reply to something else entirely.

I think the problem is your endpoint setting.

Use

endpoint = sfo2.digitaloceanspaces.com

That is what my config looks like.
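For context on why this matters (a rough sketch — the bucket name "mybucket" below is a stand-in): rclone uses path-style requests, putting the bucket name in the URL path after the endpoint host. If the bucket name is already baked into the endpoint host, Spaces reads the host's first label as the bucket, so the path gets interpreted as an object key instead:

# endpoint = sfo2.digitaloceanspaces.com (path style - what rclone expects):
GET https://sfo2.digitaloceanspaces.com/mybucket?delimiter=&max-keys=1000&prefix=

# endpoint = mybucket.sfo2.digitaloceanspaces.com (bucket already in the host):
GET https://mybucket.sfo2.digitaloceanspaces.com/test_folder?delimiter=&max-keys=1000&prefix=
# Here Spaces treats "test_folder" as an object key inside the real bucket
# "mybucket", which would explain the 200 response above returning an object
# body instead of a bucket listing.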

My new config file:

[DigitalOceanSpace]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = MY_ACCESS_KEY
secret_access_key = MY_SECRET_KEY
endpoint = sfo2.digitaloceanspaces.com
acl = public-read
region = sfo2

New response from the previous command:

me@mycomputer ~ % rclone ls -vv --dump bodies DigitalOceanSpace:test_folder
2020/07/15 08:03:10 DEBUG : rclone: Version "v1.52.2" starting with parameters ["rclone" "ls" "-vv" "--dump" "bodies" "DigitalOceanSpace:test_folder"]
2020/07/15 08:03:10 DEBUG : Using config file from "/Users/me/.config/rclone/rclone.conf"
2020/07/15 08:03:10 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2020/07/15 08:03:10 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/07/15 08:03:10 DEBUG : HTTP REQUEST (req 0xc000334e00)
2020/07/15 08:03:10 DEBUG : GET /test_folder?delimiter=&max-keys=1000&prefix= HTTP/1.1
Host: sfo2.digitaloceanspaces.com
User-Agent: rclone/v1.52.2
Authorization: XXXX
X-Amz-Content-Sha256: [SECRET?]
X-Amz-Date: [SECRET?]
Accept-Encoding: gzip

2020/07/15 08:03:10 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/07/15 08:03:11 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/07/15 08:03:11 DEBUG : HTTP RESPONSE (req 0xc000334e00)
2020/07/15 08:03:11 DEBUG : HTTP/1.1 404 Not Found
Content-Length: 217
Accept-Ranges: bytes
Content-Type: application/xml
Date: Wed, 15 Jul 2020 15:03:11 GMT
Strict-Transport-Security: max-age=15552000; includeSubDomains; preload
X-Amz-Request-Id: [SECRET?]

<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code><BucketName>test_folder</BucketName><RequestId>tx000000000000041420885-005f0f1aaf-95f8c6-sfo2a</RequestId><HostId>95f8c6-sfo2a-sfo</HostId></Error>
2020/07/15 08:03:11 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/07/15 08:03:11 DEBUG : 6 go routines active
2020/07/15 08:03:11 Failed to ls: directory not found

It looks like it can't find the bucket without my subdomain? And unless I'm mistaken, DigitalOcean requires me to create the Space under a subdomain (not just sfo2.digitaloceanspaces.com plus my access keys).

Interestingly, when I change my config back to the original and run the generic (no bucket specified)...

rclone ls -vv --dump bodies DigitalOceanSpace:

...it does dump all the contents in my DigitalOcean Space.

With this config just try rclone lsd DigitalOceanSpace: - that should show you what buckets you have.

If you look at AWS docs you'll see that you can't have an _ in a bucket name: https://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html#bucketnamingrules

So your bucket name may have been translated to something else?
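(For example — hypothetical names — my-bucket and mybucket123 are valid, while my_bucket and My_Bucket are not: S3-style bucket names are limited to lowercase letters, numbers, dots, and hyphens.)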


Amazing, thank you. I do have some folders with _ in the name, but not buckets. I'm actually just realizing that I have a single bucket, and that bucket's name is the subdomain that I chose in DigitalOcean. Sorry if this was obvious, but I think that was the issue.

For anyone with this problem in the future...

Correct config file (you may want to change acl to private):

[DigitalOceanSpace]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = MY_ACCESS_KEY
secret_access_key = MY_SECRET_KEY
endpoint = sfo2.digitaloceanspaces.com
acl = public-read
region = sfo2

Correct way to reference files in the single bucket in DigitalOcean:

rclone ls DigitalOceanSpace:[SUBDOMAIN] -vv

rclone copy /Users/me/Documents/Projects/MyProject DigitalOceanSpace:[SUBDOMAIN]/DestinationFolder -vv

etc.

Thanks again!

Ah that makes sense! I thought where you wrote [SUBDOMAIN] you just meant the bucket name.

Yes, what you wrote is correct - you need to put the bucket name in. If you want, you can use the alias backend to make a shorter rclone URL.
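For example, a minimal alias setup might look like this ("DOSpace" and "mybucket" are placeholders):

[DOSpace]
type = alias
remote = DigitalOceanSpace:mybucket

Then rclone ls DOSpace: lists the bucket without typing the bucket name every time.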


Got it, thanks!

Now on to layering in crypt 🙂 ...
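For anyone following the same path, a minimal crypt layer over this remote might look like the sketch below (the remote name, bucket, and folder are placeholders, and the obscured password values should be generated via rclone config rather than written by hand):

[DOSpaceCrypt]
type = crypt
remote = DigitalOceanSpace:mybucket/encrypted
password = OBSCURED_PASSWORD
password2 = OBSCURED_SALT

After that, rclone copy /some/local/folder DOSpaceCrypt: uploads the files encrypted into the bucket.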

