Sync does not delete old files

If the permissions are wrong, how can it upload files then? The permissions are set to "private", but that's how it should be (I think), and I entered the necessary authentication info in the rclone config.

All your commands show nothing, while I can see in my browser that the files are there. That seems to point to permissions, as you say. Let me check them and get back to you. What I can't figure out, though, is how there can be write permissions without read permissions.

BC

I seem to have set the correct permissions:

BC

I have a suspicion that either rclone or Linode, or both, do something different compared to other S3 storage services. The fact that Nick's commands produce no output points in that direction.
I could do some scripting magic where, instead of syncing, I move, delete and replace files. The question is whether deleting with rclone works fine and does not suffer from the same problems as reading files.
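That workaround could be sketched roughly like this (a sketch only: the remote and paths are placeholders, and it assumes rclone delete can list the bucket, which may well hit the same permission problem as sync):

```shell
# Sketch of the copy-then-prune idea instead of sync.
# "linode_remote:bucket" and the paths are hypothetical placeholders.
rclone copy /local/backups/ linode_remote:bucket/backups/    # upload new and changed files
rclone delete --min-age 30d linode_remote:bucket/backups/    # prune objects older than 30 days
```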

If a developer needs to test with a Linode bucket: I can create one for you so you can test.

Cheers,

BC

This post indicates that rclone works ok with linode: https://www.linode.com/community/questions/20477/linode-object-storage-bucket-and-rclone

Which makes me think - can you remove (or comment out with #) the region and location_constraint lines in your config?

Also, does the endpoint look like eu-central-1.linodeobjects.com? It should not have a bucket name in front of it.
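For reference, a minimal config along those lines might look like this (the remote name and keys are placeholders, and the provider/ACL values are assumptions based on what you described earlier in the thread):

```ini
[linode]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = eu-central-1.linodeobjects.com
acl = private
# region =
# location_constraint =
```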

If the above doesn't work, a test bucket would be great! You can private message me the details.

Hi, I know this link; it helped me decide to use rclone with Linode S3.
Your suggestions result in errors, I'm afraid. I will troubleshoot further, but this is what I have now:

2021/01/14 21:19:16 ERROR : : error reading destination directory: AccessDenied:
        status code: 403, request id: tx000000000000035f075f3-006000a743-7c940a-default, host id:
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting files as there were IO errors
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting directories as there were IO errors
2021/01/14 21:19:16 INFO  : There was nothing to transfer
2021/01/14 21:19:16 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied:
        status code: 403, request id: tx000000000000035f075f3-006000a743-7c940a-default, host id:
2021/01/14 21:19:16 ERROR : : error reading destination directory: AccessDenied:
        status code: 403, request id: tx00000000000000d21c194-006000a744-1066d89-default, host id:
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting files as there were IO errors
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting directories as there were IO errors
2021/01/14 21:19:16 INFO  : There was nothing to transfer
2021/01/14 21:19:16 ERROR : Attempt 2/3 failed with 1 errors and: AccessDenied:
        status code: 403, request id: tx00000000000000d21c194-006000a744-1066d89-default, host id:
2021/01/14 21:19:16 ERROR : : error reading destination directory: AccessDenied:
        status code: 403, request id: tx0000000000000043e1726-006000a744-11b5f92-default, host id:
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting files as there were IO errors
2021/01/14 21:19:16 ERROR : S3 bucket backup path pve: not deleting directories as there were IO errors
2021/01/14 21:19:16 INFO  : There was nothing to transfer
2021/01/14 21:19:16 ERROR : Attempt 3/3 failed with 1 errors and: AccessDenied:
        status code: 403, request id: tx0000000000000043e1726-006000a744-11b5f92-default, host id:
2021/01/14 21:19:16 INFO  :
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.6s

2021/01/14 21:19:16 Failed to sync: AccessDenied:
        status code: 403, request id: tx0000000000000043e1726-006000a744-11b5f92-default, host id:

I added the bucket again and now it is running. A run takes 3.5 hours, so I will report back by then.

Edit: I reduced the data set, so I should know something quite soon.

BC

OK, the test is finished: exactly the same behaviour, old files are not deleted.
Please advise on how to continue,

cheers,

BC

Nick,
what is interesting is that in the post you referred to: https://www.linode.com/community/questions/20477/linode-object-storage-bucket-and-rclone#answer-75193
the person replying states that he is using provider AWS, while I had configured S3 with provider "Other".
I wonder if the problem lies there. I will experiment with provider AWS and see if I can get things moving.

BC

OK, I changed these parameters:

provider = AWS
endpoint = eu-central-1.linodeobjects.com

My rclone command has changed to:

rclone sync --log-file=/tmp/backup_linodes3.log --log-level=DEBUG --delete-after /datapool1/dump/ linode_dekringwinkel:dekringwinkel/

The lsf commands and the sync deletion work perfectly now.

What does not work yet is subdirectories; I will troubleshoot that now.
EDIT: subdirs are fine now as well, I can simply add them to the target definition.
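For anyone landing on this thread later, the working setup can be sketched as follows (the keys are placeholders and the ACL line is an assumption; the key change is provider = AWS combined with the Linode endpoint):

```ini
[linode_dekringwinkel]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
endpoint = eu-central-1.linodeobjects.com
acl = private
```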

Cheers,

BC

Great - well done!

I wonder whether we should make a Linode provider?

I will investigate further...

Have you any pointers to docs?

Hey Nick,

You definitely could do that. They seem to follow the AWS concept, but it might help Linode customers get a working solution fast. I'm not sure about their size and popularity, but I think they are no longer a small player.
Docs: you mean Linode documentation? I believe they have some, but I think you need to be a paying customer to get access to them. I could always look for them and send you an exported PDF.

Cheers,

BC


I emailed Linode requesting to become a partner, in the hope of adding rclone to the supported providers and running integration tests against Linode.

This doc looks pretty good: https://www.linode.com/docs/guides/how-to-use-object-storage/

I could do with a list of possible endpoints.

Sounds awesome.
Will you be updating in this thread?

Cheers,

BC

Yes, I will do that 🙂

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.