Rclone backend restore doesn't work with Crypt storage

What is the problem you are having with rclone?

I can't restore a GLACIER object to the STANDARD storage class on a Crypt remote

What is your rclone version (output from rclone version)

rclone v1.53.3-DEV

Which OS you are using and how many bits (eg Windows 7, 64 bit)

linux/amd64 (Debian)

Which cloud storage system are you using? (eg Google Drive)

Scaleway

The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone backend restore my_crypt_storage:my_folder/ -o priority=Standard

The rclone config contents with secrets removed.

    [scaleway_storage]
    env_auth = false
    endpoint = https://s3.fr-par.scw.cloud
    location_constraint =
    region = fr-par
    access_key_id = XXXXX
    secret_access_key = XXXXX
    acl = private
    provider = Scaleway
    type = s3
    server_side_encryption =

    [my_crypt_storage]
    password2 = XXXXX
    remote = scaleway_storage:my_bucket
    filename_encryption = standard
    directory_name_encryption = true
    password = XXXXX
    type = crypt

A log from the command with the -vv flag

    2021/01/11 11:24:42 DEBUG : rclone: Version "v1.53.3-DEV" starting with parameters ["rclone" "backend" "restore" "my_crypt_storage:my_folder" "-o" "priority=Standard" "-vv"]
    2021/01/11 11:24:42 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
    2021/01/11 11:24:42 DEBUG : Creating backend with remote "scaleway_storage:my_bucket/pt8t7ihdmid0dchbkg1v29n7fc"
    2021/01/11 11:24:43 DEBUG : 4 go routines active
    2021/01/11 11:24:43 Failed to backend: command "restore" failed: command not found

Can you share the output of:

    rclone backend features scaleway_storage:

Yes, sure:

    {
        "Name": "scaleway_storage",
        "Root": "",
        "String": "S3 root",
        "Precision": 1,
        "Hashes": [
            "MD5"
        ],
        "Features": {
            "About": false,
            "BucketBased": true,
            "BucketBasedRootOK": true,
            "CanHaveEmptyDirectories": false,
            "CaseInsensitive": false,
            "ChangeNotify": false,
            "CleanUp": true,
            "Command": true,
            "Copy": true,
            "DirCacheFlush": false,
            "DirMove": false,
            "Disconnect": false,
            "DuplicateFiles": false,
            "GetTier": true,
            "IsLocal": false,
            "ListR": true,
            "MergeDirs": false,
            "Move": false,
            "OpenWriterAt": false,
            "PublicLink": true,
            "Purge": false,
            "PutStream": true,
            "PutUnchecked": false,
            "ReadMimeType": true,
            "ServerSideAcrossConfigs": false,
            "SetTier": true,
            "SetWrapper": false,
            "SlowHash": false,
            "SlowModTime": true,
            "UnWrap": false,
            "UserInfo": false,
            "WrapFs": false,
            "WriteMimeType": true
        }
    }

Where did you see that the restore command exists?

I saw this command in this forum thread: https://forum.rclone.org/t/rclone-settier-fails-with-scaleway-entitytoolarge/17384

And when I run the command rclone backend help scaleway_storage: there is help for the restore command:

### Backend commands

Here are the commands specific to the s3 backend.

Run them with

    rclone backend COMMAND remote:

The help below will explain what arguments each command takes.

See [the "rclone backend" command](/commands/rclone_backend/) for more
info on how to pass options and arguments.

These can be run on a running backend using the rc command
[backend/command](/rc/#backend/command).

#### restore

Restore objects from GLACIER to normal storage

    rclone backend restore remote: [options] [<arguments>+]

This command can be used to restore one or more objects from GLACIER
to normal storage.

Usage Examples:

    rclone backend restore s3:bucket/path/to/object [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket/path/to/directory [-o priority=PRIORITY] [-o lifetime=DAYS]
    rclone backend restore s3:bucket [-o priority=PRIORITY] [-o lifetime=DAYS]

This flag also obeys the filters. Test first with -i/--interactive or --dry-run flags

    rclone -i backend restore --include "*.txt" s3:bucket/path -o priority=Standard

All the objects shown will be marked for restore, then

    rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.

    [
        {
            "Status": "OK",
            "Path": "test.txt"
        },
        {
            "Status": "OK",
            "Path": "test/file4.txt"
        }
    ]



Options:

- "description": The optional description for the job.
- "lifetime": Lifetime of the active copy in days
- "priority": Priority of restore: Standard|Expedited|Bulk

#### list-multipart-uploads

List the unfinished multipart uploads

    rclone backend list-multipart-uploads remote: [options] [<arguments>+]

This command lists the unfinished multipart uploads in JSON format.

    rclone backend list-multipart-uploads s3:bucket/path/to/object

It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.

You can call it with no bucket, in which case it lists all buckets, with
a bucket, or with a bucket and path.

    {
      "rclone": [
        {
          "Initiated": "2020-06-26T14:20:36Z",
          "Initiator": {
            "DisplayName": "XXX",
            "ID": "arn:aws:iam::XXX:user/XXX"
          },
          "Key": "KEY",
          "Owner": {
            "DisplayName": null,
            "ID": "XXX"
          },
          "StorageClass": "STANDARD",
          "UploadId": "XXX"
        }
      ],
      "rclone-1000files": [],
      "rclone-dst": []
    }



#### cleanup

Remove unfinished multipart uploads.

    rclone backend cleanup remote: [options] [<arguments>+]

This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.

Note that you can use -i/--dry-run with this command to see what it
would do.

    rclone backend cleanup s3:bucket/path/to/object
    rclone backend cleanup -o max-age=7w s3:bucket/path/to/object

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.


Options:

- "max-age": Max age of upload to delete

Ah, there we go, it's specific to the S3 remote.

You'd have to restore against that rather than the crypt remote.
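
From your debug log, my_folder maps to pt8t7ihdmid0dchbkg1v29n7fc on the underlying remote, so something along these lines should kick off the restore directly on the S3 side (untested on my end, using the bucket name from your config):

    rclone backend restore scaleway_storage:my_bucket/pt8t7ihdmid0dchbkg1v29n7fc -o priority=Standard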

OK, but that's not practical: all my folder and file names are encrypted if I go through the Scaleway remote instead of the Crypt remote.

Unless I'm mistaken (it's happened before and will again), @ncw would have to add that feature into the crypt remote.

You can always open a feature request on GitHub and ask for it.

There is a workaround in this post:
Can files deleted from the Finder in macOS be recovered in any way? - #7 by AngusMacgyver

Thx @Animosity022, I will open a feature request about that.

Thx @asdffdsa for the workaround. I can get the encoded filenames and try a backend restore on the S3 remote.
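
If I understand the workaround right, rclone cryptdecode with the --reverse flag should print the encrypted name for a given path on the crypt remote, which I can then feed to the restore on the S3 side. A sketch of what I plan to try (ENCRYPTED_NAME is a placeholder for whatever the first command prints):

    rclone cryptdecode --reverse my_crypt_storage: my_folder
    rclone backend restore scaleway_storage:my_bucket/ENCRYPTED_NAME -o priority=Standard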

Strangely, it won't let me restore a single object, even though the docs say it's possible:

    rclone backend -i restore scaleway_storage:mybucket/pt8t7ihdnod0dfgbkg1v45n7fc/rsw23bacm95t5d4mortdur4rfc -o priority=Standard

Result:

    2021/01/11 15:09:38 Failed to backend: is a file not a directory

:neutral_face:

never used the command and not sure how to interpret the docs.

not sure what an object is, and how it is different from a directory or a file.

i would try --include "filename.txt"
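
maybe something like this, reusing the names from your command. just a guess on my part, i have not tested it:

    rclone backend restore --include "rsw23bacm95t5d4mortdur4rfc" scaleway_storage:mybucket/pt8t7ihdnod0dfgbkg1v45n7fc -o priority=Standard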

Well done @asdffdsa :wink:

Strange way to do it, but it works well :man_shrugging:

thanks, glad it is working...
