Looking for advice for remote - crypt - chunker

I'm interested in using rclone with the sync command (maybe mount later), with crypt and chunker layered on an S3-like remote.

What is the problem you are having with rclone?

I noticed that chunker is beta and norename is experimental.
My remote is S3-like and returns an error on copy operations (which matters here).
An rclone sync directly to the remote (so without crypt, without chunker) works fine with '--disable copy'.
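
For reference, the direct sync that works looks roughly like this (a minimal sketch; the bucket name and credentials are placeholders):

# Sketch of the direct-to-S3 sync, with no crypt/chunker layer.
# --disable copy stops rclone from attempting server-side copy on this provider.
rclone sync /local/path/ :s3:mybucket/test1/ \
    --s3-provider "Other" \
    --s3-access-key-id XXX --s3-secret-access-key XXX \
    --s3-endpoint "s3.eu-west-1.prod.koyeb.com" \
    --disable copy -vv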

First, I'd like a review of my settings; I may have misunderstood the documentation. Then I hope to find a solution to my issue (which is probably not a bug).

The provider doesn't support copy, so if transactions are set to rename, the sync just fails for every file (as expected).
So I tried 'norename', but '--disable copy' and 'norename' conflict.
That combination produces something like 'Failed to create file system for ":chunker:...": can't use chunker on a backend which doesn't support server-side move or copy'
Is this combination still impossible?
I believed/hoped that norename mode would lift that restriction (but I may have misunderstood that feature).

From the crypt docs:

Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.

I understand that the crypt backend provides no hashes, only a command to check integrity.
Having a command is nice, but then chunker will have no hashes to work with. Am I wrong?
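
As far as I know, the command the crypt docs refer to is rclone cryptcheck. A sketch of how I could use it ('mycrypt:' is a hypothetical configured remote equivalent to the :crypt: layer in my command below):

# Sketch: cryptcheck reads the nonce from each encrypted file, re-encrypts the
# local data with it, and compares checksums against what the underlying remote
# reports, so integrity is verified without crypt storing any hash itself.
# Note: this checks the crypt layer only; it does not give chunker a hash to
# work with, which is my concern.
rclone cryptcheck /local/path/20210427_testcrychu/ mycrypt:chunk26_sm21_test1/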

From the chunker docs:

If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all or sha1all.

I believe hashing is a valuable feature, and crypt does not provide any (at least as I understand it). So I selected the suggested md5all, in order to have a 'virtual' hash.
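
My idea for verifying the 'virtual' hash, as a sketch ('mychunker:' is a hypothetical configured remote equivalent to the :chunker: stack in the command below):

# Sketch: with --chunker-hash-type md5all the chunker remote should report an
# MD5 for every composite file, so these can compare it against the local tree.
rclone md5sum mychunker:share_test1/
rclone check /local/path/20210427_testcrychu/ mychunker:share_test1/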

Do you have any advice or settings to suggest?

Separately, I'm facing an error because rclone tried to do a copy operation, which is forbidden by the provider.
I was hoping to avoid copy operations by using --chunker-transactions norename. I may be wrong.
Maybe someone can help resolve it?
2021/04/29 19:43:25 ERROR : ..... : Failed to copy: Put "https...
In my tests the error happens on various chunks (not always the same ones), but sooner or later it fails; it seems somewhat random.
Are there any additional flags that may help you determine the cause?
As the log below shows, another file was transferred in 3 chunks without any error.
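
If it helps, I can re-run with extra debugging flags, something like the following (a guess on my side; the backend flags are omitted here for brevity, they are the same as in the full command below):

# Possible re-run with more diagnostics (flag values are guesses):
#   --dump headers          log HTTP request/response headers
#   --low-level-retries 20  retry each failing HTTP operation more times (default 10)
#   --log-file rclone.log   keep the complete debug output for sharing
rclone sync /local/path/20210427_testcrychu/ :chunker:/share_test1/ \
    -vv --dump headers --low-level-retries 20 --log-file rclone.log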

I also have a documentation enhancement to suggest (it wasn't obvious to me how to find the --disable syntax).
Website: Global Flags
Content: --disable string Disable a comma separated list of features. Use help to see a list.
OK, I'll run 'rclone help'.
That's not what I want, but at the end there is 'Use "rclone help flags" for to see the global flags.'
OK, let's try 'rclone help flags'... no, that's just the website content again.
How can I get that syntax? Please, Google, help me... the Documentation page says:
Content: To see a list of which features can be disabled use: --disable help
So OK, let's try 'rclone --disable help'.
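
To sum up the discovery path:

rclone help            # general help; the footer points to "rclone help flags"
rclone help flags      # shows the same one-line description as the website
rclone --disable help  # finally lists the features that can be disabled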

  Is it acceptable to replace: 'Disable a comma separated list of features.  Use help to see a list.'
                           by: 'Disable a comma separated list of features.  Use --disable help to see a list.'

I think it may help someone else...

What is your rclone version (output from rclone version)

rclone v1.55.1

  • os/type: linux
  • os/arch: amd64
  • go/version: go1.16.3
  • go/linking: static
  • go/tags: none

Which OS you are using and how many bits (eg Windows 7, 64 bit)

linux debian stable amd64

Which cloud storage system are you using? (eg Google Drive)

s3 provider which doesn't support copy

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone \
    --config /notfound \
    --s3-provider "Other" --s3-access-key-id hidden --s3-secret-access-key hidden --s3-endpoint "s3.eu-west-1.prod.koyeb.com" --s3-server-side-encryption "" --s3-force-path-style=false  \
    --crypt-remote :s3:hidden/test1  --crypt-filename-encryption standard --crypt-directory-name-encryption=true --crypt-password "hiddden" --crypt-password2 "hidden" \
    --chunker-remote :crypt:/chunk26_sm21_test1/ --chunker-chunk-size 26M  --chunker-hash-type md5all  --chunker-transactions norename \
    sync /local/path/20210427_testcrychu/ :chunker:/share_test1/ \
    -vv --bwlimit 90k --max-delete 2 --transfers 1 --checkers 3 --stats 2m \
    --size-only --fast-list --retries 1 --max-transfer 600M --cutoff-mode=soft \
    --delete-after

The rclone config contents with secrets removed.

The settings are all on the command line (no config file is used).

A log from the command with the -vv flag

2021/04/29 19:39:04 NOTICE: Config file "/notfound" not found - using defaults
2021/04/29 19:39:04 INFO  : Starting bandwidth limiter at 90kBytes/s
2021/04/29 19:39:04 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "--config" "/notfound" "--s3-provider" "Other" "--s3-access-key-id" "hidden" "--s3-secret-access-key" "hidden" "--s3-endpoint" "s3.eu-west-1.prod.koyeb.com" "--s3-server-side-encryption" "" "--s3-force-path-style=false" "--crypt-remote" ":s3:hidden/test1" "--crypt-filename-encryption" "standard" "--crypt-directory-name-encryption=true" "--crypt-password" "hidden" "--crypt-password2" "hidden" "--chunker-remote" ":crypt:/chunk26_sm21_test1/" "--chunker-chunk-size" "26M" "--chunker-hash-type" "md5all" "--chunker-transactions" "norename" "sync" "/local/path/20210427_testcrychu/" ":chunker:/share_test1/" "-vv" "--bwlimit" "90k" "--max-delete" "2" "--transfers" "1" "--checkers" "3" "--stats" "2m" "--size-only" "--fast-list" "--retries" "1" "--max-transfer" "600M" "--cutoff-mode=soft" "--delete-after"]
2021/04/29 19:39:04 DEBUG : Creating backend with remote "/local/path/20210427_testcrychu/"
2021/04/29 19:39:04 DEBUG : Creating backend with remote ":chunker:/share_test1/"
2021/04/29 19:39:04 DEBUG : :chunker: detected overridden config - adding "{9tjr+}" suffix to name
2021/04/29 19:39:04 DEBUG : Creating backend with remote ":crypt:/chunk26_sm21_test1/share_test1"
2021/04/29 19:39:04 DEBUG : :crypt: detected overridden config - adding "{zQg4m}" suffix to name
2021/04/29 19:39:04 DEBUG : Creating backend with remote ":s3:hidden/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg"
2021/04/29 19:39:04 DEBUG : :s3: detected overridden config - adding "{XRUBU}" suffix to name
2021/04/29 19:39:05 DEBUG : fs cache: renaming cache item ":s3:hidden/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg" to be canonical ":s3{XRUBU}:hidden/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg"
2021/04/29 19:39:05 DEBUG : fs cache: switching user supplied name ":s3:hidden/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg" for canonical name ":s3{XRUBU}:hidden/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg"
2021/04/29 19:39:05 DEBUG : fs cache: renaming cache item ":crypt:/chunk26_sm21_test1/share_test1" to be canonical ":crypt{zQg4m}:/chunk26_sm21_test1/share_test1"
2021/04/29 19:39:05 DEBUG : Reset feature "ListR"
2021/04/29 19:39:05 DEBUG : fs cache: renaming cache item ":chunker:/share_test1/" to be canonical ":chunker{9tjr+}:/share_test1/"
2021/04/29 19:39:06 DEBUG : l1/01_-_Jean-Michel_Jarre_-_Oxygene_I_(Part_1).flac: skip slow MD5 on source file, hashing in-transit
2021/04/29 19:39:11 DEBUG : l1/L2/03_-_Entends-tu_le_vent_fou__.flac: Sizes identical
2021/04/29 19:39:11 DEBUG : Chunked ':chunker{9tjr+}:/share_muzik_test1/': Waiting for checks to finish
2021/04/29 19:39:11 DEBUG : l1/L2/03_-_Entends-tu_le_vent_fou__.flac: Unchanged skipping
2021/04/29 19:39:11 DEBUG : l1/L2/20_-_Flocon_papillon.flac: Sizes identical
2021/04/29 19:39:11 DEBUG : l1/L2/Chapi_Chapo_et_les_jouets_electroniques_-_Ar_miziou_du_(2007)_-_09_Pendant_ce_temps-la_(feat._Gabin_n_Meven).flac: Sizes identical
2021/04/29 19:39:11 DEBUG : l1/L2/20_-_Flocon_papillon.flac: Unchanged skipping
2021/04/29 19:39:11 DEBUG : l1/L2/Chapi_Chapo_et_les_jouets_electroniques_-_Ar_miziou_du_(2007)_-_09_Pendant_ce_temps-la_(feat._Gabin_n_Meven).flac: Unchanged skipping
2021/04/29 19:39:11 DEBUG : l1/L2/Chapi_Chapo_et_les_jouets_electroniques_-_Ar_miziou_du_(2007)_-_02_Child_of_love_(feat._Tired).flac: Sizes identical
2021/04/29 19:39:11 DEBUG : l1/L2/Chapi_Chapo_et_les_jouets_electroniques_-_Ar_miziou_du_(2007)_-_02_Child_of_love_(feat._Tired).flac: Unchanged skipping
2021/04/29 19:39:11 DEBUG : Chunked ':chunker{9tjr+}:/share_test1/': Waiting for transfers to finish
2021/04/29 19:41:05 INFO  : 
Transferred:   	   10.628M / 341.609 MBytes, 3%, 91.285 kBytes/s, ETA 1h1m52s
Checks:                 4 / 4, 100%
Transferred:            0 / 2, 0%
Elapsed time:       2m0.7s
Transferring:
 * l1/01_-_Jean-Michel_Ja…xygene_I_(Part_1).flac:  3% /279.137M, 88.012k/s, 52m4s

2021/04/29 19:43:05 INFO  : 
Transferred:   	   21.130M / 341.609 MBytes, 6%, 90.450 kBytes/s, ETA 1h28s
Checks:                 4 / 4, 100%
Transferred:            0 / 2, 0%
Elapsed time:       4m0.7s
Transferring:
 * l1/01_-_Jean-Michel_Ja…xygene_I_(Part_1).flac:  7% /279.137M, 90.490k/s, 48m39s

2021/04/29 19:43:25 ERROR : l1/01_-_Jean-Michel_Jarre_-_Oxygene_I_(Part_1).flac: Failed to copy: Put "https://hidden.s3.eu-west-1.prod.koyeb.com/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg/68b3d7iop82ekq68gcomrau304/66v0bho879ngoukmhbue1cabsflum90cjo007n4masaipqhm5mqfgqml9hjniqae0am3nrhj4it6lb1mof3nkbg3mj8n9ndbv8qla00kk9itb8etfrvd0tvk9po8lj5l?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=hiddenus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210429T173906Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=hidden": http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
2021/04/29 19:43:26 DEBUG : l1/L2/02._Echotides_No._2.flac: skip slow MD5 on source file, hashing in-transit
2021/04/29 19:45:05 INFO  : 
Transferred:   	   31.695M / 85.415 MBytes, 37%, 90.352 kBytes/s, ETA 10m8s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:       6m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 14% /62.472M, 88.107k/s, 10m24s

2021/04/29 19:47:05 INFO  : 
Transferred:   	   42.198M / 85.415 MBytes, 49%, 90.169 kBytes/s, ETA 8m10s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:       8m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 30% /62.472M, 90.591k/s, 8m8s

2021/04/29 19:48:23 DEBUG : l1/L2/02._Echotides_No._2.flac: MD5 = e3a48a0605f450e90675076e9627cb3b OK
2021/04/29 19:49:05 INFO  : 
Transferred:   	   52.763M / 85.415 MBytes, 62%, 90.166 kBytes/s, ETA 6m10s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:      10m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 47% /62.472M, 89.108k/s, 6m15s

2021/04/29 19:51:05 INFO  : 
Transferred:   	   63.266M / 85.415 MBytes, 74%, 90.075 kBytes/s, ETA 4m11s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:      12m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 64% /62.472M, 91.471k/s, 4m7s

2021/04/29 19:53:05 INFO  : 
Transferred:   	   73.831M / 85.415 MBytes, 86%, 90.087 kBytes/s, ETA 2m11s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:      14m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 81% /62.472M, 89.751k/s, 2m12s

2021/04/29 19:53:20 DEBUG : l1/L2/02._Echotides_No._2.flac: MD5 = 4217faea92f065852cbb85a676343a21 OK
2021/04/29 19:55:05 INFO  : 
Transferred:   	   84.333M / 85.415 MBytes, 99%, 90.028 kBytes/s, ETA 12s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            0 / 1, 0%
Elapsed time:      16m0.7s
Transferring:
 *                l1/L2/02._Echotides_No._2.flac: 98% /62.472M, 88.213k/s, 12s

2021/04/29 19:55:19 DEBUG : l1/L2/02._Echotides_No._2.flac: MD5 = b86498d6c83bf5a14cd9f7c812a94968 OK
2021/04/29 19:55:20 DEBUG : l1/L2/02._Echotides_No._2.flac: MD5 = d08109767c1445ef16ec85d56dcd1dda OK
2021/04/29 19:55:21 DEBUG : l1/L2/02._Echotides_No._2.flac: MD5 = 5e11825c604be497e774ca0f6381b338 OK
2021/04/29 19:55:21 INFO  : l1/L2/02._Echotides_No._2.flac: Copied (new)
2021/04/29 19:55:21 ERROR : Chunked ':chunker{9tjr+}:/share_test1/': not deleting files as there were IO errors
2021/04/29 19:55:21 ERROR : Chunked ':chunker{9tjr+}:/share_test1/': not deleting directories as there were IO errors
2021/04/29 19:55:21 ERROR : Attempt 1/1 failed with 1 errors and: Put "https://hidden.s3.eu-west-1.prod.koyeb.com/test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg/68b3d7iop82ekq68gcomrau304/66v0bho879ngoukmhbue1cabsflum90cjo007n4masaipqhm5mqfgqml9hjniqae0am3nrhj4it6lb1mof3nkbg3mj8n9ndbv8qla00kk9itb8etfrvd0tvk9po8lj5l?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=hiddenus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210429T173906Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=hidden": http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
2021/04/29 19:55:21 INFO  : 
Transferred:   	   85.431M / 85.431 MBytes, 100%, 89.725 kBytes/s, ETA 0s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            1 / 1, 100%
Elapsed time:     16m16.4s

2021/04/29 19:55:21 DEBUG : 5 go routines active
2021/04/29 19:55:21 Failed to sync: Put "https://hidden.s3.eu-west-1.prod.koyeb.com/crychu2D23_sm21_test1/4i6qm22q206iklfdpv331va5upc0r2ofi6pbp9i7iqolb57jdthg/u8asks889ebmbdee5hgqp3jjdsi4q6858h95r8s7li4cqam2j6dg/68b3d7iop82ekq68gcomrau304/66v0bho879ngoukmhbue1cabsflum90cjo007n4masaipqhm5mqfgqml9hjniqae0am3nrhj4it6lb1mof3nkbg3mj8n9ndbv8qla00kk9itb8etfrvd0tvk9po8lj5l?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=hiddenus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210429T173906Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=hidden": http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""

hi,

  • s3 supports checksums
  • crypt supports checksums
  • chunker supports checksums
  • what is the name of the s3 provider?
  • is there a specific reason for using the beta chunker, as the largest file in most s3 providers is 5TB?

Hi,

crypt supports checksums

I missed that point.
The rclone website indicates:
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Chunker supports hashsums only when a compatible metadata is present.

I inferred that crypt uses the hash of the underlying backend but does not provide a hash to the layer above it (at least not one compatible with chunker).
I will test.
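
My test will probably be something like this ('mycrypt:' is a hypothetical configured remote equivalent to the :crypt: layer in my command):

# Sketch: if crypt exposed hashes, this would print one MD5 per file;
# I expect empty hashes (or an unsupported-hash error) instead.
rclone md5sum mycrypt:chunk26_sm21_test1/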

what is the name of the s3 provider?

koyeb

is there a specific reason for using the beta chunker, as the largest file in most s3 providers is 5TB?

The largest file the provider accepts is probably 30MB.
Some bigger files go through, following a pattern that isn't explained.
I upload tbz archives which are much bigger than that.
And, as explained above, the provider also doesn't support copy.

Regards,

koyeb does not offer s3 storage?
so which s3 backend are you using, aws, scaleway, wasabi or what?

Hi
It looks like they removed S3 storage from their offering.
I created an account last December.
I set up S3 storage at that time.
The S3 storage is still functional (with a 30MB file size limit and no copy feature).
Regards,

i tried to join koyeb but they are not accepting new users.
i joined their slack chat, but got little to no help.
i was not impressed and would not trust them.

they seem to offer a front-end to other s3 providers
Koyeb - Use Rclone with the Koyeb Serverless Platform to Manage Data Across Cloud Storage Providers

if you are looking for a good s3 provider:
i use the combination of wasabi for hot storage and aws deep glacier for cold storage.
