Chunker is NOT transferring chunks in parallel?

What is the problem you are having with rclone?

I had been hoping to use chunker to speed up transfers of large files by uploading chunks in parallel, but in my test chunker does split the files into chunks, yet it uploads only one chunk at a time (expected: 4 parallel transfers with --transfers 4).

Is there a trick or parameter I'm missing, or is chunker unable to transfer several chunks in parallel?

Run the command 'rclone version' and share the full output of the command.

rclone v1.67.0

  • os/version: Microsoft Windows 10 Pro 22H2 (64 bit)
  • os/kernel: 10.0.19045.3031 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.22.4
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

  • pCloud
  • Koofr
  • OneDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync S:\tst pcef-chunker:/tst --transfers 4 -vv

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[pcloud]
type = pcloud
hostname = eapi.pcloud.com
token = XXX

[pcloud-enc-full]
type = crypt
remote = pcloud:/CT
password = XXX
password2 = XXX

[pcef-chunker]
type = chunker
remote = pcloud-enc-full:/
hash_type = sha1all

A log from the command that you were trying to run with the -vv flag

C:\>rclone sync S:\tst pcef-chunker:/tst --transfers 4 -vv
2024/07/06 10:09:27 DEBUG : Setting --transfers "8" from environment variable RCLONE_TRANSFERS="8"
2024/07/06 10:09:27 DEBUG : Setting --retries "3" from environment variable RCLONE_RETRIES="3"
2024/07/06 10:09:27 DEBUG : Setting --order-by "size,mixed,50" from environment variable RCLONE_ORDER_BY="size,mixed,50"
2024/07/06 10:09:27 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "sync" "S:\\tst" "pcef-chunker:/tst" "--transfers" "4" "-vv"]
2024/07/06 10:09:27 DEBUG : Creating backend with remote "S:\\tst"
2024/07/06 10:09:27 DEBUG : Using config file from "C:\\[user]s\\[user]\\AppData\\Roaming\\rclone\\rclone.conf"
2024/07/06 10:09:27 DEBUG : fs cache: renaming cache item "S:\\tst" to be canonical "//?/S:/tst"
2024/07/06 10:09:27 DEBUG : Creating backend with remote "pcef-chunker:/tst"
2024/07/06 10:09:27 DEBUG : Creating backend with remote "pcloud-enc-full:/tst"
2024/07/06 10:09:27 DEBUG : Creating backend with remote "pcloud:/CT/f3bjm4jslgkj2ppibmem7o3onc"
2024/07/06 10:09:27 DEBUG : fs cache: renaming cache item "pcloud:/CT/f3bjm4jslgkj2ppibmem7o3onc" to be canonical "pcloud:CT/f3bjm4jslgkj2ppibmem7o3onc"
2024/07/06 10:09:27 DEBUG : fs cache: switching user supplied name "pcloud:/CT/f3bjm4jslgkj2ppibmem7o3onc" for canonical name "pcloud:CT/f3bjm4jslgkj2ppibmem7o3onc"
2024/07/06 10:09:27 DEBUG : Reset feature "ListR"
2024/07/06 10:09:27 DEBUG : hugefile.log: Need to transfer - File not found at Destination
2024/07/06 10:09:27 DEBUG : hugefile.notes: Need to transfer - File not found at Destination
2024/07/06 10:09:27 DEBUG : hugefile.zst: Need to transfer - File not found at Destination
2024/07/06 10:09:27 DEBUG : Chunked 'pcef-chunker:/tst': Waiting for checks to finish
2024/07/06 10:09:27 DEBUG : Chunked 'pcef-chunker:/tst': Waiting for transfers to finish
2024/07/06 10:09:27 DEBUG : hugefile.log: skip slow SHA1 on source file, hashing in-transit
2024/07/06 10:09:27 DEBUG : hugefile.zst: skip slow SHA1 on source file, hashing in-transit
2024/07/06 10:09:27 DEBUG : hugefile.notes: skip slow SHA1 on source file, hashing in-transit
2024/07/06 10:09:27 DEBUG : hugefile.log: sha1 = 4a975da37f158eb8dfd6f3a11b3ef6df8fd9c229 OK
2024/07/06 10:09:28 INFO  : hugefile.log.rclone_chunk.001_75ivty: Moved (server-side) to: hugefile.log.rclone_chunk.001
2024/07/06 10:09:28 DEBUG : hugefile.notes: sha1 = 9729e418d8a12d5cf942d1217fbf076993e42413 OK
2024/07/06 10:09:28 DEBUG : hugefile.log: sha1 = 98580bd2a5c44c032d8d26374d4b2f2013608f3d OK
2024/07/06 10:09:28 INFO  : hugefile.notes.rclone_chunk.001_75iv4u: Moved (server-side) to: hugefile.notes.rclone_chunk.001
2024/07/06 10:09:29 DEBUG : hugefile.notes: sha1 = 0ed8a5d526cc3017e70e0224046265e87bb69ce7 OK
2024/07/06 10:09:29 DEBUG : Couldn't parse Date: from server edef2.pcloud.com: "Sat, 06 Jul 2024 08:09:29 +0000": parsing time "Sat, 06 Jul 2024 08:09:29 +0000" as "Mon Jan _2 15:04:05 2006": cannot parse ", 06 Jul 2024 08:09:29 +0000" as " "
2024/07/06 10:09:29 DEBUG : hugefile.log: sha1 = 3cbddb46ae178546c00aaff7ea9206bf582bd205 OK
2024/07/06 10:09:29 INFO  : hugefile.log: Copied (new)
2024/07/06 10:09:29 DEBUG : Couldn't parse Date: from server edef8.pcloud.com: "Sat, 06 Jul 2024 08:09:29 +0000": parsing time "Sat, 06 Jul 2024 08:09:29 +0000" as "Mon Jan _2 15:04:05 2006": cannot parse ", 06 Jul 2024 08:09:29 +0000" as " "
2024/07/06 10:09:29 DEBUG : hugefile.notes: sha1 = 0af3fa8cc2e3497461360b002549f2da15595baa OK
2024/07/06 10:09:29 INFO  : hugefile.notes: Copied (new)
2024/07/06 10:10:27 INFO  :
Transferred:      114.815 MiB / 29.738 GiB, 0%, 1.946 MiB/s, ETA 4h19m52s
Checks:                 2 / 2, 100%
Renamed:                2
Transferred:            2 / 3, 67%
Server Side Moves:      2 @ 6.107 KiB
Elapsed time:       1m0.3s
Transferring:
 *                                  hugefile.zst:  0% /29.738Gi, 1.946Mi/s, 4h19m50s

2024/07/06 10:11:27 INFO  :
Transferred:      241.377 MiB / 29.738 GiB, 1%, 2.126 MiB/s, ETA 3h56m53s
Checks:                 2 / 2, 100%
Renamed:                2
Transferred:            2 / 3, 67%
Server Side Moves:      2 @ 6.107 KiB
Elapsed time:       2m0.3s
Transferring:
 *                                  hugefile.zst:  0% /29.738Gi, 2.126Mi/s, 3h56m53s

2024/07/06 10:12:27 INFO  :
Transferred:      366.533 MiB / 29.738 GiB, 1%, 2.139 MiB/s, ETA 3h54m25s
Checks:                 2 / 2, 100%
Renamed:                2
Transferred:            2 / 3, 67%
Server Side Moves:      2 @ 6.107 KiB
Elapsed time:       3m0.3s
Transferring:
 *                                  hugefile.zst:  1% /29.738Gi, 2.139Mi/s, 3h54m23s

I believe this is currently not supported by the chunker backend. There are a number of limitations that make it hard to support (for instance, chunker may not know up front how big the source file is, so it cannot plan the chunks ahead of time), so I'm not sure it can even be supported.
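For illustration, this is the shape of what chunker writes (names taken from the log above; assuming the default 2Gi chunk size, which the config above does not override, the 29.7 GiB hugefile.zst would end up as roughly 15 sequentially numbered objects):

hugefile.zst.rclone_chunk.001
hugefile.zst.rclone_chunk.002
...
hugefile.zst.rclone_chunk.015

Chunker produces these by reading the source as a single stream, so each chunk upload can only start once the previous one has finished.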

As a workaround, you could point a chunker at local disk first and then upload the resulting chunk files to the cloud; those would upload in parallel like ordinary files, provided you have the additional local space. A rough sketch follows.
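A minimal, untested sketch (local-chunker and D:\chunk-staging are made-up names; the chunker settings mirror pcef-chunker so the cloud-side chunker can reassemble the files):

[local-chunker]
type = chunker
remote = D:\chunk-staging
chunk_size = 2Gi
hash_type = sha1all

:: step 1: chunk to local disk (still sequential, but local I/O is fast)
rclone copy S:\tst local-chunker:/tst -vv

:: step 2: upload the chunk files in parallel as plain files,
:: straight into the remote that pcef-chunker wraps
rclone copy D:\chunk-staging\tst pcloud-enc-full:/tst --transfers 4 -vv

After step 2, pcef-chunker:/tst should see the assembled files again, since the chunk names and metadata follow the same convention.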
