I’m using rclone to maximise throughput during a DB copy (~10,000 files, up to 10 TB total) over NFS.
```
rclone v1.71.2
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-216-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.25.3
- go/linking: static
- go/tags: none
```
I’m using nfs-kernel-server on Ubuntu 22.04; the export is mounted with:
```
rt1:/mnt/disk on /mnt/rt1disk type nfs4 (rw,nosuid,nodev,noexec,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,local_lock=none,user)
```
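For reference, this is roughly the corresponding /etc/fstab entry (reconstructed from the mount output above, so treat it as an approximation rather than the exact line I use):
```
rt1:/mnt/disk  /mnt/rt1disk  nfs4  rw,nosuid,nodev,noexec,soft,proto=tcp,timeo=600,retrans=2,rsize=1048576,wsize=1048576,sec=sys,user,vers=4.2  0  0
```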
/root/.config/rclone/rclone.conf:
```
checkers=16
transfers=8

[rt1disk]
type = chunker
remote = /mnt/rt1disk
chunk_size = 250M
hash_type = md5quick
```
This config previously allowed me to use effectively all of the bandwidth during a copy:
```
rclone --metadata copy /mnt/disk/backup/mytestdb rt1disk:mytestdb-202510
```
splitting large files into smaller chunks and transferring them in parallel. The chunks were combined automatically back into the original files, and everything was fine.
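For example, after a copy finished I checked the result with something along these lines (the exact commands and paths are illustrative, from memory):
```
# compare source directory against the chunker remote by size/hash
rclone check /mnt/disk/backup/mytestdb rt1disk:mytestdb-202510
# list what the chunker remote presents
rclone ls rt1disk:mytestdb-202510
```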
Output of `rclone config redacted`:
```
[DEFAULT]
# couldn't find type of fs for "DEFAULT"
checkers = 16
transfers = 8

[rt1disk]
type = chunker
remote = /mnt/rt1disk
chunk_size = 512M
hash_type = md5quick

[rt2disk]
type = chunker
remote = /mnt/rt2disk
hash_type = md5quick

[rt3disk]
type = chunker
remote = /mnt/rt3disk
hash_type = md5quick

### Double check the config for sensitive info before posting publicly
```
```
rclone -vv copy /mnt/disk/mariadb/gcache/galera.cache rt1disk:mariadb2
```
The command above leaves the chunks in place on the destination without merging them…
Here is the log from the command, run with the -vv flag:
```
2025/11/07 12:17:23 DEBUG : rclone: Version "v1.71.2" starting with parameters ["rclone" "-vv" "copy" "/mnt/disk/mariadb/gcache/galera.cache" "rt1disk:mariadb2"]
2025/11/07 12:17:23 DEBUG : Creating backend with remote "/mnt/disk/mariadb/gcache/galera.cache"
2025/11/07 12:17:23 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2025/11/07 12:17:23 DEBUG : fs cache: renaming child cache item "/mnt/disk/mariadb/gcache/galera.cache" to be canonical for parent "/mnt/disk/mariadb/gcache"
2025/11/07 12:17:23 DEBUG : Creating backend with remote "rt1disk:mariadb2"
2025/11/07 12:17:23 DEBUG : Creating backend with remote "/mnt/rt1disk/mariadb2"
2025/11/07 12:17:23 DEBUG : galera.cache: Need to transfer - File not found at Destination
2025/11/07 12:17:23 DEBUG : galera.cache: skip slow MD5 on source file, hashing in-transit
2025/11/07 12:17:23 DEBUG : preAllocate: got error on fallocate, trying combination 1/2: operation not supported
2025/11/07 12:17:25 INFO : galera.cache.rclone_chunk.001_cssy3y: Moved (server-side) to: galera.cache.rclone_chunk.001
2025/11/07 12:17:25 INFO : galera.cache.rclone_chunk.002_cssy3y: Moved (server-side) to: galera.cache.rclone_chunk.002
2025/11/07 12:17:25 INFO : galera.cache: Copied (new)
2025/11/07 12:17:25 INFO :
Transferred: 512.001 MiB / 512.001 MiB, 100%, 498.909 MiB/s, ETA 0s
Checks: 2 / 2, 100%, Listed 0
Renamed: 2
Transferred: 1 / 1, 100%
Server Side Moves: 2 @ 512.001 MiB
Elapsed time: 1.5s
2025/11/07 12:17:25 DEBUG : 3 go routines active
```
While
```
rclone ls rt1disk:mariadb2
536872216 galera.cache
```
shows that everything is OK, the real directory content looks different:
```
ls -al /mnt/rt1disk/mariadb2
total 524304
drwxr-xr-x 2 root root 4096 Nov 7 12:17 .
drwxr-xr-x 7 root root 4096 Nov 7 12:17 ..
-rw-r--r-- 1 root root 79 Nov 7 10:12 galera.cache
-rw-r--r-- 1 root root 536870912 Nov 7 10:12 galera.cache.rclone_chunk.001
-rw-r--r-- 1 root root 1304 Nov 7 10:12 galera.cache.rclone_chunk.002
```
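If I understand it correctly, the chunker only presents the combined file when it is accessed through the remote, reassembling the chunks on the fly. For example, a check like this (illustrative, not from my logs) should show matching hashes:
```
# read through the chunker remote: chunks are reassembled transparently
rclone cat rt1disk:mariadb2/galera.cache | md5sum
# compare with the original source file
md5sum /mnt/disk/mariadb/gcache/galera.cache
```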
I’ve realized that the rclone chunker seems to be working as designed, but how do I split large files into pieces so that they transfer in parallel?
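Is something like the following the intended way to do that with a plain local/NFS destination? The flags and values below are just my guess at the multi-thread copy options, not something I have confirmed works through the chunker overlay:
```
rclone copy --metadata --transfers 8 \
  --multi-thread-streams 8 --multi-thread-cutoff 256M \
  /mnt/disk/backup/mytestdb rt1disk:mytestdb-202510
```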
Thanks in advance