What is the problem you are having with rclone?
It's using too much memory: Linux kills it once it reaches around 16 GB, which is all the memory the server has.
Note that the maximum number of files in a single directory is 70000.
This is for a directory like:
dir1/dir2/dir3 (with 70000 files here)
dir1/dir2 obviously contains more files in total, because it holds several directories like:
dir1/dir2/dir3 (70000 files)
dir1/dir2/dir3-2 (with 50000 files or whatever)
I've read that rclone reads the whole listing of a directory into memory; I'm assuming that means a single directory level, not including its subdirectories (otherwise "/" would be problematic...).
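As a quick sanity check (my own idea, and assuming I'm right that rclone lsf lists a single directory level unless given -R), I can list just the 70000-file directory on its own and watch rclone's memory while it runs:

# count the entries of the big directory without recursing
# (dir1/dir2/dir3 is the placeholder path from above)
rclone lsf --config a_config_file bucket1-readonly://the_bucket_name/dir1/dir2/dir3 | wc -l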
Also, I've tried --buffer-size 100M, but I think that option is for the VFS layer, not for copying from bucket to bucket (same goes for the other --vfs options). I've also tried --buffer-size 16, with the same problem.
Some files might be big, but I guess rclone does not read whole files into memory before sending them to the destination bucket. No FUSE is involved here.
I'm going to try --s3-upload-concurrency now and see what happens, but regardless of that, any other suggestions? I'll update the post in half an hour (it usually fails within 10 minutes with the current set of files).
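For the record, this is roughly what I'm about to try (the exact values are guesses on my part, not something from the docs). My understanding is that each transfer can hold up to --s3-upload-concurrency chunks of --s3-chunk-size in memory at once, so shrinking those should cap memory use:

# same copy as above, with fewer and smaller upload buffers (values are my guesses)
rclone copy --bwlimit 20M --config a_config_file \
  --transfers 2 \
  --s3-upload-concurrency 1 \
  --s3-chunk-size 16M \
  --verbose \
  bucket1-readonly://the_bucket_name aws-readwrite:data/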
What is your rclone version (output from rclone version)
rclone v1.50.2
- os/arch: linux/amd64
- go version: go1.13.4
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Red Hat Enterprise Linux Server release 7.7 (Maipo)
64 bit
Which cloud storage system are you using? (eg Google Drive)
Reading from an S3-alike (Scality), copying to Amazon S3 with the DEEP_ARCHIVE storage class
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy --bwlimit 20M --config a_config_file --verbose bucket1-readonly://the_bucket_name aws-readwrite:data/
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
Full log below.
The output says:
fatal error: runtime: out of memory
Not in this run, but earlier, when I had this running, I saw this in the kernel log:
Dec 27 00:02:19 servername kernel: Killed process 57456 (rclone), UID 1000, total-vm:43916380kB, anon-rss:14995024kB, file-rss:0kB, shmem-rss:0kB
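Converting that line (quick shell arithmetic), rclone was holding about 14 GiB resident when the kernel killed it, which matches the roughly 16 GB the machine has:

# anon-rss from the kernel line above, converted from kB to GiB
echo $((14995024 / 1024 / 1024))   # 14 (GiB)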
Log:
Transferred: 9.300G / 665.502 GBytes, 1%, 19.849 MBytes/s, ETA 9h24m12s
Errors: 0
Checks: 3349 / 3349, 100%
Transferred: 91 / 10103, 1%
Elapsed time: 7m59.7s
Transferring:
- data_admin/servers/ace…13-20-10-roundcube.sql: 80% /6.492M, 325.658k/s, 3s
- data_admin/servers/ace…2-11-20-10-ace2016.sql:100% /1.781G, 2.175M/s, 0s
- data_admin/servers/ace…2-12-20-10-ace2016.sql: 45% /1.801G, 9.383M/s, 1m48s
- data_admin/servers/ace…2-13-20-10-ace2016.sql: 14% /1.819G, 7.940M/s, 3m20s
2019/12/27 22:55:34 DEBUG : data_admin/servers/ace-intranet/2017-02-13-20-10-roundcube.sql: MD5 = 7004d8342897e10350c966c0c01e4e67 OK
2019/12/27 22:55:34 INFO : data_admin/servers/ace-intranet/2017-02-13-20-10-roundcube.sql: Copied (new)
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0x168c51d, 0x16)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/panic.go:774 +0x72
runtime.sysMap(0xc78c000000, 0x140000000, 0x2410438)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mem_linux.go:169 +0xc5
runtime.(*mheap).sysAlloc(0x23f76a0, 0x140000000, 0xc000000001, 0x3004372a7)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:701 +0x1cd
runtime.(*mheap).grow(0x23f76a0, 0xa0000, 0xffffffff)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1255 +0xa3
runtime.(*mheap).allocSpanLocked(0x23f76a0, 0xa0000, 0x2410448, 0x1c82ea0)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1170 +0x266
runtime.(*mheap).alloc_m(0x23f76a0, 0xa0000, 0x101, 0xc000103f20)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1022 +0xc2
runtime.(*mheap).alloc.func1()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1093 +0x4c
runtime.(*mheap).alloc(0x23f76a0, 0xa0000, 0xc002010101, 0xc000150a80)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/mheap.go:1092 +0x8a
runtime.largeAlloc(0x140000000, 0x101, 0xc0001a7500)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:1138 +0x97
runtime.mallocgc.func1()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:1033 +0x46
runtime.systemstack(0x0)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/asm_amd64.s:370 +0x66
runtime.mstart()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/proc.go:1146
goroutine 33 [running]:
runtime.systemstack_switch()
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/asm_amd64.s:330 fp=0xc00093b210 sp=0xc00093b208 pc=0x45d150
runtime.mallocgc(0x140000000, 0x138e9c0, 0xc0000c9501, 0x0)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/malloc.go:1032 +0x895 fp=0xc00093b2b0 sp=0xc00093b210 pc=0x40c8e5
runtime.makeslice(0x138e9c0, 0x140000000, 0x140000000, 0xc0000c9500)
/opt/hostedtoolcache/go/1.13.4/x64/src/runtime/slice.go:49 +0x6c fp=0xc00093b2e0 sp=0xc00093b2b0 pc=0x445ccc
github.com/rclone/rclone/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager.(*uploader).init.func1(0xc002bcf650, 0x0)
/home/runner/work/rclone/src/github.com/rclone/rclone/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go:400 +0x44 fp=0xc00093b318 sp=0xc00093b2e0 pc=0x1059724
sync.(*Pool).Get(0xc002bcf650, 0x19485c0, 0xc000494a90)
/opt/hostedtoolcache/go/1.13.4/x64/src/sync/pool.go:148 +0xa6 fp=0xc00093b360 sp=0xc00093b318 pc=0x46e856
github.com/rclone/rclone/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager.(*uploader).nextReader(0xc002bcf5e0, 0x0, 0x0, 0x8, 0x8, 0xc001de80f8, 0x10, 0x18, 0xc00006bc00)
/home/runner/work/rclone/src/github.com/rclone/rclone/vendor/github.com/aws/aws-sdk-go/service/s3/s3manager/upload.go:461 +0x66 fp=0xc00093b420 sp=0xc00093b360 pc=0x1057636
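If I'm reading the trace right, the fatal allocation is the makeslice inside the s3manager upload buffer pool (the upload.go:400 frame above), and the requested length is 0x140000000 bytes:

# the slice size requested in the makeslice frame above
printf '%d\n' 0x140000000           # 5368709120 bytes
echo $((0x140000000 / 1024**3))     # 5 (GiB)

So, unless I'm misreading it, the uploader is trying to hand out 5 GiB buffers for multipart parts, and a few of those in flight at once would easily exceed the 16 GB this server has.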