I have the same problem that has already been reported several times... Yet maybe a workaround exists that I am not aware of:
I am trying to sync an S3 bucket to a local SSD. The problem is that there are millions of files (about 4 million) in the root remote directory. RAM keeps growing and, about 1 hour after launching the command, the process is killed.
What is the way to synchronize in such a situation (tons of files in one directory)?
thanks
Run the command 'rclone version' and share the full output of the command.
rclone v1.61.1
os/version: ubuntu 20.04 (64 bit)
os/kernel: 4.4.180+ (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.19.4
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
S3 (remote) - SSD (local)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
I think this command can be optimized because it is currently taking hours to "copy" fewer than 400 small files.
Is there any flag that can be used to optimize it?
--- hard to be sure without seeing the debug log.
--- perhaps split the list of files to transfer into smaller files and run rclone copy against each of them; see the sketch after this list.
--- perhaps do not use --no-traverse
"if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse."