Rclone copy not copying files. Logs "ETA -, Elapsed time: 7h32m50.5s, Transferred: 0 / 0 Byte" even after hours

What is the problem you are having with rclone?

Rclone copy is not copying files. The log shows "ETA -, Elapsed time: 7h32m50.5s, Transferred: 0 / 0 Byte" even after hours of running rclone.

Run the command 'rclone version' and share the full output of the command.

rclone v1.56.0

  • os/version: redhat 6.10 (64 bit)

  • os/kernel: 2.6.32-754.35.1.el6.x86_64 (x86_64)

  • os/type: linux

  • os/arch: amd64

  • go/version: go1.16.5

  • go/linking: static

  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Object Storage to AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone: Version "v1.56.0" starting with parameters ["rclone" "copy" "-vv" "-P" "--progress" "ooss:FS2_FILEVAULT_PROD" "fds-s3:fds-document-bucket-prod-827397803982-48/8889" "--log-file=01-31-22.log" "--bwlimit=0" "--buffer-size=128M" "--fast-list" "--transfers=64" "--s3-disable-checksum"]
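For readability, here is the same invocation reconstructed as a plain shell command from the parameter list above:

$ rclone copy -vv -P --progress \
    ooss:FS2_FILEVAULT_PROD \
    fds-s3:fds-document-bucket-prod-827397803982-48/8889 \
    --log-file=01-31-22.log --bwlimit=0 --buffer-size=128M \
    --fast-list --transfers=64 --s3-disable-checksum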

The rclone config contents with secrets removed.

$ cat rclone.conf

[fds-s3]

type = s3

provider = AWS

env_auth = true

access_key_id = **************

secret_access_key = ************

region = us-east-1

acl = bucket-owner-full-control

storage_class = STANDARD

[ooss]

type = s3

provider = Other

env_auth = true

access_key_id = ************

secret_access_key = ***********

endpoint = ********.com

acl = bucket-owner-full-control

A log from the command with the -vv flag

$ cat 01-31-22.log

2022/01/31 04:50:12 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rclone" "copy" "-vv" "-P" "--progress" "ooss:FS2_FILEVAULT_PROD" "fds-s3:fds-document-bucket-prod-827397803982-48/8889" "--log-file=01-31-22.log" "--bwlimit=0" "--buffer-size=128M" "--fast-list" "--transfers=64" "--s3-disable-checksum"]

2022/01/31 04:50:12 DEBUG : Creating backend with remote "ooss:FS2_FILEVAULT_PROD"

2022/01/31 04:50:12 DEBUG : Using config file from "/home/bconnect/.config/rclone/rclone.conf"

2022/01/31 04:50:12 DEBUG : ooss: detected overridden config - adding "{1SSjr}" suffix to name

2022/01/31 04:50:12 DEBUG : fs cache: renaming cache item "ooss:FS2_FILEVAULT_PROD" to be canonical "ooss{1SSjr}:FS2_FILEVAULT_PROD"

2022/01/31 04:50:12 DEBUG : Creating backend with remote "fds-s3:fds-document-bucket-prod-827397803982-48/8889"

2022/01/31 04:50:12 DEBUG : fds-s3: detected overridden config - adding "{1SSjr}" suffix to name

2022/01/31 04:50:12 DEBUG : fs cache: renaming cache item "fds-s3:fds-document-bucket-prod-827397803982-48/8889" to be canonical "fds-s3{1SSjr}:fds-document-bucket-prod-827397803982-48/8889"

Is that the whole logfile? It's only a few seconds.

Nothing else is logged. That is the full log.

Is it using any CPU / doing anything on the system? I'd kill it and restart, as it seems like something is not running.

the command should start quickly. normally with s3, rclone has to calculate the hash before the upload starts, but you have disabled that with --s3-disable-checksum.

perhaps update to latest stable v1.57.0
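for example, assuming rclone was installed from the official zip or install script rather than a distro package, it can update itself in place:

$ rclone selfupdate
$ rclone version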

Did that almost 5 times in 3 different systems 🙂

Thanks, I will try again after the upgrade. It worked very well until last week. We have copied millions of files.

What is the size of the files in the bucket? Is it one big thing?

There are around 35 million files in the source, of various sizes ranging from 15 KB to 50 MB.

So one directory with 35 million files?

Folder with millions of files - Help and Support - rclone forum

Recommendations for using rclone with a minio 10M+ files - Help and Support - rclone forum

if the source and dest are both s3, then imho you should not use --s3-disable-checksum.

use --checksum, as per the docs
"If the source and destination are both S3 this is the recommended flag to use for maximum efficiency."

OP probably isn't even that far yet.

You get 1000 objects per list chunk with AWS S3, per @ncw's other post.

Assuming 1 second per 1000 (I'm not sure how valid that is), it would take about 9.7 hours to get a listing of 35 million files.
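The back-of-the-envelope arithmetic behind that estimate (the ~1 s per listing page is only an assumption):

$ # 35M objects / 1000 objects per LIST page * ~1 s per page, in hours
$ awk 'BEGIN { printf "%.1f hours\n", 35000000 / 1000 * 1 / 3600 }'
9.7 hours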

not an exact comparison, but for a set of 1,000,000 files,
rclone sync /local wasabi: --dry-run took just 26 seconds

https://forum.rclone.org/t/fastest-way-to-check-for-changes-in-2-5-million-files/25957/19

It's not even in the same ballpark.

OP has 35 times the files in the same directory, and I don't think it's a linear curve on getting files; it gets slower as the listing builds up.

well, you asked "So one directory with 35 million files?" and the OP has not answered yet.

just pointing out that scanning 1,000,000 files, on s3, can take just 26 seconds.
maybe aws s3 is super slow compared to wasabi.

Correct, one directory with 35 million files.

The command ran for more than 7 hours, staying at 0 / 0 Bytes transferred. Should I wait longer than that for the transfer to start?

to get a deeper look into what is going on,
--dump=headers --retries=1 --low-level-retries=1 --log-level=DEBUG --log-file=rclone.log
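for example, something like this (a sketch; --dump=headers makes the log very large, so maybe point it at a small sub-path first):

$ rclone copy ooss:FS2_FILEVAULT_PROD \
    fds-s3:fds-document-bucket-prod-827397803982-48/8889 \
    --checksum --fast-list --transfers=64 \
    --dump=headers --retries=1 --low-level-retries=1 \
    --log-level=DEBUG --log-file=rclone.log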

and did you see my suggestion, as per the rclone docs, to use --checksum if the source and dest are both s3?

yes, thank you!
I will re-try with the suggested flags, including --checksum, and post the results here.

And if you have the time, out of curiosity, let it run for like 10-12 hours.