Give it time. It will complete (provided you have enough RAM to hold the listing). How long it takes to list and start transferring 5m objects depends on your setup - it can easily take hours.
See similar subject discussions:
This is most likely rclone doing HEAD requests to read the modtime of each object.
You can stop it doing this with the --size-only or --checksum flags, and the sync should start much more quickly.
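For illustration, a minimal sketch of how those flags could be combined with a dry run (the remote and bucket names below are placeholders, not taken from the original post):

```
# Compare by size only, skipping the per-object HEAD requests used to read modtimes
rclone sync s3remote:big-bucket dest:backup --size-only --dry-run

# Or compare by hash instead of modtime (hashes must be available on both sides)
rclone sync s3remote:big-bucket dest:backup --checksum --dry-run
```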
Are a great number of those 80 million files in the same directory? That is what your out-of-memory error makes me think.
The problem is big syncs with millions of files in one directory. Rclone syncs on a directory-by-directory basis, so you can have 10,000,000 directories with 1,000 files in each and it will sync fine, but …
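As a hedged sketch, one way to check whether objects are concentrated under a single prefix (bucket and prefix names here are placeholders):

```
# List only the top-level "directories" (prefixes) in the bucket
rclone lsf --dirs-only s3remote:big-bucket

# Count objects and total size under one prefix; a single prefix holding
# tens of millions of keys is what drives rclone's memory use, since each
# directory listing is held in RAM while syncing
rclone size s3remote:big-bucket/some-prefix
```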
What is the problem you are having with rclone?
I'm trying to start a dry run of a copy for a big S3 bucket (44TB and 44.5 million objects), and the command I ran sat in the indexing/pre-copy phase for 58 hours before I aborted it.
Run the command 'rclone version' and share the full output of the command.
rclone v1.…