With rclone copy --ignore-existing, existing files are also getting copied

I’m copying data from one remote (Google Drive) to another (AWS S3).

rclone copy --ignore-existing -v GoogleDriveRemote:Files S3Remote:MyBucket/Backup-Documents
My source Google Drive has a deeply nested folder structure, i.e. Folder(s) > Folder(s) > Folder(s) > File(s).
Before putting this into place, I tested the above command and got what I needed, i.e. only new files were copied from source to destination and the existing files (on the destination) were left as they are.
But now, when I run this command a second or third time, it shows:
2017/06/19 02:01:00 INFO : xxxxxxxxx:Copied (new)
for files it already copied last time.
Also, the speed is very slow, starting as low as 481 Bytes/s:

2017/06/19 02:01:04 INFO  :
Transferred:   29.437 kBytes (481 Bytes/s)
Errors:                 0
Checks:                19
Transferred:            1
Elapsed time:      1m2.5s

And after about 88 hours of running, the overall totals are:

Transferred:   99.646 GBytes (327.457 kBytes/s)
Errors:               118
Checks:            432301
Transferred:       376146
Elapsed time:  88h38m2.5s

It is taking too much time. The stats show almost 100 GB transferred, but only around 40 GB of files have actually been copied.
I have gone through https://github.com/ncw/rclone/issues/517 but did not fully understand it.

I’m running the rclone copy as a cron job on a 64-bit Ubuntu 14.04 EC2 instance.
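For reference, the cron entry is roughly along these lines (the schedule, log path and rclone location here are illustrative, not the exact setup):

0 2 * * * /usr/bin/rclone copy --ignore-existing -v --log-file=/home/ubuntu/rclone-backup.log GoogleDriveRemote:Files S3Remote:MyBucket/Backup-Documents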

Any help is highly appreciated. Thank you.

Check Google Drive for duplicates using rclone dedupe GoogleDriveRemote:Files - that is likely the problem.

Dedupe will also let you fix the duplicates - see the docs.
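For example, something like this - run it interactively first, then pick an automatic mode if you are happy with it (the newest mode below is just one of the strategies described in the docs):

rclone dedupe GoogleDriveRemote:Files
rclone dedupe --dedupe-mode newest GoogleDriveRemote:Files

The first form is interactive by default; the second keeps the newest copy of each duplicate automatically.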

If you want it to go faster, try increasing --checkers. If you use --checksum or --size-only it will run much faster, as it doesn’t have to do another HTTP query on S3 to check the modtime.
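For example, something along these lines (16 checkers is just a starting point to tune for your instance):

rclone copy --ignore-existing --checksum --checkers 16 -v GoogleDriveRemote:Files S3Remote:MyBucket/Backup-Documents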

Also try the latest beta. If you have enough memory then you can use --fast-list which will save you on S3 transactions and may or may not be faster.
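With the beta, that would look something like:

rclone copy --ignore-existing --checksum --checkers 16 --fast-list -v GoogleDriveRemote:Files S3Remote:MyBucket/Backup-Documents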

Thank you @ncw for the reply. I cross-checked for duplicates but there are none.
The newly noticed issue is that it is copying the same file multiple times. I confirmed this in S3: the file’s last-modified time keeps changing. I have attached a screenshot of this, please have a look.

Another issue is that after the third attempt the copy operation terminates. Here are fragments of the output:

2017/06/29 16:10:48 ERROR : Attempt 1/3 failed with 206 errors and: object not found
2017/06/29 16:11:03 INFO  : 
Transferred:   36.481 GBytes (325.942 kBytes/s)
Errors:                 0
Checks:            146481
Transferred:       137985
Elapsed time:  32h36m1.8s

…

2017/06/30 08:48:55 ERROR : Attempt 2/3 failed with 81 errors and: corrupted on transfer: sizes differ 332131 vs 328372
2017/06/30 08:49:03 INFO  : 
Transferred:   71.730 GBytes (424.357 kBytes/s)
Errors:                 0
Checks:            299671
Transferred:       271569
Elapsed time:  49h14m1.8s

…

2017/07/01 00:55:43 ERROR: Attempt 3/3 failed with 75 errors and: corrupted on transfer: sizes differ 73276 vs 53437
2017/07/01 00:55:43 Failed to copy: corrupted on transfer: sizes differ 73276 vs 53437

Just to be clear, you ran rclone dedupe on Google Drive?

And are you trying this with the latest beta?

If yes to both of those, then please make a new issue on GitHub.