Hi, I’m the author of https://natalian.org/2018/01/27/S3_versus_Spaces/ and I’m simply trying to mirror my authoritative S3 bucket to Digital Ocean’s “Space” with rclone sync --fast-list -v s3:s.natalian.org spaces:natalian.
It’s never completed without error. After an hour it usually exits like so:
2018/02/28 14:50:29 INFO : S3 bucket natalian: Waiting for checks to finish
2018/02/28 14:50:30 INFO : S3 bucket natalian: Waiting for transfers to finish
2018/02/28 14:50:30 ERROR : S3 bucket natalian: not deleting files as there were IO errors
2018/02/28 14:50:30 ERROR : S3 bucket natalian: not deleting directories as there were IO errors
2018/02/28 14:50:30 ERROR : Attempt 3/3 failed with 1 errors and: corrupted on transfer: sizes differ 361475 vs 5487094
2018/02/28 14:50:30 Failed to sync: corrupted on transfer: sizes differ 361475 vs 5487094
Any ideas how to proceed? I’ve retried the command about ten times over the space of two weeks, and I can’t tell whether it’s actually working through the issues.
I’m an Arch Linux user.
rclone v1.39
- os/arch: linux/amd64
- go version: go1.9.2
Once the mirror is complete, I hope to set up a service that runs rclone against a YYYY-MM prefix every day, to make sure the mirror stays good.
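That daily prefix-scoped run could look something like this (just a sketch, assuming the same remote names as the command above and that objects are laid out under YYYY-MM prefixes):

```shell
#!/bin/sh
# Sketch: sync only objects under the current YYYY-MM prefix, so the
# daily job only touches a small slice of the bucket. Remote names are
# taken from the original command; the prefix layout is an assumption.
PREFIX=$(date +%Y-%m)
echo "syncing prefix $PREFIX"
rclone sync --fast-list -v "s3:s.natalian.org/$PREFIX" "spaces:natalian/$PREFIX"
```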
Since I’m having so much difficulty mirroring my data, I wonder how other businesses back up their data across cloud services!
If you could make a log with -vv and email it to me at nick@craig-wood.com then I’ll take a look. Can you put a link to this thread in the email so I can keep it straight!
DO puts some files in your new bucket for you - rclone deleted those because they didn’t exist on the source and you used sync. Use copy instead if you don’t want that behaviour.
That is only a DEBUG message and it means “16 goroutines at exit”. You won’t see it with -v.
I’ll reword it, since you aren’t the first person to have misread it.
I was trying to replicate this problem to report it to Digital Ocean but I couldn’t… What is in that 2013-07-30/encode.log.gz file? Any chance you could email it to me at nick@craig-wood.com so I can experiment further?
I don’t have any problems uploading that file, and it looks like you didn’t either in your no-gzip log.
No, but you can set an environment variable, e.g. export RCLONE_NO_GZIP_ENCODING=true
A daily cron job is what most people do. Though I expect you could be cleverer if you knew something about S3 events…
Note that if you use --checksum it will likely run faster, as the default modtime check costs an extra transaction per object, so I’d expect the check with --fast-list to be extremely quick!
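Putting the cron suggestion and the flags together, a minimal daily job might look like this (a sketch only: the script path and schedule are illustrative, the remote names come from the thread above):

```shell
#!/bin/sh
# Hypothetical daily mirror script, e.g. saved as /usr/local/bin/mirror-natalian
# and run from cron with an entry like:
#   0 3 * * * /usr/local/bin/mirror-natalian
# --checksum avoids the per-object modtime lookup;
# --fast-list reduces the number of listing calls.
rclone sync --checksum --fast-list s3:s.natalian.org spaces:natalian
```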