Rclone sync to S3 slows down, then transfer errors and a crash

I'm syncing a large number of files (~400GB) to S3 (Deep Glacier class). They are video files, so sizes range from about 30MB to 800MB each. I have a 1Gb FTTH fiber connection to the internet. Using the AWS CLI to sync the files to my S3 bucket, the transfer stays stable at around 2-2.4 MB/s.
Using rclone, the transfer also starts at a high rate (around 4 MB/s), then slows down to about 200 KB/s.
At a certain point, files that are already completely transferred and show 100% stay in the progress list, locking their slot and preventing other files from being transferred, and then some errors start to show in the log. Eventually the process crashes.
Realizing I'm on a low-RAM machine, I ran the rclone command with reduced memory settings, but I still get the same behaviour.

  • rclone version: v1.53.3-DEV
  • os/arch: linux/arm
  • go version: go1.15.5

rclone is installed on an ASUS RT-AC56U router (256 MB RAM)

storage system: S3 (Deep Glacier class)

=> COMMAND:

#!/bin/sh
echo "----------------------------------------------------------------"
echo "Syncing /mnt/NAS/$1 to s3:$2"
echo "----------------------------------------------------------------"

/opt/bin/rclone sync /mnt/NAS/$1 s3:$2 \
  --log-file /mnt/NAS/linux/rclone_s3_$1.log \
  --delete-excluded \
  --size-only \
  --s3-no-check-bucket \
  --s3-storage-class DEEP_ARCHIVE \
  --progress \
  --verbose \
  --transfers 2 \
  --s3-chunk-size 32M \
  --checkers 1 \
  --use-mmap \
  --filter-from /jffs/myscripts/rclone_s3_filter.txt

=> RCLONE CONFIG:
[s3]
type = s3
provider = AWS
env_auth = false
access_key_id = **************
secret_access_key = ************
region = us-east-2
location_constraint = us-east-2
acl = private
storage_class = DEEP_ARCHIVE

hello and welcome to the forum,

perhaps,

  • reduce --s3-chunk-size, as you raised it above the default value.
  • reduce --s3-upload-concurrency
  • --transfers=1
  • drop --delete-excluded; per the docs, "Important: this flag is dangerous to your data".
  • --s3-storage-class DEEP_ARCHIVE is not needed, as that is already in the config file (a sketch of the adjusted command is below).
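
putting those together, something like this as a starting point. just a sketch based on the points above, keeping your paths and filter file as they are (5M is the default chunk size):

/opt/bin/rclone sync /mnt/NAS/$1 s3:$2 \
  --log-file /mnt/NAS/linux/rclone_s3_$1.log \
  --size-only \
  --s3-no-check-bucket \
  --progress \
  --verbose \
  --transfers 1 \
  --s3-chunk-size 5M \
  --s3-upload-concurrency 1 \
  --checkers 1 \
  --use-mmap \
  --filter-from /jffs/myscripts/rclone_s3_filter.txt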

"some errors start to show in the log"
can you post those? without them, we are kind of guessing as to the real problem.

Hi asdffdsa,
thanks for your suggestions. I had the chance to think them over and the situation is now much better, even if there is still something to improve.

I prefer to keep the chunk size at 32M because I'm transferring big files and I don't want to increase the number of PUT requests to S3, so as not to increase the cost. Anyway, I hadn't realized that even though I'm transferring only 2 files at a time, if both happen to be big files they are split into 32M chunks, so if I don't limit --s3-upload-concurrency (default 4) I end up with 32M x 4 x 2 = 256MB of memory allocated, which kills my machine. Now, with --s3-upload-concurrency reduced to 2 (though I will probably reduce it to 1), things are going much better (the maximum memory allocation should now be 128MB), the transfer has been up for 1h, and I got just 2 errors after 35m. I'm not sure what this error is related to. The relevant part of the log follows.
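
For reference, this is roughly how I'm estimating the memory: as far as I understand, the multipart upload buffers grow with transfers x upload-concurrency x chunk-size (plus whatever the Go runtime needs on top):

# rough upper bound on rclone's multipart upload buffers
# (my own estimate, not counting Go runtime overhead)
TRANSFERS=2
CONCURRENCY=2
CHUNK_MB=32
echo "approx buffer memory: $((TRANSFERS * CONCURRENCY * CHUNK_MB)) MB"   # -> 128 MB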

Regarding the --delete-excluded flag, it is OK in my case. For the storage class, yes, it is already in the config file, but the redundancy should not be an issue. Thanks a lot. I need to fix this error and then I can consider my rclone script reliable for syncing my local files to S3 using my ASUS RT-AC56U router, which has given me so much satisfaction in its 6 years of service. Over the years I have used rclone to sync from my NAS to Google Drive, from my PC to my local NAS, and to Amazon Cloud (while it was still possible :frowning: ), and this great software has always been very reliable and effective! Thanks to ncw!

====== partial LOG ====
2021/02/23 23:22:13 DEBUG : 2020_02/20200220-WA0001.mp4: MD5 = e894c88dd5953ccccd8aa5228a32d069 OK
2021/02/23 23:22:13 INFO : 2020_02/20200220-WA0001.mp4: Copied (new)
2021/02/23 23:22:16 DEBUG : 2020_02/20200220-WA0002.mp4: MD5 = dd2eb494d2833b5b44da47e791cd8d11 OK
2021/02/23 23:22:16 INFO : 2020_02/20200220-WA0002.mp4: Copied (new)
2021/02/23 23:22:19 DEBUG : 2020_02/20200220-WA0003.mp4: MD5 = 610d8a9952ac03ff0d168a89b89c89e6 OK
2021/02/23 23:22:19 INFO : 2020_02/20200220-WA0003.mp4: Copied (new)
2021/02/23 23:22:21 DEBUG : 2020_02/20200220-WA0004.mp4: MD5 = e35ea7210d447eb226ba887da58e2f98 OK
2021/02/23 23:22:21 INFO : 2020_02/20200220-WA0004.mp4: Copied (new)
2021/02/23 23:22:23 DEBUG : 2020_02/20200225-WA0008.mp4: MD5 = 3940951da593e2b9c6819f0632018804 OK
2021/02/23 23:22:23 INFO : 2020_02/20200225-WA0008.mp4: Copied (new)
2021/02/23 23:22:26 DEBUG : 2020_02/20200227-WA0013.mp4: MD5 = b7a2e407779e247bcb54c766e2144db1 OK
2021/02/23 23:22:26 INFO : 2020_02/20200227-WA0013.mp4: Copied (new)
2021/02/23 23:22:28 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 1 size 32M offset 0/534.006M
2021/02/23 23:22:30 DEBUG : 2020_02/20200227-WA0014.mp4: MD5 = 990e3b0ec9d36a68d15f3f9c6821643e OK
2021/02/23 23:22:30 INFO : 2020_02/20200227-WA0014.mp4: Copied (new)
2021/02/23 23:22:31 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 2 size 32M offset 32M/534.006M
2021/02/23 23:22:32 DEBUG : 2020_02/20200227_175439.mp4: MD5 = 472b54d17a0eeccc872a43d99880b383 OK
2021/02/23 23:22:32 INFO : 2020_02/20200227_175439.mp4: Copied (new)
2021/02/23 23:22:35 DEBUG : 2020_02/20200228-WA0001.mp4: MD5 = 9d3b1520a3d9f04e6e6c3efcfae89cfe OK
2021/02/23 23:22:35 INFO : 2020_02/20200228-WA0001.mp4: Copied (new)
2021/02/23 23:22:40 DEBUG : 2020_02/20200228-WA0003.mp4: MD5 = cc1f22b417e22d05a9e7bs7d213b1e61 OK
2021/02/23 23:22:40 INFO : 2020_02/20200228-WA0003.mp4: Copied (new)
2021/02/23 23:22:45 DEBUG : 2020_02/20200228-WA0016.mp4: MD5 = 8f9ec13ecs6a4a9fa6b54c7w27cd74d4 OK
2021/02/23 23:22:45 INFO : 2020_02/20200228-WA0016.mp4: Copied (new)
2021/02/23 23:23:26 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 3 size 32M offset 64M/534.006M
2021/02/23 23:23:42 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 4 size 32M offset 96M/534.006M
2021/02/23 23:23:57 DEBUG : 2020_02/20200228_183704.mp4: MD5 = 4e0c0d82a69c202509a09fbf28304b17 OK
2021/02/23 23:23:57 INFO : 2020_02/20200228_183704.mp4: Copied (new)
2021/02/23 23:24:23 DEBUG : 2020_02/20200229-WA0001.mp4: MD5 = 1327559dac3604a28c92d874fda5774a OK
2021/02/23 23:24:23 INFO : 2020_02/20200229-WA0001.mp4: Copied (new)
2021/02/23 23:24:28 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 5 size 32M offset 128M/534.006M
2021/02/23 23:24:33 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 6 size 32M offset 160M/534.006M
2021/02/23 23:24:38 DEBUG : 2020_02/20200229-WA0002.mp4: MD5 = 64bb24392204dde4a8786e9a7cde4400 OK
2021/02/23 23:24:38 INFO : 2020_02/20200229-WA0002.mp4: Copied (new)
2021/02/23 23:26:00 DEBUG : pacer: low level retry 1/10 (error RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
status code: 400, request id: 1XT0CK08855HWQWQ, host id: SGlEkygVkQ/RyBG6oYZTdBgChoX0wcaYw8G48LkHagWKMe987Fo9g8+booksgdvRlXujG7q+Y=)
2021/02/23 23:26:00 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2021/02/23 23:26:00 DEBUG : pacer: low level retry 1/10 (error RequestTimeout: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.
status code: 400, request id: 0VMBRWCGX0N83FX0, host id: vvE5OdwQ/z4BmzvUDwuX2l3o28/Mc1zI9EucZ42CoFKM8fe214uJBA89EU+Ul660EBDMj/74iw=)
2021/02/23 23:26:00 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2021/02/23 23:26:00 DEBUG : pacer: Reducing sleep to 15ms
2021/02/23 23:26:00 ERROR : 2020_02/20200229_123910.mp4: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
RequestTimeoutYour socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.DHGA6ASTKXEKZMG3Q/WPm36BHE6jOhSF34WOCQe34Cl0cMnDdMOibne71vRhyf67RT3Na8rX3KulcVai7kZ99Ok4=
2021/02/23 23:26:33 DEBUG : pacer: Reducing sleep to 11.25ms
2021/02/23 23:26:35 DEBUG : pacer: Reducing sleep to 0s
2021/02/23 23:26:35 DEBUG : 2020_02/20200229_124412.mp4: MD5 = 56b7d6d4908c558b167b984a94109ae3 OK
2021/02/23 23:26:35 INFO : 2020_02/20200229_124412.mp4: Copied (new)
2021/02/23 23:26:54 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 7 size 32M offset 192M/534.006M
2021/02/23 23:27:06 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 8 size 32M offset 224M/534.006M
2021/02/23 23:29:00 DEBUG : 2020_02/20200214_082723.mp4: multipart upload starting chunk 9 size 32M offset 256M/534.006M

good you got it working better.

as for that error, i backup to deep glacier every day, never saw that.
but in my case, a small number of files each time.

so let's wait until the sync is done and see how many errors, what kind of errors.

also, latest stable rclone is 1.54.0, lots of bug fixes

I have now reduced --s3-upload-concurrency to 2.
The script ran the whole night (around 6 hours) and transferred about 25GB to Deep Glacier. I then got 5 errors (the same type as the one attached to my previous post, so I will not upload a new log). After that I found rclone had been killed, without completing the upload, which was around 40GB in total. I'm not sure what these errors are related to.
The version of rclone I'm using is the latest available for my kind of hardware. As soon as 1.54.0 becomes available for it, I will install it.

those errors seem to be from a network issue.

as for rclone being killed, the low amount of memory will do that.
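
if you want to confirm it was the kernel's out-of-memory killer, and assuming dmesg is usable on the merlin firmware, something like this should show it:

# look for out-of-memory killer messages (they name the killed process, e.g. rclone)
dmesg | grep -iE 'out of memory|oom|killed process'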

Just to update: I now have a reliable working configuration for my low-RAM (256MB) ASUS RT-AC56U router running the Asuswrt-Merlin 384.6 firmware, syncing from my local NAS to Amazon S3 (Deep Glacier class).

I reduced --transfers to 1. Now the overall memory allocation stays around 200MB. The script ran for 2h20m without any errors and transferred around 15GB of data.
For my needs this is a good result.

Thanks a lot for your support

Transferred: 14.799G / 14.799 GBytes, 100%, 1.807 MBytes/s, ETA 0s
Checks: 4386 / 4386, 100%
Deleted: 30
Transferred: 79 / 79, 100%
Elapsed time: 2h19m58.4s

My final rclone script is the following:

echo "----------------------------------------------------------------"
echo "Syncing /mnt/NAS/$1 to s3:$2"
echo "----------------------------------------------------------------"

mv /mnt/NAS/linux/rclone_s3_$1.log /mnt/NAS/linux/log/rclone_s3_$1_$(date +%F).log
/opt/bin/rclone sync -v /mnt/NAS/$1 s3:$2 \
  --log-file /mnt/NAS/linux/rclone_s3_$1.log \
  --delete-excluded \
  --size-only \
  --s3-no-check-bucket \
  --s3-storage-class DEEP_ARCHIVE \
  --transfers 1 \
  --s3-chunk-size 32M \
  --s3-upload-concurrency 1 \
  --checkers 1 \
  --use-mmap \
  --filter-from /jffs/myscripts/rclone_s3_filter.txt
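
(In case it is useful to anyone reusing this: $1 is the subfolder under /mnt/NAS and $2 is the bucket/path, so the script is called as, for example, rclone_s3.sh videos mybucket/videos; the script name is just whatever you saved it as.)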


good, you got it working.

when i use deep glacier, i use copy --immutable, not sync.
sync will delete files in the dest.
if you upload a file today, then delete it tomorrow, aws will charge you for 180 days of storage.
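
for example, something along these lines; the paths and bucket name are just placeholders:

# copy never deletes anything in the destination;
# --immutable also makes rclone refuse to update files that already exist there
/opt/bin/rclone copy /mnt/NAS/videos s3:mybucket/videos \
  --immutable \
  --s3-storage-class DEEP_ARCHIVE \
  --s3-chunk-size 32M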

for hot storage i use wasabi, an s3 clone that does not charge ingress/egress fees or api calls.
for cold long term storage i use aws s3 deep glacier storage.

I'm using deep glacier as cloud backup for my video files (800GB). I think the backup model I came up with should work correctly with deep glacier.
My idea is this:

  • I will upload all the files using sync the first time
  • Most of the files will not change. I have been collecting video files for 20 years, and I expect that only the most recent years' files could change or be deleted
  • I enabled versioning on the deep archive bucket and a lifecycle rule which permanently removes old versions of files after 181 days (a sketch with the AWS CLI follows this list)
  • With versioning, deleted files are not really deleted but marked as deleted, as if they were a previous version of the file
  • This policy protects me from accidental deletion of files for at least 181 days, which is acceptable for me. At the same time it ensures Amazon will not charge me for early deletion of files (you know they charge you if you delete a file before 180 days in deep archive)
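
In case it is useful to anyone, this is roughly how that can be set up with the AWS CLI; the bucket name and rule ID below are placeholders, and the 181 days matches what I described above:

# enable versioning on the bucket (placeholder bucket name)
aws s3api put-bucket-versioning \
  --bucket mybucket \
  --versioning-configuration Status=Enabled

# lifecycle rule: permanently expire noncurrent (old or deleted) versions after 181 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 181 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
  --bucket mybucket \
  --lifecycle-configuration file://lifecycle.json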

So far I'm only using cold storage, and Amazon looks like the cheapest and most reliable solution.
I don't know Wasabi, but if I ever need hot storage I'll probably take a look. They don't have a cold storage solution, do they?

Thanks for your suggestion!

wasabi does not have cold storage.

it is good to have options.

in my case, i upload the most recent backup files including veeam backup files to wasabi.
then in case of a disaster, i can quickly download the data from wasabi, saturating my 1Gbps connection, at no additional cost and without any delay.
with aws deep glacier, the delay can be hours before you can access the data, and retrieval is very expensive.

I guess bulk retrieval from deep glacier is quite cheap. From my understanding, if I only need to recover this data in case of a local hardware failure, a 1-2 day delay is affordable for me, even with 800GB of data. Don't you think so?

sure, for home use, deep glacier is a very good option and what i do at home. even though i have a windows server at home.

well, not in a corporate environment,
if there was a natural disaster or ransomware attack, where i could not get into the building, or run electricity, and had no internet for an unknown period of time, i would need to get up and running at another location. so i cannot wait for "Bulk retrievals typically complete within 5 – 12 hours."
and then start to download veeam backup files, which could take many hours.
so i keep the most recent full backup and a week of incrementals at wasabi, and then older data goes to aws deep glacier.

Yes, perfectly got your point

Is deep glacier the cheapest long term storage? What are the gotchas?

not sure what the cheapest long term storage is, as some providers claim to offer unlimited space, but as storage increases, so does the throttling.

last time i checked, aws deep glacier is much cheaper than google cloud.
$1.01/TB/month + api calls.
tho, the data has to stay there for 180 days.
so if you upload a file today, then delete it tomorrow, you are still charged the pro-rated cost of the full 180 days of storage.
that is why i use rclone copy --immutable with a large chunk size.
i do not use sync and do not upload lots of small files, i zip them first.

then there is the cost of retrieval from deep glacier
"For long-term data archiving that is accessed once or twice in a year and can be restored within 12 hours"

Retrieval tier   Cost per 1TB   Typical time
Expedited        $30.72         1 – 5 minutes
Standard         $10.24         3 – 5 hours
Bulk             $2.56          5 – 12 hours
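
as a side note, rclone itself can kick off the restore when the time comes. this uses the s3 backend restore command; the bucket path, priority and lifetime values below are just example placeholders:

# ask s3 to restore objects from deep archive using the cheap Bulk tier,
# keeping the restored copies readable for 7 days
rclone backend restore s3:mybucket/videos -o priority=Bulk -o lifetime=7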

for my use-cases, wasabi, an s3 clone, for hot storage, and aws s3 deep glacier for cold storage work well.
this way i only need to be proficient with s3 and s3 tools.


Thanks for a nice summary. Will think about my archival storage needs.

For deletion, you can set up versioning directly in AWS together with a lifecycle policy to completely remove older versions after 180 days. In that case a deleted file is just marked as an old version, so you are not charged for early deletion. Moreover, you get a 180-day buffer covering accidental deletions or overwrites. I think this is a good feature.

thanks,
yes, i use the versioning, lifecycle features and require MFA for deletions.
