I have this command which I use to transfer data to S3 Glacier Deep Archive:
rclone sync "D:\heavy folder" "aws":"heavy folder" --log-file "%logpath%" --log-level INFO --delete-after --copy-links --stats-one-line -P --stats 5s
Because on S3 Glacier Deep Archive storage cost is very low but requests are expensive, I am trying to reduce the number of requests. I am already sending my heavy folder in small chunks to reduce the number of list/get requests each time I restart the sync (the full sync will take many weeks and I need to switch off my computer daily).
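For example, each run syncs one subfolder at a time, something like this ("part1" here is just an illustration, not my real folder name):
rclone sync "D:\heavy folder\part1" "aws":"heavy folder/part1" --log-file "%logpath%" --log-level INFO --delete-after --copy-links --stats-one-line -P --stats 5s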
But I wonder if there are other things I can do to reduce my number of requests.
So I wonder: does displaying the stats with --stats-one-line -P --stats 5s generate requests, or are they calculated locally?
And what about using a log file?
Any other tips?
Thanks a lot!
I think I answered this in the other thread, but in summary:
They are calculated locally and a log file won't affect the number of requests.
Thanks a lot! That's exactly the information I was looking for.
I looked into --fast-list in the documentation, and it says to only use --fast-list if you can fit your entire sync listing into memory. I searched the forum but only found some mentions, no confirmation, so my question is: is this memory the RAM of my computer (in which case, if my RAM is 16 GB and my sync is 24 GB, it will not work)?
For those who want to understand what --fast-list and --size-only do: https://rclone.org/docs/#fast-list and https://rclone.org/docs/#size-only
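From what I read there, --size-only could also help in my case: it makes rclone compare files by size alone, so it should skip the per-object modtime check (which, if I understand the docs correctly, can cost an extra HEAD request per file on S3). A sketch of my command with it added:
rclone sync "D:\heavy folder" "aws":"heavy folder" --size-only --log-file "%logpath%" --log-level INFO --delete-after --copy-links --stats-one-line -P --stats 5s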
No, it's just the file and directory structure that is kept in memory so you should be fine.
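If you want to try it, adding --fast-list to your existing command should be enough, e.g.:
rclone sync "D:\heavy folder" "aws":"heavy folder" --fast-list --log-file "%logpath%" --log-level INFO --delete-after --copy-links --stats-one-line -P --stats 5s
It lists objects recursively, 1000 per request, instead of listing each directory separately, so it should reduce the number of list calls.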