I'm using the latest release.
One HEAD request is made to check whether the file already exists (because of the
Is it possible to skip this request and simply copy new and modified files to the bucket without checking whether they already exist?
I've changed the daily backup script to:
/usr/bin/rclone copy "/volume1/myfolder" "AmazonS3DeepGlacier:mybucket/myfolder" --max-age 24h --no-traverse --ignore-times --exclude "#recycle/**" --exclude "@eaDir/**" -v --config="/var/services/homes/admin/.config/rclone/rclone.conf" --track-renames
The script that runs monthly is:
/usr/bin/rclone sync "/volume1/myfolder" "AmazonS3DeepGlacier:mybucket/myfolder" --fast-list --exclude "#recycle/**" --exclude "@eaDir/**" -v --config="/var/services/homes/admin/.config/rclone/rclone.conf" --checksum --track-renames
One of the folders I back up to the Amazon S3 bucket has 1,224 files and 124 directories (388 MB).
The other folder has 238,508 files and 52,433 directories (40 GB).
Every day only a few files (say, 10) are changed or added.
Because Amazon charges per request, I would like to keep the number of requests as low as possible.
If I have, for example, 5 new files, I would like to see only 5 PUT requests. Perhaps also 5 HEAD requests to confirm the files were uploaded properly, but in my case even those are unnecessary, because the monthly script checks the whole folder anyway. So is it possible to avoid all the HEAD requests?
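As a sketch of what I have in mind, the daily script could perhaps be changed to something like the following, assuming rclone's documented `--no-check-dest` flag (upload without checking the destination first) and the S3 backend's `--s3-no-head` option (skip the HEAD request after upload) do what I think they do:

```shell
# Hedged sketch, not tested against my bucket:
# --no-check-dest should skip the pre-upload existence HEAD,
# --s3-no-head should skip the integrity HEAD after each PUT,
# leaving (I hope) only one PUT per changed file.
/usr/bin/rclone copy "/volume1/myfolder" "AmazonS3DeepGlacier:mybucket/myfolder" \
  --max-age 24h \
  --no-check-dest \
  --s3-no-head \
  --exclude "#recycle/**" --exclude "@eaDir/**" \
  -v --config="/var/services/homes/admin/.config/rclone/rclone.conf"
```

If I understand correctly, `--no-check-dest` uploads every matched file unconditionally, so a file modified twice within 24 hours would be uploaded twice; the monthly `sync` with `--checksum` would still reconcile the bucket afterwards. Is that the right approach, or is there a better combination of flags?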