What is the problem you are having with rclone?
I use rclone move to transfer files every hour from a server to Wasabi (S3) with the following command:
rclone move $src $target --include="*.csv.gz" -v --no-traverse
Lately, Wasabi support has contacted me because my egress volume is too high, even though I barely download anything from Wasabi. When I enabled logging on the Wasabi side, I saw a huge number of requests like:
... [29/Sep/2020:05:25:03 +0000] ... REST.GET.BUCKET - "GET https://s3.eu-central-1.wasabisys.com/...?delimiter=...&max-keys=1024&prefix=..." 200 - - 0 18 0 "" "rclone/v1.36" -
One response I checked was 0.2 MB, and the directories contain thousands of files.
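This kind of listing traffic adds up quickly. As a rough back-of-envelope (the ~0.2 MB response size is from my sample above; the requests-per-run count is hypothetical):

```python
# Rough egress estimate from LIST responses alone.
# Assumptions: ~0.2 MB per LIST response (one sampled response);
# the requests-per-run figure is hypothetical, chosen because the
# directories contain thousands of files and max-keys is 1024 per page.
mb_per_response = 0.2
requests_per_run = 5000   # hypothetical
runs_per_day = 24         # the job runs hourly

daily_egress_gb = mb_per_response * requests_per_run * runs_per_day / 1024
print(f"{daily_egress_gb:.1f} GB/day")  # ≈ 23.4 GB/day from listings alone
```

So even modest per-response sizes can produce tens of gigabytes of egress per day when the job lists thousands of objects every hour.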
Yesterday I added --no-traverse, but it does not seem to make a difference.
I am now looking for a solution that simply moves the files matching the given pattern to Wasabi, without listing or checking anything on the destination. If I do not find a solution within the next ~36 hours, Wasabi will disable access to my data.
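For what it's worth, here is a sketch of what I am trying to achieve, assuming I can upgrade from v1.36 to a recent rclone (the --no-check-dest flag does not exist in v1.36):

```shell
#!/bin/sh
# Sketch only, assuming rclone v1.50 or later.
# --no-check-dest skips all destination listing and checking entirely:
# files are uploaded unconditionally, so a retried run could re-upload
# files that already arrived.
rclone move "$src" "$target" \
  --include "*.csv.gz" \
  --no-check-dest \
  -v
```

I am not sure whether this is the intended way to avoid the LIST requests, or whether upgrading alone would already reduce them.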
What is your rclone version (output from rclone version)
rclone v1.36
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Ubuntu 18.04.3 LTS
Which cloud storage system are you using? (eg Google Drive)
Wasabi
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone move $src $target --include="*.csv.gz" -v --no-traverse
The rclone config contents with secrets removed.
[wasabi]
type = s3
env_auth = false
access_key_id = ...
secret_access_key = ...
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com
location_constraint =
acl =
server_side_encryption =
A log from the command with the -vv flag
It's a bit difficult to provide this right now, but I can provide it later if necessary.