Huge download volume from directory listings despite --no-traverse when using rclone move

What is the problem you are having with rclone?

I use rclone move to transfer files every hour from a server to Wasabi (S3) with the following command:

rclone move $src $target --include="*.csv.gz" -v --no-traverse
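The job runs hourly from cron, roughly like this (the crontab entry is paraphrased; $src and $target stand in for the real paths):

0 * * * * rclone move $src $target --include="*.csv.gz" -v --no-traverse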

Recently, Wasabi support contacted me because my egress volume is too high. However, I barely download anything from Wasabi. When I enabled logging on Wasabi, I saw a huge number of requests like:

... [29/Sep/2020:05:25:03 +0000] ... REST.GET.BUCKET - "GET https://s3.eu-central-1.wasabisys.com/...?delimiter=...&max-keys=1024&prefix=..." 200 - - 0 18 0 "" "rclone/v1.36" -

One response I checked was about 0.2 MB, and the directories contain thousands of files.
Yesterday I added --no-traverse, but it does not seem to make a difference.

I am now looking for a solution that simply moves the files matching the given pattern to Wasabi without listing or checking anything. If I do not find one within the next ~36 hours, Wasabi will disable access to my data.

What is your rclone version (output from rclone version)

rclone v1.36

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04.3 LTS

Which cloud storage system are you using? (eg Google Drive)

Wasabi

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move $src $target --include="*.csv.gz" -v --no-traverse

The rclone config contents with secrets removed.

[wasabi]
type = s3
env_auth = false
access_key_id = ...
secret_access_key = ...
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com
location_constraint =
acl =
server_side_encryption =

A log from the command with the -vv flag

It's a bit difficult to provide this right now. However, I can provide it later if necessary.
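When I do, I would capture it roughly like this (same command with debug logging; the log path is just an example):

rclone move $src $target --include="*.csv.gz" -vv --no-traverse --log-file=/tmp/rclone-move.log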

This version is quite a few years old. Please try again with the latest available version (1.53).

Adding --fast-list may also help reduce the number of requests.
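For example (untested, just your original command with the flag added):

rclone move $src $target --include="*.csv.gz" -v --no-traverse --fast-list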

Thanks. I have updated to the latest version. It will take ~2h before I can say whether this has helped, but from some local tests I do not expect it to solve the issue.

What surprises me is that the directory listing happens at all even though --no-traverse is provided. Is this expected?

Reducing the number of requests (--fast-list) will not help unless the transferred data is reduced as well.

You may also have luck reducing transactions with --s3-no-check-bucket and --no-check-dest --retries 1.

Hopefully, the combination of --no-traverse --fast-list --s3-no-check-bucket is enough since --no-check-dest --retries 1 is pretty drastic.
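In concrete terms, the two variants would look roughly like this (untested sketches based on your original command):

# milder option: add --fast-list and skip the bucket existence check
rclone move $src $target --include="*.csv.gz" -v --no-traverse --fast-list --s3-no-check-bucket

# more drastic option: also skip checking the destination and make a single pass
rclone move $src $target --include="*.csv.gz" -v --no-traverse --s3-no-check-bucket --no-check-dest --retries 1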

Please paste the debug logs if neither of them helps.

Thanks! --no-check-dest looks promising. I will update the thread once I have results.

It looks like --no-check-dest has done the trick (this is the only option I have added).
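For reference, the full command is now roughly:

rclone move $src $target --include="*.csv.gz" -v --no-traverse --no-check-dest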
