Tree, copy and sync commands infinite loop

What is the problem you are having with rclone?

I'm trying to copy a bucket from one S3 storage host to another. But if I run the tree command on the source bucket, or a copy/sync command, they never finish and cause very large memory consumption.
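For example, the copy between the two hosts is roughly this (the destination remote name s3dest is just a placeholder here, not my real config):

rclone -vv copy s3:tvisual_accounts s3dest:tvisual_accounts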

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.0

  • os/version: darwin 12.4 (64 bit)
  • os/kernel: 21.5.0 (arm64)
  • os/type: darwin
  • os/arch: arm64
  • go/version: go1.18.3
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Local company's AWS S3 implementation.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vv tree s3:tvisual_accounts

The rclone config contents with secrets removed.

[s3]
type = s3
provider = AWS
access_key_id = *
secret_access_key = *
region = msk
endpoint = https://s3.*.ru

A log from the command with the -vv flag

2022/07/14 19:18:40 DEBUG : rclone: Version "v1.59.0" starting with parameters ["rclone" "-vv" "tree" "s3:tvisual_accounts"]
2022/07/14 19:18:40 DEBUG : Creating backend with remote "s3:tvisual_accounts"
2022/07/14 19:18:40 DEBUG : Using config file from "/Users/*/.config/rclone/rclone.conf"

What does that mean?
Are you using AWS or what?

Yes. It is AWS S3 tuned for the company's needs, installed on a company server, not in the AWS cloud.

I guess I still do not understand what AWS software is running on the local server.
Did AWS create the software, is it using MinIO, or what?

And what does "infinite loop" mean?

I only know that it is S3, and as the SRE team say, it implements 90% of the AWS S3 interfaces.
By "infinite loop" I mean that the tree command runs forever; it never finishes. But only for this bucket. Another bucket, with only one directory inside, copies without any problem.

It's easier to have numbers rather than words that could mean different things to different people.

What does large memory consumption mean? 1GB / 4GB / 8GB / 64GB?

How many objects are in the bucket you are trying to list? How long are you waiting? Did you try --fast-list?
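To get actual numbers, you could run something like this against the bucket from your first post, once without and once with --fast-list, and compare:

rclone -vv size s3:tvisual_accounts
rclone -vv size s3:tvisual_accounts --fast-list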

If the cloud provider is not official AWS S3, then you need to change
provider = AWS
to
provider = Other
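For reference, with that change the config from the first post would look like this (keys and endpoint still redacted):

[s3]
type = s3
provider = Other
access_key_id = *
secret_access_key = *
region = msk
endpoint = https://s3.*.ru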

Thank you very much :) It really helped with my problem. After changing the provider to "Other" it started to work as expected. I had been thinking all along that if it implements the S3 API I should use the AWS provider...
