RCLONE slow copy startup

What is the problem you are having with rclone?

I just opened a Wasabi account and am testing the performance. I have not uploaded a large file count to Wasabi yet.
When I start a copy of a single large file (say 10-80 GB) from my local drive, it takes about 20-50 minutes before any data actually starts transferring. Once it starts, the copy is fairly quick, roughly in line with my ISP speed. Is this startup delay normal? Smaller files don't do this. The log shows nothing during that time until the transfer starts.

What is your rclone version (output from rclone version)

windows / amd64 go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Win 10 64bit 32 GB RAM 1TB SSD I7

Which cloud storage system are you using? (eg Google Drive)

Wasabi

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy c:\temp\filename.7z remote:bucketname -P

I have also tried
```
rclone copy c:\temp\filename.7z remote:bucketname --drive-chunk-size 128M --fast-list -P
```
No difference

The rclone config contents with secrets removed.

[remote]
type = s3
provider = Wasabi
env_auth = false
access_key_id = xxx
secret_access_key = xxx
endpoint = s3.wasabisys.com
acl = bucket-owner-full-control

A log from the command with the -vv flag

2020/05/29 07:18:23 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "copy" "c:\\temp\\filename.7z" "remote:bucketname" "-P" "--log-file=mylog.txt" "--log-level" "DEBUG"]
2020/05/29 07:18:23 DEBUG : Using config file from "C:\\Users\\Administrator\\.config\\rclone\\rclone.conf"
2020/05/29 07:18:24 DEBUG : filename.7z: Need to transfer - File not found at Destination
2020/05/29 07:18:24 INFO  : S3 bucket bucketname: Bucket "bucketname" created with ACL "bucket-owner-full-control"

hello and welcome to the forum,

  • i also use wasabi.
  • what version of rclone are you running?
  • you should use a log file to better understand what is going on, using flag -vv
  • before rclone will upload a file, it needs to calculate the checksum hash for the file. the larger the file, the longer the delay before the upload starts.
  • --drive-chunk-size is a flag for gdrive, not for s3 remotes
    these flags can help with s3 wasabi,
    --s3-upload-concurrency
    --s3-chunk-size
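As a sketch of how those two S3 flags could be combined, assuming the remote and bucket names used earlier in this thread (the values below are illustrative starting points, not tuned recommendations):

```shell
# Illustrative only: raise the multipart chunk size and the number of
# chunks uploaded in parallel. Each transfer buffers roughly
# chunk-size x concurrency in memory, so very large values add up.
rclone copy c:\temp\filename.7z remote:bucketname --s3-chunk-size 64M --s3-upload-concurrency 8 -P
```

These flags affect how the multipart upload itself runs, so they help throughput once the transfer is underway; they do not shorten the pre-upload hashing delay discussed below.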

The --drive-chunk-size flag was just something I tried; it was not in my original command.
I added the log-file flags to the existing original command and placed the log and version in my original post. Is the checksum required, or is there a way to disable it? I will not be overwriting any files, just adding new ones.

  • the log is not helpful, as it does not have debug info, so add -vv to your command. you will better understand what is going on.
  • if you do not use a checksum, then you will not know if there was a problem with the upload.
  • you did not post the version of rclone, can you post it? based on the go version, i would assume you are using v1.51.0
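If the per-upload checksum ends up being skipped, integrity can still be verified after the fact. As a sketch, assuming the same remote and bucket names as above, rclone's check command compares source against destination:

```shell
# Illustrative: verify local files against the bucket after uploading.
# For an S3 remote this compares sizes and hashes where available;
# --one-way only reports files that are missing or differ on the
# destination, ignoring extra files already in the bucket.
rclone check c:\temp remote:bucketname --one-way -P
```

This is a separate pass, so it trades the long pre-upload delay for an explicit verification step you can run when convenient.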

rclone version

rclone v1.52.0

  • os/arch: windows/amd64
  • go version: go1.14.3

I will try -vv. I am using 1.51.0. This was the command with the log file that I posted in my original post. I thought the --log-level DEBUG flag would give the same info, sorry.
```
rclone copy c:\temp\filename.7z remote:bucketname -v -P --log-file=mylog.txt --log-level DEBUG
```

https://rclone.org/s3/#s3-disable-checksum
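For reference, that option can be passed as a flag; a sketch using the command from this thread:

```shell
# Illustrative: skip hashing the whole large file before upload.
# rclone no longer reads the entire file up front, so the transfer
# starts almost immediately, at the cost of weaker end-to-end
# verification of the uploaded object.
rclone copy c:\temp\filename.7z remote:bucketname --s3-disable-checksum -P
```

The same setting can also be made persistent in the `[remote]` section of rclone.conf as `disable_checksum = true`, so it applies to every transfer through that remote.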

That appears to be it. With the checksum disabled it starts much sooner. I guess I will have to weigh the chance of data corruption against the long startup time. Most of my files are many GB each. Thanks.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.