Slow Copy for large files

The copy command keeps running for a long time even after it says the download has completed: as you can see, it completed in 34 minutes but kept running until 54 minutes. I had forced the v2 download as I read it is recommended for large files, but I found similar results without it as well.

What is the problem you are having with rclone?

Slow download speed from GDrive to Local

What is your rclone version (output from rclone version)

root@odroid:/media/usbhd# rclone version
rclone v1.52.0

  • os/arch: linux/arm64
  • go version: go1.14.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

root@odroid:/media/usbhd# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.4 LTS
Release: 18.04
Codename: bionic

root@odroid:/media/usbhd# uname -a
Linux odroid 3.16.81-49 #1 SMP PREEMPT Wed Jan 15 21:38:53 -02 2020 aarch64 aarch64 aarch64 GNU/Linux

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy "drive:Bat.mkv" /media/usbhd/ -vv

The rclone config contents with secrets removed.

[drive]
type = drive
client_id = XXX
client_secret = XX
scope = drive
token =
root_folder_id = 0AOlzu3J8ADKnUk9PVA

A log from the command with the -vv flag

```
DEBUG : Bat.mkv: multi-thread copy: stream 3/4 (19197067264-28795600896) size 8.939G starting
DEBUG : Bat.mkv: multi-thread copy: stream 1/4 (0-9598533632) size 8.939G starting
DEBUG : Bat.mkv: multi-thread copy: stream 2/4 (9598533632-19197067264) size 8.939G starting
DEBUG : Bat.mkv: multi-thread copy: stream 4/4 (28795600896-38393964981) size 8.939G starting
DEBUG : Bat.mkv: Using v2 download:

2020/06/08 01:01:00 DEBUG : Bat.mkv: Finished multi-thread copy with 4 parts of size 8.939G
2020/06/08 01:01:16 INFO  :
Transferred:       35.757G / 35.757 GBytes, 100%, 17.440 MBytes/s, ETA 0s
Transferred:            0 / 1, 0%
Elapsed time:     34m59.4s
Transferring:
 *                                       Bat.mkv:100% /35.757G, 4.804M/s, 0s


2020/06/08 01:21:16 INFO  :
Transferred:       35.757G / 35.757 GBytes, 100%, 11.097 MBytes/s, ETA 0s
Transferred:            0 / 1, 0%
Elapsed time:     54m59.4s
Transferring:
 *                                       Bat.mkv:100% /35.757G, 0/s, 0s

2020/06/08 01:21:50 DEBUG : Bat.mkv: MD5 = 47177fd4407ebfc9e3d3914e33bc1935 OK
2020/06/08 01:21:50 INFO  : Bat.mkv: Multi-thread Copied (replaced existing)
2020/06/08 01:21:50 INFO  :
Transferred:       35.757G / 35.757 GBytes, 100%, 10.983 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:     55m33.6s

2020/06/08 01:21:50 DEBUG : 2 go routines active
```

On your system, since you are copying it to local disk, rclone has to calculate the checksum of the downloaded 35GB file, and that is the extra time.
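
As a rough illustration (a sketch, assuming the finished file is at /media/usbhd/Bat.mkv as in the command above), you can reproduce that checksum pass by hand and time it:

```
# Read and MD5-hash the whole downloaded file, timing the pass.
# This is roughly the read rclone has to do after the download
# before it can compare against the MD5 stored by Google Drive.
time md5sum /media/usbhd/Bat.mkv
```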

Anyway, can I download it in a single stream, or is there any way to speed it up?

Or is there any way I can see the progress of the checksum?

Watch your disk IO and you can see it going. 🙂
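
For example (assuming the sysstat package, which provides iostat, is installed):

```
# Print extended per-device IO stats every second; the read column
# (rkB/s) for the disk under /media/usbhd shows the checksum read pass.
iostat -x 1
```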

You can speed it up with an SSD or a faster disk. It has to read a 35GB file.

No, I mean is there any progress bar which can show this?

Also, do I need to change the command to speed up the download? I'm using the very basic copy command.

There's nothing that shows the progress of the md5sum going on.
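
As a workaround outside rclone (a sketch, assuming pv is installed), you can checksum the finished file by hand with a progress bar:

```
# pv streams the file to stdout with a progress bar and ETA;
# md5sum hashes the stream. Compare against the MD5 in the debug log.
pv /media/usbhd/Bat.mkv | md5sum
```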

You're already doing multi-threaded downloads.
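
They are tunable, though. To answer the single-stream question above, a sketch using rclone's --multi-thread-streams flag (files above --multi-thread-cutoff, 250M by default, are otherwise downloaded in parallel chunks; my understanding is that setting the streams to 0 disables the splitting):

```
# Download as one plain stream instead of 4 parallel chunks.
rclone copy "drive:Bat.mkv" /media/usbhd/ --multi-thread-streams 0 -vv
```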

What's your actual internet speed, and what are you expecting to get?

I understand it's disk IO, but something to show the merge progress would have been great, or maybe a way to download the entire file as a single part.

My internet speed is 150Mbps, so most of the time the 30GB gets downloaded in 30 minutes and then takes 35 minutes to join, probably because of the weak CPU.

I get your point; I think some of it is a trade-off between displaying too much and not enough.

I think it's probably disk IO rather than CPU: it's got to read a 35GB file, and the md5sum calculation is pretty light CPU-wise.
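
One way to separate the two: measure the disk's raw sequential read speed on the same file with plain coreutils. The ~21-minute gap in the log for a 35.757G file works out to roughly 30 MBytes/s of sustained reads, so if dd reports about the same, the disk is the bottleneck:

```
# Read the whole file and discard it, printing live throughput.
# If the file was read recently, drop caches first for an honest number:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/media/usbhd/Bat.mkv of=/dev/null bs=1M status=progress
```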

"Most" use cases are doing lots of files so displaying a call for the checksum wouldn't matter much as it's fast. With larger files/slow disks, it does may make sense, but it does get complicated to give a percentage on that as you are talking size since the file is there and you'd have to compute rate stats and such. It's not trivial, but not insanely complex either.

There have been quite a few requests for the progress of the checksum recently. We could do this for the local backend I think... I'll have a think...

Would be great to have this


Are you considering taking this up?

I'm thinking about how it could be implemented, yes!
