Speed slow compared to IDM

What is the problem you are having with rclone?

When trying to download a URL using copyurl, the speed is only about 15 MB/s.

The same URL via Internet Download Manager averages 100 MB/s.

What is your rclone version (output from rclone version)

rclone-v1.54.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows Server 2019 64 bit

Which cloud storage system are you using? (eg Google Drive)

Local Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copyurl domain.com/filename filename 

The rclone config contents with secrets removed.

Empty

A log from the command with the -vv flag

NONE

A log of the command with debug would really help, as it shows the details. I can guess the issue, but it's so much easier with a log.

Not exactly sure what you are asking. I do not have a config file set up, as I am simply copying a URL to the local drive. -vv does not give me anything.

[screenshot: rclone command and speeds]
[screenshot: IDM speeds]

Just trying to figure out why, or if there is a setting I can use to increase speeds.

I have 100 1 TB files I need to download, and IDM is NOT a good solution as it creates part files, then rebuilds them.

I thought rclone would be perfect for this.

I am downloading from a fast cloud server to my VPS, which has a 10GBps connection.

Run that command with -vv and share the whole log.

If you look at the rclone image link I sent, I did run that command, like this:

rclone copyurl https://domain.com/file.vhd file.vhd -vv

I get nothing.

This image is a screenshot of the rclone response:
[screenshot: rclone output]

That's strange, as I get output on a test file and you get nothing at all?

felix@gemini:~$ rclone copyurl http://ipv4.download.thinkbroadband.com/1GB.zip test.zip -vv
2021/03/13 09:02:17 DEBUG : rclone: Version "v1.54.1" starting with parameters ["rclone" "copyurl" "http://ipv4.download.thinkbroadband.com/1GB.zip" "test.zip" "-vv"]
2021/03/13 09:02:17 DEBUG : Creating backend with remote "."
2021/03/13 09:02:17 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2021/03/13 09:02:17 DEBUG : fs cache: renaming cache item "." to be canonical "/home/felix"
2021/03/13 09:02:51 INFO  :
Transferred:            1G / 1 GBytes, 100%, 30.170 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        34.4s

2021/03/13 09:02:51 DEBUG : 4 go routines active
felix@gemini:~$

C:\rclone-v1.54.1>rclone copyurl https://domain.com/file.vhd file.vhd -vv
2021/03/13 13:38:11 DEBUG : rclone: Version "v1.54.1" starting with parameters ["rclone" "copyurl" "https://domain.com/file.vhd" "file.vhd" "-vv"]
2021/03/13 13:38:11 DEBUG : Creating backend with remote "."
2021/03/13 13:38:11 NOTICE: Config file "C:\Users\Administrator\.config\rclone\rclone.conf" not found - using defaults
2021/03/13 13:38:11 DEBUG : fs cache: renaming cache item "." to be canonical "//?/C:/rclone-v1.54.1"
2021/03/13 13:39:11 INFO :
Transferred: 763.125M / 1.000 TBytes, 0%, 12.829 MBytes/s, ETA 22h41m16s
Transferred: 0 / 1, 0%
Elapsed time: 1m0.7s
Transferring:

 *                                   file.vhd:  0% /1.000T, 12.691M/s, 22h56m4s

2021/03/13 13:40:11 INFO :
Transferred: 1.582G / 1.000 TBytes, 0%, 13.559 MBytes/s, ETA 21h26m54s
Transferred: 0 / 1, 0%
Elapsed time: 2m0.7s
Transferring:

 *                                   file.vhd:  0% /1.000T, 13.155M/s, 22h6m29s

2021/03/13 13:41:11 INFO :
Transferred: 2.288G / 1.000 TBytes, 0%, 13.057 MBytes/s, ETA 22h15m29s
Transferred: 0 / 1, 0%
Elapsed time: 3m0.7s
Transferring:

 *                                   file.vhd:  0% /1.000T, 8.352M/s, 34h47m52s

2021/03/13 13:42:11 INFO :
Transferred: 3.379G / 1.000 TBytes, 0%, 14.448 MBytes/s, ETA 20h5m35s
Transferred: 0 / 1, 0%
Elapsed time: 4m0.7s
Transferring:

 *                                   file.vhd:  0% /1.000T, 17.960M/s, 16h9m51s

Download managers tend to use multiple streams, which copyurl does not do. You can do something like this and increase the number of streams if you want.

Here is how it works out the number of streams:

https://rclone.org/docs/#multi-thread-cutoff-size
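
Roughly, per those docs, rclone divides the file size by --multi-thread-cutoff and rounds up to get the number of streams, capped at --multi-thread-streams. So the 1 GB test file below with a 128M cutoff works out to 1024M / 128M = 8 streams, which is what you see in the log.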

felix@gemini:~$ rclone copy --http-url http://ipv4.download.thinkbroadband.com :http:1GB.zip /home/felix -vv --multi-thread-streams 8 --multi-thread-cutoff 128M
2021/03/13 09:06:21 DEBUG : rclone: Version "v1.54.1" starting with parameters ["rclone" "copy" "--http-url" "http://ipv4.download.thinkbroadband.com" ":http:1GB.zip" "/home/felix" "-vv" "--multi-thread-streams" "8" "--multi-thread-cutoff" "128M"]
2021/03/13 09:06:21 DEBUG : Creating backend with remote ":http:1GB.zip"
2021/03/13 09:06:21 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2021/03/13 09:06:21 DEBUG : Creating backend with remote "/home/felix"
2021/03/13 09:06:22 DEBUG : 1GB.zip: Need to transfer - File not found at Destination
2021/03/13 09:06:22 DEBUG : 1GB.zip: Starting multi-thread copy with 8 parts of size 128M
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 8/8 (939524096-1073741824) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 4/8 (402653184-536870912) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 6/8 (671088640-805306368) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 7/8 (805306368-939524096) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 3/8 (268435456-402653184) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 5/8 (536870912-671088640) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 1/8 (0-134217728) size 128M starting
2021/03/13 09:06:22 DEBUG : 1GB.zip: multi-thread copy: stream 2/8 (134217728-268435456) size 128M starting
2021/03/13 09:06:34 DEBUG : 1GB.zip: multi-thread copy: stream 1/8 (0-134217728) size 128M finished
2021/03/13 09:06:37 DEBUG : 1GB.zip: multi-thread copy: stream 5/8 (536870912-671088640) size 128M finished
2021/03/13 09:06:38 DEBUG : 1GB.zip: multi-thread copy: stream 4/8 (402653184-536870912) size 128M finished
2021/03/13 09:06:40 DEBUG : 1GB.zip: multi-thread copy: stream 8/8 (939524096-1073741824) size 128M finished
2021/03/13 09:06:43 DEBUG : 1GB.zip: multi-thread copy: stream 2/8 (134217728-268435456) size 128M finished
2021/03/13 09:06:45 DEBUG : 1GB.zip: multi-thread copy: stream 7/8 (805306368-939524096) size 128M finished
2021/03/13 09:06:50 DEBUG : 1GB.zip: multi-thread copy: stream 3/8 (268435456-402653184) size 128M finished
2021/03/13 09:06:53 DEBUG : 1GB.zip: multi-thread copy: stream 6/8 (671088640-805306368) size 128M finished
2021/03/13 09:06:53 DEBUG : 1GB.zip: Finished multi-thread copy with 8 parts of size 128M
2021/03/13 09:06:53 INFO  : 1GB.zip: Multi-thread Copied (new)
2021/03/13 09:06:53 INFO  :
Transferred:            1G / 1 GBytes, 100%, 32.103 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        32.2s

2021/03/13 09:06:53 DEBUG : 18 go routines active

as an example.
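
For your file it would be something like this (untested - substitute the real URL, and C:\downloads is just an example destination folder):

rclone copy --http-url https://domain.com :http:file.vhd C:\downloads -vv --multi-thread-streams 10 --multi-thread-cutoff 128M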

That looks perfect, not sure how I missed it.

For this particular file I have already downloaded 512 GB; is there a way to resume where another app left off?

i.e. I am downloading file.vhd but already have 512 GB of it in file.vhd.

Not that I know of. You might just wait it out.

@ncw - do you know why copyurl doesn't show -vv output or use multi-thread copy?

@Animosity022

That was a perfect solution! Can I recommend this be the default behaviour for copyurl when the file being downloaded is above a certain size?

It just makes for less typing. This is perfect; I am now getting the speeds I was hoping for.

I have NOT tried this with gdrive mounted, but will this also work if I choose to save to gdrive?
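
I guess it would look something like this (untested on my end; gdrive: and the backups folder are just example names for a remote I would have to configure first):

rclone copy --http-url https://domain.com :http:file.vhd gdrive:backups -vv --multi-thread-streams 10 --multi-thread-cutoff 128M

though from the docs it sounds like multi-thread downloads only apply when the destination is the local backend, so saving straight to gdrive may fall back to a single stream.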

Anyway, thank you very much.

Here is the response from rclone using 10 threads, just in case anyone wants to see it in action.

2021/03/13 14:21:42 DEBUG : Creating backend with remote "file.vhd"
2021/03/13 14:21:42 DEBUG : fs cache: renaming cache item "file.vhd" to be canonical "//?/C:/rclone-v1.54.1/file.vhd"
2021/03/13 14:21:42 DEBUG : file.vhd: Need to transfer - File not found at Destination
2021/03/13 14:21:42 INFO : Writing sparse files: use --local-no-sparse or --multi-thread-streams 0 to disable
2021/03/13 14:21:42 DEBUG : file.vhd: Starting multi-thread copy with 10 parts of size 102.400G
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 4/10 (329853566976-439804755968) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 10/10 (989560700928-1099511628288) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 1/10 (0-109951188992) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 7/10 (659707133952-769658322944) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 5/10 (439804755968-549755944960) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 2/10 (109951188992-219902377984) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 9/10 (879609511936-989560700928) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 6/10 (549755944960-659707133952) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 8/10 (769658322944-879609511936) size 102.400G starting
2021/03/13 14:21:42 DEBUG : file.vhd: multi-thread copy: stream 3/10 (219902377984-329853566976) size 102.400G starting
2021/03/13 14:22:42 INFO :
Transferred: 6.054G / 1.000 TBytes, 1%, 103.440 MBytes/s, ETA 2h47m57s
Transferred: 0 / 1, 0%
Elapsed time: 1m0.5s
Transferring:

 *                                   file.vhd:  0% /1.000T, 112.980M/s, 2h33m46s

Yeah, I'm not sure about copyurl; that's why I pinged ncw to get his input, as there might be a reason for the way it's coded.

copyurl doesn't use the same internals as rclone sync/copy/move, so it misses out on multi-thread copy and -vv output.

If copyurl instead constructed an http backend object and passed it to the normal copying routines, it would work better.

Note that your workaround will only work if the directory has sensible listings, whereas if we were constructing the http object directly we wouldn't need that.

It is a good idea for an improvement. Would you please make a new issue on GitHub about it?
