Union downloading slower than individually

If it's reproducible, pick one file, reproduce the issue, share the log...

2021/11/21 22:30:34 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "copy" "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg" "prueba:" "--progress" "--log-file=mylogfile.txt" "-vv"]
2021/11/21 22:30:34 DEBUG : Creating backend with remote "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg"
2021/11/21 22:30:34 DEBUG : Using config file from "D:\\googledrive\\rclone.conf"
2021/11/21 22:30:35 DEBUG : Google drive root 'encriptado/67l52k4ilfbjc30kra4sokd7eg': 'root_folder_id = 0AIxxqvVOU8FUUk9PVA' - save this in the config to speed up startup
2021/11/21 22:30:36 DEBUG : fs cache: renaming cache item "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg" to be canonical "respaldo:encriptado/67l52k4ilfbjc30kra4sokd7eg"
2021/11/21 22:30:36 DEBUG : Creating backend with remote "prueba:"
2021/11/21 22:30:36 DEBUG : Creating backend with remote "D:/prueba1"
2021/11/21 22:30:36 DEBUG : fs cache: renaming cache item "D:/prueba1" to be canonical "//?/D:/prueba1"
2021/11/21 22:30:36 DEBUG : fs cache: switching user supplied name "D:/prueba1" for canonical name "//?/D:/prueba1"
2021/11/21 22:30:36 DEBUG : Creating backend with remote "D:/prueba"
2021/11/21 22:30:36 DEBUG : fs cache: renaming cache item "D:/prueba" to be canonical "//?/D:/prueba"
2021/11/21 22:30:36 DEBUG : fs cache: switching user supplied name "D:/prueba" for canonical name "//?/D:/prueba"
2021/11/21 22:30:36 DEBUG : union root '': actionPolicy = *policy.EpAll, createPolicy = *policy.EpMfs, searchPolicy = *policy.FF
2021/11/21 22:30:36 DEBUG : union root '': Waiting for checks to finish
2021/11/21 22:30:36 DEBUG : union root '': Waiting for transfers to finish
2021/11/21 22:45:23 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: Reopening on read failure after 9649594489 bytes: retry 1/10: read tcp 192.168.0.120:2014->172.217.173.234:443: wsarecv: An existing connection was forcibly closed by the remote host.
2021/11/21 22:45:23 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: Reopening on read failure after 11376408251 bytes: retry 1/10: read tcp 192.168.0.120:2016->172.217.173.234:443: wsarecv: An existing connection was forcibly closed by the remote host.
2021/11/21 22:45:23 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: Reopening on read failure after 11788988615 bytes: retry 1/10: read tcp 192.168.0.120:2017->172.217.173.234:443: wsarecv: An existing connection was forcibly closed by the remote host.
2021/11/21 22:45:30 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: Reopening on read failure after 6322811454 bytes: retry 1/10: read tcp 192.168.0.120:2015->172.217.173.234:443: wsarecv: An existing connection was forcibly closed by the remote host.
2021/11/21 22:59:36 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: md5 = b2f7f4813b92aed79c47a490a56ea60c OK
2021/11/21 22:59:36 INFO  : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: Copied (new)
2021/11/21 22:59:36 DEBUG : respaldo: Loaded invalid token from config file - ignoring
2021/11/21 22:59:36 DEBUG : Saving config "token" in section "respaldo" of the config file
2021/11/21 22:59:36 DEBUG : Keeping previous permissions for config file: -rw-rw-rw-
2021/11/21 22:59:36 DEBUG : respaldo: Saved new token in config file
2021/11/21 23:03:19 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "copy" "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg" "prueba" "--progress" "--log-file=mylogfile.txt" "-vv"]
2021/11/21 23:03:19 DEBUG : Creating backend with remote "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg"
2021/11/21 23:03:19 DEBUG : Using config file from "D:\\googledrive\\rclone.conf"
2021/11/21 23:03:20 DEBUG : Google drive root 'encriptado/67l52k4ilfbjc30kra4sokd7eg': 'root_folder_id = 0AIxxqvVOU8FUUk9PVA' - save this in the config to speed up startup
2021/11/21 23:03:20 DEBUG : fs cache: renaming cache item "respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg" to be canonical "respaldo:encriptado/67l52k4ilfbjc30kra4sokd7eg"
2021/11/21 23:03:20 DEBUG : Creating backend with remote "prueba"
2021/11/21 23:03:20 DEBUG : fs cache: renaming cache item "prueba" to be canonical "//?/D:/googledrive/prueba"
2021/11/21 23:03:21 DEBUG : Local file system at //?/D:/googledrive/prueba: Waiting for checks to finish
2021/11/21 23:03:21 DEBUG : Local file system at //?/D:/googledrive/prueba: Waiting for transfers to finish
2021/11/21 23:03:21 INFO  : Writing sparse files: use --local-no-sparse or --multi-thread-streams 0 to disable
2021/11/21 23:03:21 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: Starting multi-thread copy with 4 parts of size 5.783Gi
2021/11/21 23:03:21 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: Starting multi-thread copy with 4 parts of size 8.018Gi
2021/11/21 23:03:21 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: Starting multi-thread copy with 4 parts of size 9.226Gi
2021/11/21 23:03:21 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: multi-thread copy: stream 4/4 (18628214784-24837534376) size 5.783Gi starting
2021/11/21 23:03:21 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: multi-thread copy: stream 1/4 (0-6209404928) size 5.783Gi starting
2021/11/21 23:03:21 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: Starting multi-thread copy with 4 parts of size 8.749Gi
2021/11/21 23:03:21 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: multi-thread copy: stream 2/4 (6209404928-12418809856) size 5.783Gi starting
2021/11/21 23:03:21 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: multi-thread copy: stream 4/4 (28182773760-37576999462) size 8.749Gi starting
2021/11/21 23:03:21 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: multi-thread copy: stream 1/4 (0-9394257920) size 8.749Gi starting
2021/11/21 23:03:21 DEBUG : 0idea036khbi7up1o1v5tnd44klrr2rt6sfjb3495lbv7g36416uigco0ng5jh0a3tig86cq4e65pkcf1q261hnv3la57ks68pocp68: multi-thread copy: stream 3/4 (12418809856-18628214784) size 5.783Gi starting
2021/11/21 23:03:21 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: multi-thread copy: stream 1/4 (0-9906421760) size 9.226Gi starting
2021/11/21 23:03:21 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: multi-thread copy: stream 2/4 (9394257920-18788515840) size 8.749Gi starting
2021/11/21 23:03:21 DEBUG : 1gvkuc712u0qk8rdm4qkh9j35nq4sc2q9t641k1bd4qspfl20c998ipipsgnvhem1tdkvdht7hsb6: multi-thread copy: stream 3/4 (18788515840-28182773760) size 8.749Gi starting
2021/11/21 23:03:21 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: multi-thread copy: stream 4/4 (25827409920-34436520161) size 8.018Gi starting
2021/11/21 23:03:21 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: multi-thread copy: stream 1/4 (0-8609136640) size 8.018Gi starting
2021/11/21 23:03:21 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: multi-thread copy: stream 2/4 (8609136640-17218273280) size 8.018Gi starting
2021/11/21 23:03:21 DEBUG : 2an3nrvg03eb7uvotloasvl2gfq1f33f15p6soot1obsoib1p2hi14rvnk855r1qi8ofsrt079ab2hjqaf4vvqn1nv306lj2dk6livo: multi-thread copy: stream 3/4 (17218273280-25827409920) size 8.018Gi starting
2021/11/21 23:03:21 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: multi-thread copy: stream 4/4 (29719265280-39625669720) size 9.226Gi starting
2021/11/21 23:03:21 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: multi-thread copy: stream 2/4 (9906421760-19812843520) size 9.226Gi starting
2021/11/21 23:03:21 DEBUG : 0ojll6jt1jlb584c2ihm54pdv0dpg378iaq7rbjpl5r0f470414a60bljpklilflrjlbbmpcuei1k: multi-thread copy: stream 3/4 (19812843520-29719265280) size 9.226Gi starting

I started downloading a different folder and I didn't see any API rate limiting, but the union still downloaded slower than a local download.

Screenshot by Lightshot: https://prnt.sc/20fpbh4

Now you've got networking issues going on, as a firewall/router is closing your connections, so that's going to be slower.

Ok, but that doesn't explain why I don't have those problems if I download to a single folder.

Maybe I'm not explaining myself well. Downloading to D:/prueba I get 82 MB/s; downloading to a union of D:/prueba and D:/prueba1 tops out at 47 MB/s.
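For reference, judging from the debug log above, the union remote is presumably set up something like this in rclone.conf (reconstructed from the log, so the exact lines are an assumption):

[prueba]
type = union
upstreams = D:/prueba D:/prueba1

The log also shows the default union policies (action epall, create epmfs, search ff), so nothing unusual is configured on the union side.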

And we've explained why based on the logs you've shared so far.

You are getting rate limited.
You are getting network errors.

To avoid API limits, you need to reduce transfers/checkers and validate you are using your own client ID.
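That would mean a command roughly like this (the transfers/checkers values are just a conservative starting point to experiment with, and the client ID/secret placeholders need to be replaced with your own):

rclone copy respaldo:/encriptado/67l52k4ilfbjc30kra4sokd7eg prueba: --transfers 2 --checkers 4 --drive-client-id YOUR_CLIENT_ID --drive-client-secret YOUR_CLIENT_SECRET --progress --log-file=mylogfile.txt -vv

The client ID/secret can also be stored as client_id/client_secret in the respaldo section of rclone.conf instead of being passed on the command line.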

Did you see the picture I uploaded? It's the same disk, the same remote and the same command.

I did see the picture, but I just report the data in the logs, so that's where we are.

Why don't I hit firewall or router problems if I download directly to one folder?

I don't imagine I'd have any luck troubleshooting your home network setup.

I just read the logs, share the data in them, and report on that data.

Perhaps your network can't handle it? Old gear? Windows? Old driver? Old router?

But if I download from that same remote onto that same disk at 82 MB/s, it can't be the router or Windows.

The question is why it only happens on a union.

Can I give you AnyDesk access so you can check?

That's a bit beyond my scope of support.

Your best bet would be to limit the API hits by reducing some parameters as that's what the data is telling you.
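If reducing transfers/checkers is not enough on its own, the request rate can also be capped directly, for example by adding something like --tpslimit 5 to the same copy command; that throttles how many HTTP transactions per second rclone makes overall.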
