SSH remote limited to 60MB/s on 4Gb link

Apologies in advance if this confuses you. My goal was to utilize the NFS 4.1 multipath connection (4x 1Gbit) that my VMware ESXi host has to my NAS. I could not get NFS 4.1 multipath working directly in Ubuntu, so I have an rclone mount on an Ubuntu server using option 29 (SSH) to the ESXi host. Both machines have 10Gbit NICs. When I copy something to/from the NAS via the rclone mount, I can see it is using all 4 links to the NAS, but the speed is limited to 60MBytes/s (480Mbit). The connection between the Ubuntu server and the ESXi host is 10Gbit and the disks on the Ubuntu server are NVMe, so I should be seeing ~450MBytes/s. I have seen the ESXi host saturate the 4x 1Gbit links, and the CPUs on all devices do not seem to be taxed, so why am I seeing only 60MB/s instead of 400MB/s?
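To put numbers on that: 4 links x 1 Gbit/s = 4 Gbit/s ≈ 500 MBytes/s raw, so ~400-450 MBytes/s is a realistic ceiling after protocol overhead. What I'm getting is 60 MBytes/s x 8 = 480 Mbit/s, i.e. roughly half of even a single 1 Gbit link.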

Any help appreciated. Thanks

Can you share what you've tried as far as flags/config/command line? And the size and number of files you're transferring (i.e. the use case)?

I didn't use any flags for the mount, just... rclone mount NAS: /mnt/NAS

For the remote I chose password auth and set the weak-security 128-bit option to true; since it's a local connection I thought it would tax the CPU less. Everything else is default. I was testing with 10GByte MP4 video files, and all links from Ubuntu server > ESXi host > NAS use 9K jumbo frames.
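For reference, the remote config looks something like this (the host and user here are placeholders; the "weak security 128 bit" choice presumably corresponds to the sftp backend's use_insecure_cipher option, which enables weaker ciphers such as aes128-cbc):

[NAS]
type = sftp
host = esxi-host.local
user = root
pass = *** ENCRYPTED ***
use_insecure_cipher = true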

I just tested with... cp /mnt/NAS/Edit.mp4 .

So you're copying from local server to local server, no cloud remote?
If so, is there a specific reason you are using rclone mount instead of rclone copy or rsync?

I have found that, for the rclone mount, adding --vfs-cache-mode writes (https://rclone.org/commands/rclone_mount/#vfs-cache-mode-writes) can greatly speed up transfers.
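For example, applied to the mount command from earlier in the thread:

rclone mount NAS: /mnt/NAS --vfs-cache-mode writes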

with --vfs-cache-mode=writes:
[transfer speed screenshot]

without --vfs-cache-mode=writes:
[transfer speed screenshot]

I'm using mount so I can access, copy, or edit video files on the NAS. Unless I'm mistaken, rclone copy would mean manually executing that for each file when needed, and would involve copying locally each time?

Thanks for the tip on vfs cache, I'll try that.

vfs-cache-mode writes is masking the real transfer speed, though.

It makes a local copy first, which is why it looks fast; you then upload it at your slow speed afterwards. You only need vfs-cache-mode writes for certain write situations, and that will show up in the rclone logs if you need it.
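If you want to confirm whether you need it, run the mount with logging enabled and watch for write errors (standard rclone flags; the log path is just an example):

rclone mount NAS: /mnt/NAS --log-level INFO --log-file /tmp/rclone-mount.log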

Before you get to the mount, are you able to run some rclone copy tests to see if that gets better results? It's a bit easier to tune, and we can then move on to the mount depending on what we see.
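Something like this as a baseline (the file path is the one from your cp test, while NAS:videos is a hypothetical directory; -P prints live transfer stats, and --transfers only matters when copying multiple files):

rclone copy NAS:Edit.mp4 /tmp/rclone-test -P
rclone copy NAS:videos /tmp/rclone-test -P --transfers 8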

Thanks. I previously thought the vfs options were only for caching. I don't particularly want to cache things locally as the local storage (NVMe SSD) is limited, which is why I want to be able to work directly off the NAS, ideally at 4Gbps :slightly_smiling_face:. Caching 1 file at a time is fine, of course, as long as it gets deleted afterwards. I have used cache remotes previously, which kept files for longer than I needed.

I will try some rclone copy commands to see if the speed is the same as the mount speed

You can only write to the mount sequentially, though, so that is a limitation:

https://rclone.org/commands/rclone_mount/#limitations

If you're doing certain operations that require caching, you'd have to turn on writes, as it has to write locally first and upload it afterwards.

https://rclone.org/commands/rclone_mount/#vfs-cache-mode-writes
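If local cache space is the concern, the cache can also be bounded (a sketch; the size and age values are just examples):

rclone mount NAS: /mnt/NAS --vfs-cache-mode writes --vfs-cache-max-size 20G --vfs-cache-max-age 1h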

So assuming I am using vfs-cache-mode writes, that's copying the file down locally, then I'm editing it, and when I click Save it's copying it back? So I should expect to see the max possible speed because all that is being done sequentially? And rclone copy should replicate that when I do it manually? Are there any other tweaks or best settings for the remote itself, since I'm using it locally and not over the internet, so security is not a concern? Thanks

@Animosity022 Just tried rclone copy from the NAS to the local Ubuntu server and it peaked at 60MB/s but evened out at 43MB/s. The ESXi host is currently using 3 of the links; you can see the Total graph and 2 of the links in the pic below. I then ran some operations on a VM running on the ESXi host and it maxed out the 3 links, so the NAS and ESXi host do not seem to be the bottleneck.

Can you share:

rclone version

rclone.conf that you are using without any passwords/secrets

and the full command you are running.

I have been trawling through posts on Google, and it seems VMware throttles SCP / transfers over SSH, so I don't think the issue is with rclone. Thanks for the assistance in troubleshooting this.

Is that an old version of VMware or something current?

You can potentially get around that by using rclone serve and serving over http or something, perhaps.
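Roughly like this, assuming rclone can run on the host at all (the datastore path and port are placeholders):

rclone serve http /vmfs/volumes/datastore1 --addr 0.0.0.0:8080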

I'm using ESXi v6.5, which is still fully supported. The latest version is 7.0.

I had not heard of rclone serve. Does that mean installing rclone on the ESXi host and essentially running it as a service itself? I wouldn't be opposed to that if it worked :slight_smile:

If it is compatible with ESXi, what would be the best (fastest/most lightweight) protocol to set up?

I'd say 6.5 is pretty current, as all the articles I saw were from around v4. We're mainly on 6.5 as well, since v7 is very new.

When you say the ESXi host, is the remote the SSH login for that? If so, I'm not sure that would work well, as installing rclone on it would be problematic, I would think.

I am curious now myself and was going to test that on Monday.

I found this from a few years ago, which you closed in Jan, that seems to suggest a fix in Go allows rclone to be run on ESXi. I will attempt to install rclone on the ESXi host and run "rclone serve ftp".
Would ftp be the best option for what I am trying to do? If so, I will try that and then set up a new remote on the Ubuntu server, and hopefully that will allow me to max out the connection.

I think rclone serve ftp would be a good start and you can see how that works out.
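A sketch of what that could look like (paths, names, and credentials are all placeholders):

on the ESXi host:
rclone serve ftp /vmfs/volumes/datastore1 --addr 0.0.0.0:2121 --user rclone --pass somesecret

in rclone.conf on the Ubuntu server:
[ESXIFTP]
type = ftp
host = <esxi-ip>
port = 2121
user = rclone
pass = *** ENCRYPTED ***

then mount as before:
rclone mount ESXIFTP: /mnt/NAS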

No luck with the standard Linux binary:

[root@ESXi:/tmp/rclone-v1.52.1-linux-amd64] /tmp/rclone-v1.52.1-linux-amd64/rclone
Segmentation fault

What does:

uname -a

show?

The Go bug seems to state the VMware kernel is based on 2.4, which won't work, and it was closed as there is no solution, unfortunately.