FTP Upload Timed Out

What is the problem you are having with rclone?

When uploading large files (more than 4–5 GB) over FTP, the transfer restarts from the beginning once it reaches 100%. With FileZilla I also get a timeout, but the transfer is simply aborted and the file stays on the server; it doesn't restart or retry like rclone does.

What is your rclone version (output from rclone version)

The version I'm using is v1.55.1.

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04 x64 (server), fully updated. The latest version of Raspberry Pi OS was used for testing with FileZilla.

Which cloud storage system are you using? (eg Google Drive)

I'm using FTP; the server is self-hosted in my server room in L'Orignal, ON.
As for the clients, one is at OVH Canada in Beauharnois, QC, and the other is at my other house in Hawkesbury, ON.

The target FTP server is ProFTPd on FreeNAS (all updated).

The command you were trying to run (eg rclone copy /tmp remote:tmp)

nohup rclone move --transfers=1 /mnt/disk/ZAWACK/2021-07-17/ video4-1:/2021/2021-07-17/ &

The rclone config contents with secrets removed.

[video4-1]
type = ftp
host = datacenter.zawack.net
user = video4
port = 101
pass = 

A log from the command with the -vv flag

The log is available here: http://pastebin.zsites.ca/view/a00782ee

I know there is a timeout, as it also happens with FileZilla. But unlike FileZilla, rclone retries the transfer and the file gets re-uploaded, and if I let it run it can loop until I stop it (I don't know if there is a limit). Is it possible to avoid that?

Kind regards,

Guillaume

hi,

the log shows `low level retry 2/10`
might try tweaking https://rclone.org/docs/#low-level-retries-number
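
for example, added to the command you posted (just a sketch; your other flags stay the same):

```shell
# limit rclone to a single low-level retry instead of the default 10
rclone move --transfers=1 --low-level-retries 1 \
  /mnt/disk/ZAWACK/2021-07-17/ video4-1:/2021/2021-07-17/
```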

rclone has much better support for sftp, including modtimes, checksums and enhanced security.

Hello,

I added --low-level-retries 1 but it's still not working. The log now shows this:

http://pastebin.zsites.ca/view/095c61ea

Regards,

Guillaume

--low-level-retries 1 was to stop rclone from looping.

as for the i/o timeout i do not know, as i have not used ftp in many years.
of all the rclone backends, ftp seems to have the most problems.
rclone has a couple of timeout flags that you can tweak, have you tried that?
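
for example (a sketch only, not tested against your server; the values are just starting points, see the docs for --timeout and --contimeout):

```shell
# --contimeout: connection timeout, --timeout: i/o idle timeout
rclone move --transfers=1 \
  --contimeout 60s \
  --timeout 10m \
  /mnt/disk/ZAWACK/2021-07-17/ video4-1:/2021/2021-07-17/
```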

given that the ftp server supports sftp, why not use that or use rclone serve sftp
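
a sketch of what an sftp remote could look like in rclone.conf (the remote name and port 22 are assumptions; set the password with rclone config so it gets obscured):

```ini
[video4-1-sftp]
type = sftp
host = datacenter.zawack.net
user = video4
port = 22
pass = 
```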

and when posting a log, make sure it is a full log, not just a snippet.

hi guillaume
try this dev release

let me know if it solves your issue
if yes, i'll merge the fix mainstream by 1.57
at any rate i recommend sftp too, our ftp support is incomplete
thx, ivan

Hi,

I tried, but at first sight it doesn't seem to work. How can I make sure I'm using the rclone binary from the .zip file? By running ./rclone, right?

Regards,

Guillaume

edit: I downloaded the wrong version. I'm restarting the test now; it's very late here :wink:

Hello,

It works now:

10 GB file on the client side:

10 GB file now on the server side:

Thank you, I appreciate it.

Regards,

Guillaume

good. i'll then submit a patch for rclone 1.57


wait.. in the screenshot i see an http download. i thought we were fixing ftp upload. am i missing something?

No, it's because I had to download a large file to run the test. When I took the screenshot I made the window wider to show the file size in bytes, as proof that the file was completely uploaded to the FTP server. I downloaded the 10 GB test file via HTTP.

Your patch really fixed the issue, and I want to say thank you for that.

Regards,

Guillaume


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.