Hubic - Unsolicited response received on idle HTTP: 408 Request Time-out

Hi,

For the past 2-3 weeks, when using the command below with Hubic:

rclone -v --checksum --log-file /var/tmp/hubic-backup.log sync /backup/ hubic:default/backup/

I receive the error below on every command rclone runs against Hubic:

Unsolicited response received on idle HTTP channel starting with "HTTP/1.0 408 Request Time-out\r\nCache-Control: no-cache\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n408 Request Time-out\nYour browser didn’t send a complete request in time.\n\n

I already refreshed the token - it didn’t help.

What shall I do to make it work again?

Thx for your help.

I can see this too.

I don't think it affects anything though, does it?

Can you please make a new issue on GitHub about it so I don't forget?

I don’t know if the problems are related or not, but once I started seeing the above message I also started finding it incredibly difficult to actually upload due to:

Failed to copy: HTTP Error: 504: 504 Gateway Time-out

However, I’ve just been assuming that the problem is on hubiC’s end. I’ve been using rclone v1.36-241-g92294a4aβ for some time now and it used to work, so I don’t think it’s an rclone issue.

https://forums.hubic.com/showthread.php?159281-Cannot-move-file-Server-reply-504-Gateway-Time-out

Post is almost exactly a year old.

My personal experience with Hubic is from running the rclone integration tests, and I have to say they have been really unreliable for 6 months or so.

Then I’m just going to be a jerk and run my backup script with watch… it can run over and over and over and over and over until it finally works. :)

Actually, I had the same problem prior to the "unsolicited response received on idle HTTP" message. I had to run my backup script a couple of times until it finally connected to Hubic, but then it ran fine.
For the past 2-3 weeks, when it finally connects and starts running, I get this message.
I have opened an issue:

I’m just going to be a jerk and run my backup script with watch…it can run over and over and over and over and over until it finally works.

It's run 12 times since my last post (the rest of the day and all night over here in California) and managed to upload ONE file. I also tried the official hubiC Linux client, and after hours and hours and hours all the status report says is "Authentication in progress." Something is seriously down on hubiC's end.

It's scary to trust backups to an unreliable provider.

I agree. Fortunately, they’re not my only one. I also use CrashPlan, which is pretty reliable but slow. It also ONLY works with their custom client (which fortunately has a Linux variant) and has an all-or-nothing approach to keeping old versions. I thought using hubiC as a secondary backup with a more traditional mirroring approach seemed like a good idea.

Apparently hubiC is just overwhelmed. Maybe they should have stopped new subscriptions until they figured it out.

I’m currently transferring a lot of files and all is working fine here (UK) for me - maybe I’m sucking all that bandwidth and leaving you with none :) (no, that’s really not likely to be the case).

I had some issues with the authentication tokens failing due to running the process on multiple machines, and the prior authentication not renewing properly (I guess), but just re-authorising solved that.

If you have very large files and many transfer threads configured, the transfers will be worse because all the big files will be contending for the 10 Mbit/s rate limiting, and it would be better to have fewer transfers in that case so that a few large files complete at a time. On the other hand, with a lot of small files, there is an advantage to having a larger number of transfers (and checkers) so that there is always something doing work.
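As a rough sketch of that tuning (the paths here are made-up placeholders, and the right numbers depend on your link and your file mix):

rclone sync --transfers=2 /backup/archives/ hubic:default/backup/archives/

rclone sync --transfers=8 --checkers=16 /backup/photos/ hubic:default/backup/photos/

The first form suits a few large archives contending for the rate limit; the second suits many small files, where the extra parallelism keeps something transferring at all times.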

In any case, the ‘Request Time-out’ message is a benign part of the protocol and should not (if I’ve understood what’s going on correctly - see https://github.com/ncw/rclone/issues/1583#issuecomment-320507344 for my understanding) affect the transfers.

(That is, unless the latency to the server exceeds the timeout period - but given that it appears to be about 30 seconds, I think you’d have to be somewhere remote or with a really terrible link. Like out beyond the moon.)

Could be that connections to California are throttled… or something… but certainly from here, behaviour is just fine at present for me - despite these messages appearing, files still transfer and all is happy bunnies.

Regarding them being overwhelmed… I’m transferring at about 1 MByte/s here (roughly the 10 Mbit/s throttling that they state) across multiple transfers.

For example:

2017/08/06 14:52:41 INFO  :
Transferred:   4.129 GBytes (1000.190 kBytes/s)
Errors:                 1
Checks:                 0
Transferred:           27
Elapsed time:   1h12m9.2s
Transferring:
 *                   laptop/190806-justin.tar.gz:  8% done, 109.309 kBytes/s, ETA: 13h9m58s
 *                  laptop/Old40G/HWWork.tar.bz2: 49% done, 80.788 kBytes/s, ETA: 1h35m58s
 *                  laptop/Old40G/covers.tar.bz2: 87% done, 132.305 kBytes/s, ETA: 7m27s
 *                 laptop/Old40G/HW_Comp.tar.bz2: 85% done, 99.900 kBytes/s, ETA: 12m50s
 *               laptop/DriverDiscs/Misc.tar.bz2: 37% done, 91.628 kBytes/s, ETA: 1h9m26s
 *             laptop/DriverDiscs/Camera.tar.bz2: 19% done, 169.816 kBytes/s, ETA: 2h59m52s
 *             laptop/Old40G/StuffFromPC.tar.bz2: 70% done, 80.608 kBytes/s, ETA: 34m50s
 *          laptop/Required/Installation.tar.bz2: 62% done, 62.619 kBytes/s, ETA: 1h2m29s
 *          laptop/UnsortedRef/Computing.tar.bz2: 17% done, 108.742 kBytes/s, ETA: 4h4m34s
 *        laptop/UnsortedRef/Electronics.tar.bz2: 95% done, 126.696 kBytes/s, ETA: 11s

So it’s definitely working - maybe for those large compressed archives I should have used fewer transfers, but it’s working, and the 1 error was almost certainly a file whose permissions I’d accidentally set such that the backup process couldn’t access it.

I am getting "Unsolicited response received on idle HTTP channel starting with “HTTP/1.0 408 Request Time-out\r\nCache-Control: no-cache\r\nConnection: close”" consistently, even when doing "sync --size-only" local->hubic on two folders that are already identical (nothing to sync). It doesn’t happen on all folder pairs, but it does happen on 1-2 (I only investigated one, as each try takes 7-8 minutes).

--timeout=5m and --contimeout=5m have no influence. Also, the messages appear within a few seconds (10-20) of running the command, so it can’t be any timeout of 1m or 30s.

What DOES seem to make a difference is --checkers=1; that seems to get rid of the 408 messages.
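For reference, a minimal sketch of that invocation (the paths are placeholders, not the actual folders from this report):

rclone sync --size-only --checkers=1 /local/folder hubic:default/folder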

Interesting! Thanks. Do you know if --checkers=1 has any impact on performance?

I’m not at all sure what --checkers does, especially on a sync where nothing gets synced (also with --size-only). But for transfers, with Hubic as slow as it is anyway, I can’t imagine hitting any bottleneck even with one checker, whatever it’s doing.

I think for “rclone check” the --checkers setting would make a big difference - possibly slowing things down if there are too many! At least in the default config, when comparing checksums from Hubic with checksums from the local drive, it needs to hit the local drive (that would be the bottleneck), so more checkers can hurt. Unless the bottleneck is per-core performance, in which case more checkers will spread across multiple cores and can help.

Scratch that - it seems the checkers are used to walk the (remote, or both?) filesystem when doing a sync. If you have loads and loads of files, it will take a long time to find out what to transfer.

Also, it seems that not using --size-only makes things really, really slow on Hubic (I presume because it also needs to fetch the timestamps for all the files).
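To illustrate the difference (placeholder paths, and the reason for the slowdown is my presumption about per-file metadata requests, not something I’ve measured precisely):

rclone sync /local/folder hubic:default/folder

rclone sync --size-only /local/folder hubic:default/folder

The first compares sizes and modification times, which seems to cost extra metadata requests on Hubic; the second compares sizes only.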

Hi
I also ran into the “408 Request Time-out” issue and I found it to be much better (though not totally solved) with the --fast-list option… Instead of tens of messages, I only got 5 of them…
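For example, adding it to the sync from the top of the thread would look like this (same command, just with the extra flag):

rclone -v --checksum --fast-list --log-file /var/tmp/hubic-backup.log sync /backup/ hubic:default/backup/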
My 2 cents…

Top tip - thank you.

--fast-list does far fewer HTTP operations, which explains it.