IpSuperTcp channels: SuperChannel TimeOut or Actively refused

We implemented the IpHttpServerChannel, and now we get connections stuck in the FIN_WAIT_2 state and a lot of ‘Connection was closed’ problems. Any ideas? Please help, the problem is getting very urgent…

Thanks.

Are the FIN-WAIT-2 connections present on the server or on the client side? Are they present for all clients, or only for the ones that are connected via VPN?

The FIN-WAIT-2 state means that the connection was closed properly on the local side, but the remote side has not confirmed that the connection is actually closed.

Possible solutions:

Ideally, make the VPN connection more reliable.

If that is not possible, then you can:
a) Set the client channel’s KeepAlive property to false and check if that helps (see the sketch after these steps).

b) If that does not help, then adjust the FIN-WAIT-2 timeout after which such stalled sockets are released by the system itself:


Set registry key:

Key : HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Value Type : REG_DWORD

Valid Range : 30–294,967,295
Default : 120

Recommended value : 30


Disable TCP auto-tuning:

netsh int tcp set global autotuninglevel=disabled

Restart the host
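
Regarding option a), here is a rough sketch of what that looks like in code (not the exact code from your application). It assumes the client side uses an IpHttpClientChannel, the client counterpart of IpHttpServerChannel; adjust the channel type and the way it is created (designer component vs. code) to whatever the client app actually uses:

	// Sketch only: assumes RemObjects SDK for .NET and the HTTP client channel type.
	using RemObjects.SDK;

	static class ChannelSetup
	{
		public static IpHttpClientChannel CreateChannel()
		{
			var channel = new IpHttpClientChannel();
			channel.KeepAlive = false;   // option a) from the list above
			return channel;
		}
	}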

They are present on the server’s side… And it’s not only the clients with VPN connections who are experiencing the problem, but also some clients who work on the local network.

Did you try to adjust the settings?

Setting the KeepAlive property to false results in a much slower program…

Setting the registry key didn’t help.
We didn’t get permission to disable TCP auto-tuning.

We just had the SuperChannel timeout problem with another of our customers, in an entirely different program and without VPN connections… We are starting to get worried…

As do I.

Ok. Let’s evaluate this one more time:

  1. When did this start to happen? Were there any changes in hardware or software prior to this?
  2. Is it possible to try, in your environment, a server and client app built using the latest v10 version (either with SuperTCP-based connections or with HTTP + Keep-Alive set to true)? There were changes addressing a possible connection leak (however, that was on the client side).

If the situation persists, then please add this code to the application startup (it should literally be the first lines):

		// requires: using System.Runtime.ExceptionServices;
		AppDomain.CurrentDomain.FirstChanceException +=
			(object source, FirstChanceExceptionEventArgs e) =>
			{
				// Log every exception at the moment it is thrown, even if it is caught later.
				Console.WriteLine("{0}: FirstChanceException event raised: {1}",
					DateTime.UtcNow, e.Exception);
			};

Instead of Console.WriteLine, there should be a call to some logger whose output persists even after the application exits. This code would allow us to check whether some exception prevents the RO SDK from shutting down the sockets.
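
For example, here is a minimal sketch of such a logger, assuming a plain append-to-file approach (the class name and log path are just placeholders):

	using System;
	using System.IO;
	using System.Runtime.ExceptionServices;

	static class FirstChanceLogger
	{
		// Placeholder path; pick a location the application can always write to.
		private static readonly string LogPath = @"C:\Logs\firstchance.log";
		private static readonly object Sync = new object();
		[ThreadStatic] private static bool inHandler;

		public static void Install()
		{
			AppDomain.CurrentDomain.FirstChanceException +=
				(object source, FirstChanceExceptionEventArgs e) =>
				{
					if (inHandler) return;   // avoid re-entrancy if the logging call itself throws
					inHandler = true;
					try
					{
						lock (Sync)
						{
							// Append to a file so the entries survive the application exit.
							File.AppendAllText(LogPath,
								DateTime.UtcNow.ToString("o") + ": " + e.Exception + Environment.NewLine);
						}
					}
					catch { /* never let the logger itself throw */ }
					finally { inHandler = false; }
				};
		}
	}

FirstChanceLogger.Install() would then be the very first call in Main().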

  1. It started to happen in March of this year. At first it was a weekly or bi-weekly problem, but since June it happens practically daily. There were no changes in hardware or software; before March it worked for more than a year.

  2. We will install the v10 version next week and release a new build for both programs.

Should we add this logging code to our server and client application?

Thank you for thinking with us.

Maybe a significant increase in workload or in the number of clients?
(BTW, is TLS protection enabled on the clients?)

Thanks!

Yes, if possible

Oh. One more thing.
If/when the server side is again in the FIN_WAIT state, could you also run netstat on the corresponding client host to check whether there is a set of CLOSE_WAIT connections?
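
For example (assuming the clients run Windows), something like the following on the client host would be enough to filter the relevant entries; the full netstat -ano output is fine too:

netstat -ano | findstr CLOSE_WAIT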

The only thing that happened is that they are in the process of upgrading the client PCs to Windows 10.
But that’s not the case with the other customer who had the same issue today.

The number of clients increased over time from 65 to approximately 90.

TLS Question:
[screenshot]

Theoretically, it could be that there is an issue with RO SDK socket management that has been revealed by the Windows 10 TCP protocol implementation (or by a bug in it).

Anyway, when this happens next time, please try to gather both client and server netstat results.

I’ll do my best to gather the netstat results.

Hello,

An update on the issue:
The problem occurs less frequently since upgrading to RemObjects 10. BUT when it occurs, they don’t get the ‘Actively refused’ or ‘SuperChannel timeout’ errors anymore. The application just hangs. After killing it with Task Manager and restarting it, they can continue working. We are waiting for a detailed list of how many times it occurs, which OS it occurs on, … so we can let you know.

Does this happen server- or client-side?

Is it possible to attach a debugger once this happens, to see where exactly execution is paused / which exceptions are raised?

A post was split to a new topic: .NET RO server hanging if it is running over night

It happens client-side. I just received an email that it now occurs daily, and sometimes multiple times a day.

It’s not possible to attach a debugger. Could it work if I ask them to create a dump file and debug it in my Visual Studio to see what happened?

I assume a simple restart of the client app clears this out?

You mean you would ask them to create a memory dump using Task Manager, and then you would open this file in Visual Studio? Yes, this would help.

https://docs.microsoft.com/en-us/visualstudio/debugger/using-dump-files?view=vs-2019
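
As a side note: if catching the hang at the right moment with Task Manager turns out to be difficult, the Sysinternals ProcDump tool can write a full dump automatically once a process’s window stops responding (the executable and dump file names below are just placeholders):

procdump -ma -h MyClientApp.exe hang.dmp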

Yes. They have to kill the client app with the Task Manager and reopen it. They are already making my ears bleed today because they are very annoyed with it :tired_face:

I just asked their IT department to create a memory dump if it happens, so you will hear from me soon.

Hello,

We at last got a dump file for this issue. As I’ve mentioned before, the application just hangs, and after a restart of the client app (killing it using the Task Manager and restarting it) our client can work again. This happens between 2 and 5 times a day. Visual Studio and WinDbg gave me the following information:

If you want, we can send you the dump file, but it’s quite large, so we could use WeTransfer. To which email address can we send it?

Thank you.

Please send it or a link to it to support@
However, that information is way too low-level.
Could you also provide a netstat -a result when the app hangs?