IpSuperTcp channels: SuperChannel TimeOut or Actively refused

Did you try to adjust the settings?

Setting KeepAlive to false results in a much slower program…

Setting the registry key didn’t help.
We didn’t get permission to disable TCP auto-tuning.

We just had the SuperChannel timeout problem with another of our clients, in a completely different program… We are starting to get worried, and this is without VPN connections…

As am I.

Ok. Let’s evaluate this one more time:

  1. When did this start to happen? Were there any changes in hardware or software prior to this?
  2. Is it possible to try, in your environment, a server and client app built using the latest v10 version (either with SuperTCP-based connections or with HTTP + Keep-Alive set to true)? There were changes addressing a possible connection leak (however, that was on the client side).

If the situation persists, then please add this code to the application startup (these should literally be the first lines):

		// Requires: using System.Runtime.ExceptionServices;
		AppDomain.CurrentDomain.FirstChanceException +=
			(object source, FirstChanceExceptionEventArgs e) =>
			{
				// Log every exception at the moment it is raised, before any catch block runs
				Console.WriteLine("{0}: FirstChanceException raised: {1}",
					DateTime.UtcNow, e.Exception);
			};

Instead of Console.WriteLine there should be a call to some logger whose output persists even after the application exits. This code would allow us to check whether some exception prevents the RO SDK from shutting down the sockets.
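
For example, a minimal sketch of such a logger (the log file name here is just a placeholder): simply appending each entry to a file means that everything written so far survives even if the process is killed.

		// Requires: using System.IO; and using System.Runtime.ExceptionServices;
		AppDomain.CurrentDomain.FirstChanceException +=
			(object source, FirstChanceExceptionEventArgs e) =>
			{
				// AppendAllText opens, writes and closes the file on every call,
				// so entries already on disk survive a crash or a kill.
				File.AppendAllText("FirstChanceExceptions.log",
					DateTime.UtcNow.ToString("u") + ": " + e.Exception + Environment.NewLine);
			};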

  1. It started to happen in March of this year. At first it was a weekly or bi-weekly problem, but since June it’s practically daily. No changes in hardware or software. Before March it had worked for more than a year.

  2. We will install the v10 version next week and release a new build for both programs.

Should we add this logging code to our server and client applications?

Thank you for thinking along with us.

Maybe a significant increase in workload or in the number of clients?
(BTW, is TLS protection enabled on the clients?)

Thanks!

Yes, if possible

Oh. One more thing.
If/when the server side is again in the FIN_WAIT state, could you also run netstat on the corresponding client host to check whether there is a set of CLOSE_WAIT connections?

The only thing that happened is that they are in the process of upgrading the client PCs to Windows 10.
But that’s not the case with the other customer who had the same issue today.

The number of clients increased over time from 65 to approximately 90.

TLS Question:
[screenshot attached]

Theoretically, there could be an issue with RO SDK socket management that has been revealed by the Windows 10 TCP protocol implementation (or by a bug in it).

Anyway, when this happens next time, please try to gather both client and server netstat results.
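
If catching the right moment with netstat by hand turns out to be difficult, here is a minimal sketch (assuming the apps run on .NET Framework or .NET with System.Net.NetworkInformation available; the class, method and log path are just examples) that dumps the same TCP connection table from inside the application, e.g. on a timer or whenever a request fails:

	using System;
	using System.IO;
	using System.Linq;
	using System.Net.NetworkInformation;

	static class TcpSnapshot
	{
		// Appends the current TCP connection table to a log file.
		// Roughly the established/closing part of "netstat -n" output
		// (listening sockets are not included by GetActiveTcpConnections).
		public static void Dump(string logPath)
		{
			var connections = IPGlobalProperties.GetIPGlobalProperties()
				.GetActiveTcpConnections();

			var lines = connections
				.OrderBy(c => c.State)
				.Select(c => string.Format("{0,-25} {1,-25} {2}",
					c.LocalEndPoint, c.RemoteEndPoint, c.State));

			File.AppendAllText(logPath,
				DateTime.UtcNow.ToString("u") + Environment.NewLine +
				string.Join(Environment.NewLine, lines) + Environment.NewLine);
		}
	}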

I’ll do my best to gather the netstat results.

Hello,

An update on the issue:
The problem has occurred less frequently since upgrading to RemObjects 10. BUT when it occurs, they don’t get the ‘Actively refused’ or ‘SuperChannel timeout’ errors anymore. The application just hangs. After killing it with Task Manager and restarting, they can continue working. We are waiting for a detailed list of how many times it occurs, which OS it occurs on, … so we can let you know.

Does this happen server- or client-side?

Is it possible to attach a debugger once this happens, to see where exactly execution is paused / which exceptions are raised?

A post was split to a new topic: .NET RO server hanging if it is running over night

It happens client-side. I just received an email that it now occurs daily, and sometimes multiple times a day.

It’s not possible to attach a debugger. Could it work if I ask them to create a dump file and debug it in my Visual Studio to see what happened?

I assume a simple restart of the client app clears this out?

You mean you would ask them to create a memory dump using Task Manager and then you would open this file in Visual Studio? Yes, this would help.

Yes. They have to kill the client app with Task Manager and reopen it. They are already making my ears bleed today because they are very annoyed with it :tired_face:

I just asked their IT department to create a memory dump if it happens, so you will hear from me soon.

Hello,

We at last got a dump file for this issue. As I’ve mentioned before, the application just hangs, and after a restart of the client app (killing it using Task Manager and starting it again) our client can work again. This happens between 2 and 5 times a day. Visual Studio and WinDbg gave me the following information:

If you want, we can send you the dump file, but it’s quite large, so we would use WeTransfer. To which email address can we send it?

Thank you.

Please send it or a link to it to support@
However, this information is way too low-level.
Could you also provide a netstat -a result when the app hangs?

Hello,

Here we are again. A new year, same problems… So, the problem got a lot better since our last post, but now it has just exploded. “Connection refused” problems every day when some users start the application (if the application is already running, it’s mostly OK), sometimes multiple times… The last thing we tried was to disable DeepSec on the server side, but it didn’t help. We’ve attached the netstat files (a baseline when everything is OK and a file from when the issue occurs). Our application runs on TCP port 8090.

Thank you.

NetstatABN Without DeepSec With issue 26 02 2021.txt (46.5 KB)
Baseline NetstatABN Without Deep Sec 24 02 2021.txt (22.9 KB)

Hello

There are several interesting yet worrying entries in the files you provided.
First, let’s take a look at the connections to port 8090. Almost all of them are in the ESTABLISHED state, as they should be.

Now let’s take a look at something that listens on port 82 (AFAIK it is something Tor-related).
There are a dozen leaked sockets for connections to this port (TIME_WAIT state).
There are also a lot of leaked IPv6 connections (lines like

TCP [::1]:58823 [::1]:83 TIME_WAIT

)
And there are leaked sockets described by lines like

TCP 10.198.51.6:58773 10.197.18.196:443 TIME_WAIT

It looks like something tries to connect via HTTPS to some host and fails to close the connection properly.

Please note that none of these ports are related to your server app.
Could you contact your network administrator and check if there is any unwanted software on your server host?
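
In case it is useful for comparing future captures, here is a minimal sketch (the default file name is just a placeholder; pass the path to one of the attached .txt files instead) that summarizes the TCP part of a saved netstat dump per remote port and state, which makes this kind of leak easy to spot against the baseline:

	using System;
	using System.IO;
	using System.Linq;

	class NetstatSummary
	{
		static void Main(string[] args)
		{
			// Path to a saved netstat capture (e.g. "netstat -a -b -n" output redirected to a file).
			var path = args.Length > 0 ? args[0] : "netstat_capture.txt";

			var rows = File.ReadLines(path)
				.Select(l => l.Trim())
				.Where(l => l.StartsWith("TCP"))
				.Select(l => l.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
				.Where(p => p.Length >= 4) // Proto, Local Address, Foreign Address, State
				.Select(p => new
				{
					RemotePort = p[2].Substring(p[2].LastIndexOf(':') + 1),
					State = p[3]
				});

			// Count connections per remote port and state, worst offenders first.
			foreach (var entry in rows.GroupBy(r => new { r.RemotePort, r.State })
				.OrderByDescending(g => g.Count()))
			{
				Console.WriteLine("{0,5} x :{1,-6} {2}", entry.Count(), entry.Key.RemotePort, entry.Key.State);
			}
		}
	}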

Hello Anton,

Sorry for the delay. I’ve sent your concerns to my client. To be continued.
Thank you very much for taking the time to look into this problem.

Kind regards.