Large number of TCP connections in the TIME_WAIT state

Hello support.
We are developing a server in Delphi, using the TROIndyTCPServer component, and a client in .NET, using the IpTcpClientChannel component.
A large load on the server leads to a large number of TCP connections in the TIME_WAIT state. The number of such connections may grow and
exceed system resources, which results in errors when connecting to the server.
Another concern is that this behavior could make the server vulnerable to DoS attacks when it is accessible via the internet.

To reproduce:

  1. Run (on different computers in a local network) the MegaDemo client from the .NET examples and MegaDemoServer from
    the Delphi examples.
  2. Activate the Indy TCP server in MegaDemoServer.
  3. Select the TCP Channel option in the MegaDemo client and click the Run Multiple Tests button.
  4. Using the NetStat or TCPView tools, you can see TCP connections in the TIME_WAIT state.
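To quantify the buildup seen in step 4, the netstat output can be filtered programmatically instead of eyeballing it. A minimal sketch in Python; the helper name and the sample output fragment are made up for illustration:

```python
# Count TIME_WAIT entries in the text produced by `netstat -an`.
def count_time_wait(netstat_output: str) -> int:
    return sum(1 for line in netstat_output.splitlines()
               if "TIME_WAIT" in line.upper())

# Example with a captured fragment of Windows-style netstat output:
sample = """\
  TCP    192.168.0.10:8090    192.168.0.20:49152   TIME_WAIT
  TCP    192.168.0.10:8090    192.168.0.20:49153   TIME_WAIT
  TCP    192.168.0.10:8090    192.168.0.20:49154   ESTABLISHED
"""
print(count_time_wait(sample))  # 2
```

Running this periodically against live netstat output shows whether the TIME_WAIT count keeps growing under load or plateaus.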

Please advise how to resolve this problem.


Can anybody from RemObjects support help me with this issue?

Sorry for the delay; we are working on it.

According to TCP, TIME_WAIT is a valid state, and by default it lasts 4 minutes.
Please see MSDN for how to reduce this value (the TcpTimedWaitDelay registry setting).

Of course I am aware that TIME_WAIT is a valid connection state. But that does not mean accumulation of such connections is valid behaviour for a server application! In my opinion, a server application should never initiate the disconnection, so that TIME_WAIT connections accumulate on the client side (which is obviously less critical).
I want to repeat once more: uncontrolled accumulation of TIME_WAIT connections on the server side can be a serious problem (exploited in DoS attacks on publicly available servers, for instance).
So I would like some recommendations on how to build such servers in a reliable way, or an official confirmation from RemObjects that RemObjects SDK cannot be used to build them.
P.S. I am interested in the TCP Indy channel for performance reasons: we got the best performance results using TCP Indy on the server side.

In general, it is better to ask about TIME_WAIT connections in the Indy community, because TROIndyTCPServer is a layer over TIdTCPServer.
Direct access to the underlying TIdTCPServer is available via the IndyServer property.

I was hoping this would be looked at by RemObjects, as the providers of the SDK. Of course I can look into it myself; I can even write a server without using RemObjects SDK at all.

What is the purpose of support from RemObjects in this case? Just to state the obvious?

I bought a product from RemObjects, so I need an answer from RemObjects on how this problem should be handled.

TIME_WAIT is a common problem with TCP, and it's outside the scope of RO SDK.
See this post for more details.

The article Avoiding the TCP TIME_WAIT state at Busy Servers may also be useful for you.

I KNOW it is a common problem. You can stop repeating this.

That's why I ask: how does RemObjects handle it in THEIR product (RemObjects SDK)?
I bought RemObjects SDK to build publicly accessible application servers.
How do you propose I make them reliable using RemObjects SDK?

Are you telling me that it is not possible (outside the scope of RemObjects SDK)?
Well, in that case you should state it more clearly on the product description page.

I’m not sure what you expect us to do. You have been told the common way of fixing it, and you are the first person I can remember asking about this.

You are testing an extreme condition and, IMO, setting a lower TIME_WAIT value should solve it.


Mike Orriss
CFO, RemObjects Software

OK, I will try to explain my expectation.

I expect that when I use a tool designed for building servers, the tool will handle all “common” problems itself.

For example, when I use Microsoft IIS or Apache, I do not have to resolve problems like TIME_WAIT connections; it is all handled internally.

Could you please repeat this “common way of fixing” for me? What exactly have you told me to do?

I will

Personally, I use keep-alive connections with the Nagle algorithm switched off. Then you won’t have an issue with TIME_WAIT.
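For reference, both of these are standard TCP socket options, not something specific to Indy or .NET. A minimal Python sketch showing how they are set on a plain socket:

```python
import socket

# A bare TCP socket with the two options discussed above applied.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable the Nagle algorithm: small writes are sent immediately
# instead of being coalesced while waiting for an ACK.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Enable keep-alive probes so an idle connection is kept open and reused
# rather than closed and reopened for every request.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
keepalive = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(nodelay, keepalive)
sock.close()
```

With the connection kept alive, each client holds one established connection for its whole session, so connections are not constantly torn down and no TIME_WAIT entries pile up per request.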

Are your tests real-world? Do they represent actual usage of your application? If you will have a small number of users creating a lot of requests over a long period, then use keep-alive connections. If you will have a very high number of users each creating only a small number of requests, then read on…

You are testing the application on a LAN, so you can exhaust the available connections easily. On Windows Server 2003 with the default configuration you will get about 33 connections per second before running into trouble. On newer servers, after reconfiguring the TCP TIME_WAIT interval to wait for less time and increasing the available pool of ephemeral ports, you can push this up to 500+ connections per second.
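The arithmetic behind figures like these is simple: each short-lived connection holds one ephemeral port for the full TIME_WAIT interval, so the sustainable rate is roughly (port pool size) / (TIME_WAIT seconds). A rough sketch; the pool sizes and delays below are illustrative assumptions, not exact Windows defaults, so the resulting rates will differ from machine to machine:

```python
def max_conn_rate(port_pool: int, time_wait_s: int) -> float:
    # Each connection occupies one ephemeral port for time_wait_s seconds,
    # so the steady-state rate is pool size divided by the hold time.
    return port_pool / time_wait_s

# Legacy-style defaults (assumed): ~4000 ephemeral ports, 240 s TIME_WAIT.
print(round(max_conn_rate(4000, 240)))   # ~17 connections/s

# After tuning (assumed): ~64000 ports, 30 s TIME_WAIT.
print(round(max_conn_rate(64000, 30)))   # ~2133 connections/s
```

This also shows why the two knobs mentioned above (shorter TIME_WAIT, larger port pool) multiply together: improving either one raises the ceiling linearly.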

This really is an operating-system configuration issue rather than something specific to RO. Your argument that Apache and IIS handle this for you does not appear to be correct, as there are plenty of people asking the same questions about those servers.

good luck.

I think the nub of the issue is that the OP feels a TCP server should not accumulate TIME_WAIT connections. The side of the connection that initiates the disconnect is the one left with the port in TIME_WAIT, so the client should initiate the disconnect and the port should sit in TIME_WAIT on the client side.
This is different from HTTP connections, where the server often responds and then disconnects.
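The point about which side ends up in TIME_WAIT can be observed directly. A Linux-only Python sketch (it reads /proc/net/tcp, so it will not run on Windows): the server side closes first, and afterwards it is the server’s port that sits in TIME_WAIT (kernel state code 06):

```python
import socket
import time

def port_states(port: int) -> list:
    """Return kernel state codes for loopback TCP sockets on `port` (Linux /proc/net/tcp)."""
    states = []
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_port = int(fields[1].split(":")[1], 16)
            if local_port == port:
                states.append(fields[3])  # hex state code; "06" means TIME_WAIT
    return states

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket()
cli.connect(("127.0.0.1", port))
conn, _ = srv.accept()

conn.close()            # the server side sends the first FIN ...
cli.recv(1)             # client sees EOF (empty read)
cli.close()             # ... and the client answers with its own FIN
time.sleep(0.2)         # give the kernel a moment to finish the close handshake

states = port_states(port)
print("06" in states)   # the closer's (server's) port is now in TIME_WAIT
srv.close()
```

Swapping the close order (client closes first) would leave the TIME_WAIT entry on the client’s ephemeral port instead, which is exactly the behaviour being argued for here.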

The OP appears to be saying that the RO server is initiating the disconnect, which appears to be correct, as the following code exists in uROIndyTCPServer:

if not KeepAlive then

This is the issue: it should be the client calling disconnect, which would stop the server from accumulating TIME_WAIT sockets.


Thank you, Will. KeepAlive in conjunction with disabling the Nagle algorithm is what was needed.
I think the TROIndyTCPServer component should be initialized with KeepAlive = True and DisableNagle = True by default.

I’m sorry, but changing the default values could break the logic of existing projects…