Windows Handles leak from .NET service?

Hi.
I’ve used RO for a number of years, but recently started to look more deeply into a resource issue encountered within a couple of our services; the bulk of the problem being poorly behaved printer drivers.

However, as a consequence I have been monitoring the number of Handles that Windows services (using RO) hold over time, and I notice growth in this even in relatively simple services with no real ‘outside’ contact, other than access to the Registry (to read info), the Event subsystem (to write event log entries) and a remote SQL Server.

Typically I’m using C#, RemObjects SDK for .NET

In the system under scrutiny (deployed last year) I am using build V8.3.91.1167.

This problem is not ‘massive’, but it is somewhat alarming…
Memory usage is ‘good’, but in a simple application delivering responses over HTTP I see 7,000+ handles on a minimally used service.

The handles seem to be type ‘Thread’. In this service application we are not creating any threads (ourselves) but are purely relying upon the RO framework to handle the pool of threads:

[RemObjects.SDK.Server.ClassFactories.PooledClassFactory( 16, PoolBehavior.Wait, false)]

I also notice more handles into the system event log than expected, even though it’s only the documented method of writing events that is being used.

However, I noticed that one of the services, otherwise quite similar in approach and technology (same RO version, etc.), was not exhibiting this issue, and I found that intriguing. In both we have a management thread (created outside the RemObjects aspect of the application). It’s a ‘worker’ thread for our service to do some occasional activities every few minutes (otherwise it is mainly in a 100 ms interval sleep state), and in there it was decided to make a call to GC.Collect().

So I added this GC.Collect() to the service which was showing ‘leakage’ (which also has an additional ‘CMC’ thread, in the same way). Hey presto… the Handle ‘leak’ no longer happens!
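
For reference, the ‘CMC’ maintenance thread is roughly of this shape. This is a simplified sketch rather than our actual code; the class name, the five-minute interval and the housekeeping work are illustrative, but the 100 ms sleep and the GC.Collect() call are as described above.

using System;
using System.Threading;

// Simplified sketch of the periodic maintenance ("CMC") thread described above.
class MaintenanceWorker
{
    private volatile bool fStopRequested;
    private Thread fThread;

    public void Start()
    {
        fThread = new Thread(Run) { IsBackground = true, Name = "CMC" };
        fThread.Start();
    }

    public void Stop()
    {
        fStopRequested = true;
        fThread.Join();
    }

    private void Run()
    {
        DateTime lastMaintenance = DateTime.UtcNow;

        while (!fStopRequested)
        {
            Thread.Sleep(100);   // mostly idle, sleeping in 100 ms intervals

            if ((DateTime.UtcNow - lastMaintenance).TotalMinutes >= 5)
            {
                // ... occasional housekeeping goes here ...

                // Forcing a collection here is what keeps the handle count down
                // (the effect observed above).
                GC.Collect();
                lastMaintenance = DateTime.UtcNow;
            }
        }
    }
}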

But in my more ‘simple’ application, which has no manually created ‘CMC’ thread, I do not have a ready place to add the call to GC. Yes, of course I can create a thread to do this, but I wondered if there was actually some leak in the way the .NET RO despatcher handles things…

We are not using any ‘STATIC’ stuff within the Implementation and AFAIK the implementation is quite ‘clean’ in terms of properly disposing of any objects it creates…

Are you aware of any areas in which RO may leak handles in the way described?

I intended to attach a small screenshot showing something slightly weird… but I can’t seem to do that on this portal. I can send a .jpg if you can provide the method.

Thanks,

Clive

Hi

Does anybody have some thoughts on this?

Thanks,

Clive

Hello

If GC.Collect removes these handles then they are not ‘leaked’. They just wait until .NET decides to run the Garbage Collector and fully collect them.

Btw this setting

[RemObjects.SDK.Server.ClassFactories.PooledClassFactory( 16, PoolBehavior.Wait, false)]

doesn’t affect any threads. It describes the way new Service instances are created. E.g. the Standard class factory creates a new Service instance for every incoming request. The Pooled class factory, in turn, provides a pool of service instances to save some time on service instantiation. There is a small caveat here: because the Service instance stays alive (is not Disposed), any instance fields of the service class are also not freed. This might result in some objects having a longer lifetime than expected. The Service class provides a virtual InternalDeactivate method where one can clean up such fields.
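
As an illustration only (the class and field names are made up, and the exact InternalDeactivate signature may vary between SDK builds, so please check the Service base class in your version), clearing such per-request state could look roughly like this:

// Sketch: release per-request state when a pooled service instance is deactivated.
// "PinpointService" and "fRequestBuffer" are illustrative names.
public class PinpointService : RemObjects.SDK.Server.Service
{
    private byte[] fRequestBuffer;   // example of state that should not outlive a request

    protected override void InternalDeactivate()
    {
        // Clear anything that should not stay alive while the instance sits in the pool.
        fRequestBuffer = null;
        base.InternalDeactivate();
    }
}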

Regards

Hi Anton

OK, I understand they aren’t leaked…

But put another way, if I don’t use GC.Collect, I am seeing a great many thread handles building up over time (thousands and thousands). Is this expected? Is it normal?

On your point about the PooledClassFactory, surely that does affect the way the listener despatches inbound connections and, consequently, whether it creates a new service instance or waits for a free one? Surely that is related to the creation of threads to support the service instance, since each implementation runs in its own thread, doesn’t it?

The essential point is I have an RO service which seems to grow the number of handles over time, and I can’t explain why… Can you?

Thanks

Hello

A Service instance is not tied to a thread. The ClassFactory just provides a service instance to a calling thread. Different ClassFactories use different ways of acquiring instances, but they don’t use any threading stuff.

Unfortunately, it seems that I know what causes the issue. Please take a look at ‘Handle leaks with .NET System.Threading.Thread class’ on Stack Overflow and https://social.msdn.microsoft.com/Forums/vstudio/en-US/24484104-07f0-4493-9f3c-f3145ab4257b/threads-creating-handle-leak?forum=clr (the very last post).
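
The effect can be reproduced outside of RO SDK with plain System.Threading.Thread. Here is a small console sketch (watch the process handle count in Process Explorer while it runs; exact numbers vary because the GC may run on its own in between):

using System;
using System.Threading;

// Each managed Thread object keeps its OS thread handle alive until the Thread
// object itself is finalized, so handles pile up even after the threads exit.
class ThreadHandleDemo
{
    static void Main()
    {
        for (int i = 0; i < 5000; i++)
        {
            Thread t = new Thread(() => { /* exit immediately */ });
            t.Start();
            t.Join();   // the OS thread is gone, but its handle is still held
        }

        Console.WriteLine("Threads finished - handle count stays high until a GC runs.");
        Console.ReadLine();

        GC.Collect();                      // finalizing the dead Thread objects
        GC.WaitForPendingFinalizers();     // releases their handles
        GC.Collect();

        Console.WriteLine("After GC.Collect the handle count should drop.");
        Console.ReadLine();
    }
}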

Still, could you provide some more info, i.e. which runtime (.NET version) you use for your service and exactly which server channel you use?

Regards

As well as the comment below, is there a place I can send you a screenshot illustrating quite how many ‘Thread’ handles I am seeing?

Hi

Thanks - at least there is some documented information about this issue so I am not imagining it all…

Some information:
I am using VS2010 to build a .NET Framework 4 solution. It is set for Any CPU (32-bit or 64-bit according to the platform on which it is running).

I am using RemObjects 8.3.91.1167, which seems to use the .NET 2.0 framework.

I use different channels according to the nature of the requirement, and see issues with both:

IpHttpServerChannel (used when usage is from many clients with brief connections)

IpSuperTcpServerChannel (used when (small number of) clients keep a connection and perform many activities successively)

What else would you like to know?

Thanks,

Clive

Hello

You can send it to support@

Not exactly. If your server runs under .NET 4.0 then RO SDK also executes under this framework. Still, it seems that MS is not going to fix this bug at all.

Could you try to run only one kind of client to see which one causes the faster handle leak?
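
To compare them you could also log the process handle count from inside each service from time to time. This uses only standard System.Diagnostics, nothing RO-specific; where you write the value (event log, trace file, etc.) is up to you:

using System;
using System.Diagnostics;

static class HandleMonitor
{
    // Logs the current Win32 handle count of this process.
    public static void LogHandleCount()
    {
        using (Process p = Process.GetCurrentProcess())
        {
            Trace.WriteLine(string.Format("{0:u} handle count: {1}", DateTime.UtcNow, p.HandleCount));
        }
    }
}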

It is odd that .NET itself doesn’t trigger a GC run before the thread handles become a problem.

I’ll log this as a bug, but fixing it will take some time.

Regards

Thanks, logged as bugs://76531

Thanks for the update.

A service will use only one or the other kind of channel; i.e. it creates either a SuperTcp or an Http channel. This is specified through code within the service. I would guess that the IpHttpServerChannel causes faster leaks, but it is difficult to say, since I have GC.Collect in place within the additional thread of the service which uses IpSuperTcpServerChannel.

so…
pinpointAPI: Http (with SSL certificate on the listener). No GC.
pinpointCentral: Http. With GC.Collect.
pinpointPAS: Http. No GC.
pinpointRODA: SuperTcp. With GC.Collect.

Whether or not we have GC.Collect is down to whether the service has the ‘extra’ thread doing periodic maintenance activities, in which we occasionally make the GC.Collect call.

Choice of channel is down to the nature of the client connections required/appropriate and will always be the same.

Thanks,

Clive

So what were the results? Btw, this ‘leak’ should not be visible when SSL is enabled, because SSL has higher memory consumption and will most probably cause the GC to do its work before the handle effect becomes noticeable.

Could you show the signature of the method you’re calling?
Also, if you’ve been running your service under a memory profiler then most probably you have stack traces for these Thread objects. Could you show them?

Regards

Hi Anton

Hope you are well. Do you require any further information from me at present?

Are you able to provide me with any new information on this? Has the company been able to commence investigations into this issue with a view to developing a resolution?

Thanks,

Clive

Hello

We’re working on it. It is not that easy to introduce the fix in a proper way (meaning, without breaking the internal abstraction layers of the Connection and Channel objects). It requires significant refactoring, so it may (and will) take some time. We’ll ping you once we have a build containing a solution for this.

Regards

bugs://76531 got closed with status nochangereq.