Server memory increase

Hi,

I am testing a .NET RO server with JMeter, hitting it with some heavy load. I notice the memory of the server increases quite a bit and does not seem to go down. It can start at around 30 MB and reach 150 MB. The service method that I am testing does some database queries and returns a JSON string. I use the using statement for all ADO.NET objects (SqlConnection, SqlCommand, etc.), so everything should be disposed correctly. Is this to be expected in a .NET environment?
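
The code follows this general shape (just a sketch, not my actual code; the method name, connection string parameter and "Customers" table are placeholders):

using System.Data.SqlClient;

// Each ADO.NET object sits in its own using block, so Dispose always runs,
// even if the query throws.
public string RunQuery(string connectionString)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
    {
        connection.Open();
        // ExecuteScalar returns the first column of the first row.
        object count = command.ExecuteScalar();
        return "{\"count\":" + count + "}";
    }
}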

Thanks,
Thom

I have no experience with Remoting SDK, but a lot with .NET.
And one of the drawbacks I have always seen with .NET is the heavy memory usage.
30 MB to 150 MB for a big project is not something to worry about.

For more about garbage collection, see: Fundamentals of garbage collection - .NET | Microsoft Learn

Note this point in this topic:
Reclaims objects that are no longer being used, clears their memory, and keeps the memory available for future allocations.

This means that disposing of an object clears that memory inside the runtime, but the memory doesn't have to be given back to the OS. Application-wise, that memory will still appear to be in use. This is done to lessen the overhead of OS memory allocation.

To add to/clarify what Theo said: you cannot judge a .NET process’s memory load by looking at task manager, since the .NET runtime maintains its own memory pool and doesn’t ask the OS for, or give back, memory on every single allocation.

For one, the GC may run lazily and reclaim memory later than it could, if the memory isn’t needed yet; for another, even after memory is reclaimed by the GC, the runtime might not release it back to the OS immediately, but keep it for future reuse (though it would free it if the OS needed it for a different process).
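
A quick way to see the difference for yourself (just a diagnostic sketch, nothing RO-specific):

using System;
using System.Diagnostics;

class MemorySnapshot
{
    static void Main()
    {
        // What the GC tracks as allocated managed memory (no collection forced).
        long managedBytes = GC.GetTotalMemory(forceFullCollection: false);

        // What the OS has mapped to the process; roughly the task manager number.
        // It includes memory the runtime keeps reserved for future allocations.
        long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine("Managed heap: {0:N0} bytes", managedBytes);
        Console.WriteLine("Working set:  {0:N0} bytes", workingSetBytes);
    }
}

The gap between the two numbers is memory the runtime is holding on to but that the GC is not currently using for live objects.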

All of that is not to say that there might not be a leak in your app, of course.

Hello

There might be several reasons for such an increase.

First, it might happen when the database connection is opened for the first time. At that moment the .NET runtime might load and initialize the DB access driver, with all its codebase and internal pools.
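
If you want to take that first-touch cost out of the measurement, you could warm the provider up once at startup, something like this (just a sketch; the connection string is only a sample):

using System.Data.SqlClient;

static void WarmUpDatabase()
{
    // "Min Pool Size" asks ADO.NET to pre-create pooled connections, so the
    // first real request does not pay for driver and pool initialization.
    const string cs = "Server=.;Database=TestDb;Integrated Security=true;Min Pool Size=5";

    using (var warmup = new SqlConnection(cs))
    {
        warmup.Open();
    }
}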

The second thing you need to think about is how exactly you compose that JSON string. Operations like

stringValue = "{" + someOtherStringValue + "}"

have an effect on the managed heap, because each concatenation allocates a brand-new string. In the case of heavy string operations (like composing a JSON string manually), one should use StringBuilder instead of plain string + string concatenation.
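
A rough sketch of the difference (the JSON shape and names are made up):

using System.Text;

// Every "+" creates a brand-new string on the managed heap:
// string json = "{\"name\":\"" + name + "\",\"age\":" + age + "}";

// StringBuilder appends into one growing buffer and allocates a single
// string when ToString is called:
static string BuildJson(string name, int age)
{
    var sb = new StringBuilder();
    sb.Append("{\"name\":\"").Append(name)
      .Append("\",\"age\":").Append(age).Append("}");
    return sb.ToString();
}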

And the third thing, as stated above by marc, is that .NET applications have their own memory management and garbage collection code. It might well be that the .NET runtime waits until some memory consumption threshold is met before running the Garbage Collector. So it might not be a real increase in memory consumption, but just reclaimed memory that is still marked as owned by the application.
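
One way to tell a real leak apart from lazy collection (again, only a diagnostic sketch):

using System;

static void ReportLiveMemory()
{
    // Passing true forces a full collection before measuring, so the result
    // counts only objects that are still reachable.
    long liveBytes = GC.GetTotalMemory(forceFullCollection: true);
    Console.WriteLine("Reachable managed memory: {0:N0} bytes", liveBytes);
}

If this number keeps growing across identical load-test runs, something really is being kept alive; if it stays flat while the process size in task manager grows, it is just the runtime holding on to reclaimed memory.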

If you want, you can send us your project and a database dump with some sample data, and we’ll analyze the memory consumption (tbh I am a bit curious too what exactly causes this memory consumption peak).


Coming from the Delphi world, I am just not used to that kind of memory consumption.

My test environment contains two .NET RO servers: a frontend and a backend server. The frontend server takes calls from a browser client and then calls the backend server, which does the brunt of the work.

I drove both servers hard with JMeter, using 30 threads looped 40 times. This was done 5 times, with a roughly 30-second pause between loops.

The memory increased to about 260 MB on the frontend and around 320 MB on the backend server. But it kept going up and down, so I did see the GC at play.

When I took a break and let it rest for about 45 minutes, the memory had decreased to just under 100 MB. So definitely some GC cleanup had taken place.

When I use a memory profiler on the backend server, I do see quite a few strings, plus other objects, that do not get cleaned up. So something seems to be leaking.

Both servers handled the heavy load, though.

// Thom

It is different. Not worse or better, just different. So it requires different approaches. Usually the .NET Framework (especially versions 4.5 and above) and .NET Core are able to manage memory efficiently out of the box.
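
By the way, the GC flavor your server runs under matters for how much memory it holds on to. A tiny check like this (just a sketch) shows the current mode:

using System;
using System.Runtime;

// Server GC (opted into via <gcServer enabled="true"/> under <runtime> in
// app.config) uses a heap per core and tends to hold more memory than
// workstation GC, in exchange for better throughput under load.
static void PrintGCMode()
{
    Console.WriteLine("Server GC: " + GCSettings.IsServerGC);
    Console.WriteLine("Latency mode: " + GCSettings.LatencyMode);
}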

Could you check where exactly these objects were allocated? If you run the load test again several times, are new objects allocated, or are the old ones reused?

Hi Anton,

It seems the old ones are reused. I am using the Redgate ANTS Memory Profiler and I do not see any size diff (in bytes) between calls, and live instances are kept to 1.

What does increase on each call are system types like System.Byte, System.String, System.String[], etc. Are there known leaks in the .NET Framework that you just have to live with? I am using 4.6.1.

// Thom

These might well be some log entries and so on (internals of the .NET Framework that are not actually leaks), or internal MS SQL data access buffers, or something else.
Try running your application in Release mode, where debug info logs are suppressed.
If you are really worried about this, you can create and send us a testcase that reproduces it, and we’ll take a deeper look.