I have tested the requests-per-second performance of an RO.Net server and a Delphi 10.1 RO server. Both were using your Http.sys servers, and in the Service Tester I was using the WinINetHttp client channel. I used Oxygene on the .Net side.
In the Service Tester I was running 50 threads with 200 requests each.
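For anyone who wants to reproduce the shape of that run without the Service Tester, here is a hedged sketch: the server below is a plain stand-in, not an RO SDK server, and the thread/request counts are scaled down, but the requests-per-second figure is derived the same way.

```python
# Sketch of a "N threads x M requests each" throughput test against a
# local stand-in server. Not the Service Tester or an RO SDK server.
import threading, time, urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello World"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]

THREADS, REQUESTS = 8, 25  # scaled down from 50 threads x 200 requests

def worker():
    for _ in range(REQUESTS):
        urllib.request.urlopen(url).read()

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
start = time.perf_counter()
for t in threads: t.start()
for t in threads: t.join()
elapsed = time.perf_counter() - start

rps = THREADS * REQUESTS / elapsed
print(f"{THREADS * REQUESTS} requests in {elapsed:.2f}s -> {rps:.0f} req/s")
server.shutdown()
```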
In all tests the RO.Net server kills the Delphi server. I am seeing almost a doubling of the requests per second using RO.Net.
This surprised me as I would have thought a natively compiled Delphi server would perform much better.
Is the performance difference a result of different coding practices on the two platforms, or does .NET just perform better?
While this is definitely something worth having a look at (performance should be comparable, all else being equal), maybe it's a data point to dispel the "native = better" myth that Embarcadero has been hitching its wagon to for the past 15 years.
Tested with a TMS Sparkle HTTP server (from TMS Software) compiled in Delphi 10.1. This server is a wrapper around the http.sys stack, like the TROWinHttpServer.
For this server I used JMeter for testing.
On my local Win10 VM I get an average of 10,000 requests per second, compared to between 1,300 and 1,400 with RO.Net.
If I put this server on Azure I get an average of 860 requests per second, compared to between 34 and 40 with RO.Net.
Both servers are just returning a "Hello World" string.
I'm very interested in what RO can find out about this; the difference is noticeable enough that it could point to a code bottleneck in RO.
There were several interesting performance benchmarks done a while ago where RO was included.
At that time the conclusion was Synapse plus ScaleMM2, but that's at the level of Delphi as a language; beyond that, the next level is the framework itself. mORMot is a beast; they are definitely doing something right there.
I have to mention several things that might affect the measured performance:
The JSON message is not the most performant one in the RO SDK. After all, reading it involves parsing strings, which is not the fastest operation by definition. The Binary message provides far better performance and far less traffic consumption than JSON.
The Binary message has an optional data-compression feature that is enabled by default. While this option speeds up data turnaround in a real network environment, it will actually decrease the requests-per-second rate in localhost tests. This happens because it is far faster to send a data packet through the loopback interface than to compress the request on the client, send it, decompress the request on the server, compress the response on the server, and decompress the response on the client.
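Both points above can be illustrated with a quick, hedged sketch. The payloads and the binary layout here are invented for the comparison and are not the RO SDK's actual wire formats; the point is only the relative costs.

```python
import json, struct, time, zlib

def timed(fn, n):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - start

# Point 1: parsing a text message vs. unpacking a fixed binary layout.
json_payload = json.dumps({"a": 1, "b": 2.5, "c": "Hello World"})
bin_payload = struct.pack("<id11s", 1, 2.5, b"Hello World")
t_json = timed(lambda: json.loads(json_payload), 50_000)
t_bin = timed(lambda: struct.unpack("<id11s", bin_payload), 50_000)
print(f"JSON: {len(json_payload)} bytes, decode {t_json:.3f}s; "
      f"binary: {len(bin_payload)} bytes, decode {t_bin:.3f}s")

# Point 2: on loopback, a "send" is close to a memory copy, so the four
# codec passes per round trip (compress + decompress for both the
# request and the response) dominate the cost.
payload = b"Hello World " * 400  # ~4.8 KB of compressible data
t_copy = timed(lambda: bytes(payload), 2_000)
t_codec = timed(
    lambda: [zlib.decompress(zlib.compress(payload)) for _ in range(2)],
    2_000)
print(f"raw copy: {t_copy:.3f}s  compress/decompress round trip: {t_codec:.3f}s")
```

On a real network the picture flips, because the bytes saved on the wire outweigh the CPU spent on the codec passes.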
One needs to look at the Delphi RO SDK framework here, because a simple comparison between TROWinHttpServer, the TMS Sparkle http.sys server, and mORMot, which also uses the http.sys kernel driver, is telling:
JMeter was used as the testing client, with a JSON message calling a simple "HelloWorld" method:
mORMot: 50 threads, 150,000 samples: 10,502 requests per second
TMS Sparkle Server: 50 threads, 150,000 samples: 10,025 requests per second
TROWinHttpServer: 50 threads, 150,000 samples: 507 requests per second
The best performance with RO comes from the TROIpHTTPServer (Synapse): 50 threads, 150,000 samples: 2,283 requests per second.
Using a single machine for testing is not ideal, but it does give an indication of performance.
Hi Thom,
I have done an experiment with the servers/channels now, and I think there is a problem with the indyHTTPClient. The server seems OK. Using the indyHTTPServer with the SynapseHTTPClient in a single-threaded test gets me a throughput of 29,904 requests per second. The Indy client channel manages just 3,535 requests per second. To be clear, both of those tests were performed against the exact same server.
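That isolation step (hold the server fixed, swap only the client) can be sketched as follows. The two "client channels" here are stand-ins, not the actual Indy or Synapse channels: one opens a new TCP connection per request, the other reuses a persistent connection, which is one classic way two clients can diverge against the same server.

```python
# Same stand-in server, two different client strategies.
import threading, time, urllib.request, http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Hello(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # allow keep-alive for client B
    def do_GET(self):
        body = b"Hello World"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
host, port = server.server_address
N = 300

def bench(fn):
    start = time.perf_counter()
    for _ in range(N):
        fn()
    return N / (time.perf_counter() - start)

# Client A: new TCP connection for every request.
rps_reconnect = bench(
    lambda: urllib.request.urlopen(f"http://{host}:{port}/").read())

# Client B: one persistent connection reused for all requests.
conn = http.client.HTTPConnection(host, port)
def keepalive_request():
    conn.request("GET", "/")
    conn.getresponse().read()
rps_keepalive = bench(keepalive_request)

print(f"reconnect per request: {rps_reconnect:.0f} req/s  "
      f"persistent connection: {rps_keepalive:.0f} req/s")
server.shutdown()
```

Whatever the actual cause in the Indy channel turns out to be, running both clients against the identical server like this is what pins the difference on the client side.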
Could you let me know if you can repeat this on your test please?
The memory manager will make a difference with lots of threads and lots of objects being created and destroyed. Benchmark IntelTBB (you can find the code on sites similar to those hosting the ShareMemory code you have) and make sure you download the latest DLL from the official Intel site.