Using ROHTTPServer to serve REST clients - is the GET response body size limited somehow?

Hello,

I have just discovered an issue, or at least a difference, between the Indy and the RO HTTP servers when used in conjunction with the API Dispatcher facilities to serve REST clients.

I have a method that returns a catalog, and in some cases the response is fairly big (3.07 MB according to Postman). With the ROHTTPServer enabled, the request ends in an interrupted connection: cURL reports error 18 (partial file) after around 1 MB has been downloaded (IIRC), and Postman doesn’t show any result even though the response code is 200.

Changing to IndyHTTPServer, all else being equal, solves this issue.

Connecting to localhost with either server works fine; it’s connecting from “the outside” that fails. The “outside” means the traffic is being port-forwarded by my firewall, and I would be willing to blame some step in that process, but seeing that the Indy server works fine, I think something else is going on. I haven’t been able to test from the outside without any port forwarding.

Any idea how to track this down or solve it? I have gzip compression enabled on both servers with the same cutoff (4096, the default IIRC). Keep Alive is also false, although it was enabled before and made no difference.

From “outside”, the request takes around 1200 ms to complete in Postman; against localhost it takes around 350 ms.

I forgot to add: I am using version 10.0.0.1579 of the RemObjects SDK.

Hi,

Can you grab the server responses for both servers with the HttpServer.OnWriteToStream event and compare them, please?

They should be equal for both servers for the same request.
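
Something like this should work as a rough sketch (the exact OnWriteToStream handler signature may differ in your build, and the data module and file names here are just placeholders):

    // Rough sketch: assumes the event hands over the outgoing response as a TStream.
    // Needs Classes (TStream, TFileStream) in the uses clause.
    procedure TServerDataModule.ROHTTPServerWriteToStream(aStream: TStream);
    var
      lDump: TFileStream;
    begin
      lDump := TFileStream.Create('ro_response.bin', fmCreate);
      try
        aStream.Position := 0;
        lDump.CopyFrom(aStream, aStream.Size);
      finally
        lDump.Free;
      end;
      aStream.Position := 0; // restore the position so the server still sends the data
    end;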

I’ll add some logging for that and will post the results here.

Ok, so, running locally at least, there is no difference in the contents of the stream for this request. I’ll see if there is some difference when running “remotely”, and will also try to capture the response via Wireshark or something similar…

Hello EvgenyK,

I don’t see any difference in the streams being written when logging the OnWriteToStream event.

Running the captures through Wireshark does show a lot of differences, but I don’t have the knowledge needed to interpret them. What is very clear, at least, is the length of the packets sent by the Indy HTTP server:

And the RO one:

I don’t know if this helps track down the problem, or gives you a hint about what I could try modifying here?

Thanks!

Hi,

At least the RO server generates a correct response, which is good.

the same :frowning:

Try changing the g_BufSize constant in uROSocketUtils.pas from 512 to 4096, 16384 or a similar value and retest.
Does it improve the situation?
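
i.e. something along these lines in uROSocketUtils.pas (the exact value is just an example):

    const
      g_BufSize = 16384; // default is 512; try 4096, 8192 or 16384 and retest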

Hello Evgeny,

I changed the buffer size to 8192 and the call completed correctly. I haven’t tried other values, so I don’t know whether it is something related to that buffer size in particular or whether anything bigger will work.

I assume you haven’t been able to replicate this problem there? It doesn’t seem to be related to the data being returned, but I can send you the output and maybe you can try to replicate it by just returning that payload?

Edit: I have changed it to 16k and it still works. It appears to be more efficient (it takes around 0.5 seconds less than with 8k as the buffer, from 3 down to 2.5 seconds). The “way” the result is presented/processed by Postman is somewhat different from the Indy server… it feels less snappy. I am changing this to 4k to see what happens, and will then compare against Indy regarding the “latency” in showing the result (although I do remember it took more or less the same time with Indy).

Edit2: With 4k it works too. The “latency” seems the same as with 16k. It was probably the same with 8k as well, although across several tests it usually took a little bit more. I’m switching to Indy now to try…

Edit3: I have switched to Indy and there is in fact a noticeable difference in the time it takes: the request with the RO server usually takes around 2.5 seconds, while with Indy it takes 1.2 seconds. Now that I have paid attention to it, it’s not that the result is processed differently by one or the other; it just takes half the time to present the results:

With the RO server and a 4k buffer:
(screenshot)

With the Indy server:
(screenshot)

Hi,

I’ve changed the default buffer size to 16 KB and added the possibility to change this value at runtime.
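
For reference, the runtime override could look roughly like this (sketch only, assuming the former g_BufSize constant is now exposed as a writable global in uROSocketUtils; the actual name or mechanism in the new build may differ):

    uses
      uROSocketUtils;

    procedure ConfigureSocketBuffer;
    begin
      // hypothetical: g_BufSize as a writable global instead of a constant
      g_BufSize := 16 * 1024; // set before the server starts listening
    end;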

Users have reported that the socket version became slower since a specific build of Windows 10, but we haven’t found the reason for this :frowning: