I’m debugging our RO server app and stepping through server code a line at a time, and I want to prevent my client, which is also running, from timing out.
The client is producing a Winsock 10053 error (WSAECONNABORTED, "software caused connection abort") after around 90 seconds of breakpoint inactivity.
We are using SuperTcpChannel and have already maxed out the normal client timeout settings,
and we have prevented the server from timing out by maxing out SessionDuration.
The following properties are maxed out on the client and do not prevent Winsock 10053:
AckWaitTimeout, ConnectionWaitTimeout, ConnectionTimeout, RequestTimeout,
IdleTimeoutMinutes, SkipAck = true, AutoReconnect = true
We believe at this point that it may be out of our control to prevent the Winsock timeout error. Does anyone know if there is a workaround?
Please note that my last post in a different thread resolved SERVER timeout issues. This thread is requesting help with CLIENT timeout issues. They are not the same.
Thank you
RequestTimeout is set to 3600000 (1 hour). I can't set up a test case; it would take too long. I was hoping someone had run across this situation before and could help.
So let me ask you: maybe this isn't an RO setting problem. Are you saying that setting RequestTimeout should prevent the client from timing out while I'm debugging on the server? If so, then this problem may be coming from our own code.
When I get a connection timeout, I assumed it could come from either the client or the server. But maybe I was wrong.
Is it true that connection timeouts only come from the server?
Of course, timeouts can happen both on the server and on the client. For example, RequestTimeout (a property of TROSuperTCPChannel) means that the client didn't get an answer from the server within the specified time; that timeout fires on the client.
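To illustrate the distinction, here is a minimal sketch in plain Python sockets (not RO code; the names and the 0.5-second timeout are just illustrative). The server accepts the connection but never replies, simulating a server thread paused at a breakpoint; the client's receive timeout then fires locally, which is how a client-side setting like RequestTimeout behaves.

```python
import socket
import threading

def silent_server(ports, ready):
    # Accept the connection but never send a reply,
    # simulating a server thread paused at a breakpoint.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))      # OS picks a free port
    ports.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    threading.Event().wait()        # hold the connection open forever

ports, ready = [], threading.Event()
threading.Thread(target=silent_server, args=(ports, ready), daemon=True).start()
ready.wait()

cli = socket.socket()
cli.connect(("127.0.0.1", ports[0]))
cli.settimeout(0.5)                 # analogous to a client-side request timeout
cli.sendall(b"request")
try:
    cli.recv(1024)                  # no answer will ever arrive
    result = "replied"
except socket.timeout:
    result = "client-side timeout"  # raised locally, on the client
print(result)
```

Note that the exception is raised by the client's own socket layer; the server never sends anything, which is the point: this class of timeout originates on the client regardless of what the server is doing.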
You could check how timeouts work with a simple test case: create a standalone VCL server with the help of our template, select TROSuperTCPServer in the Advanced options of the project template, and do some experiments with timeouts while debugging the server.