10 minute timeout? (again)

Hi,

I thought I didn’t have a problem, but I do. Referring back to this 10 minute timeout? post from a few days ago :-

"I have a server and batch (console app) client setup where the client initiates and waits for long running methods on a custom DA server (Oxygene .NET & SuperTCPServerChannel). The server sends periodic string updates via events which get logged to a database and the console. The method eventually finishes its business and terminates in an orderly way, reporting back to the client. The client then either continues on to the next process or terminates itself.

My only problem is there appears to be a 10 minute timeout that I’m hitting somewhere, which causes the client to disconnect right in the middle of whatever process the server is running. It doesn’t appear to be IPSuperTCPClientChannel.RequestTimeout, which I set to 60000 * 60 * 3 (i.e. 3 hours). The IdleTimeout is untouched, defaulting to 0, which I believe means never disconnect.

Where else should I be looking for a timeout that would trip a disconnection after 10 minutes? This seems to happen no matter what DA server process I’m running…"
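For reference, the client-side channel settings mentioned in that post amount to roughly this (just a sketch; the class and property names are as quoted above, so check the exact casing in your project, and both values are in milliseconds):

    // Sketch of the client-side channel settings described above.
    // Names as quoted in the post; check exact casing in your project.
    method ConfigureChannel(aChannel: IPSuperTCPClientChannel);
    begin
      aChannel.RequestTimeout := 60000 * 60 * 3;   // 60000 ms * 60 * 3 = 3 hours
      aChannel.IdleTimeout := 0;                   // 0 = never disconnect when idle
    end;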

I’ve been testing further and simplifying the processing logic, and I am definitely experiencing a disconnect. This does look to be an issue with my client/server architecture and not application logic, as I’ve just created an iterative 30 second "report, wait, report" loop that stops reporting after ten minutes :-

The console application oxy021 is reporting events passed back from the Sum method. As you can see, after the 10 minute mark there are no further events reported back. I’m seeing exactly this in my real application code.
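The server-side test method is essentially just a loop of this shape (a rough sketch only; SendProgress stands in for however the service already raises its string-update events, it is not an SDK call, and the service class name is a placeholder):

    // Rough sketch of the test loop: report, wait 30 seconds, repeat.
    // SendProgress is a placeholder for the existing string-update event.
    method TestService.Sum(aIterations: Integer): Integer;
    begin
      for i: Integer := 1 to aIterations do begin
        SendProgress(String.Format('Iteration {0} of {1} at {2:HH:mm:ss}', i, aIterations, DateTime.Now));
        System.Threading.Thread.Sleep(30 * 1000);   // 30-second wait between reports
      end;
      exit aIterations;
    end;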

Any help with where to look would be appreciated.

Paul.

Hello

The first thing to check is to make sure that your session is not being disposed due to a timeout (10 minutes is the default Memory Session timeout).

Increase the timeout in the MemorySessionManager properties.
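For example, something like this in the server startup code (a sketch only; the property name and its unit here are assumptions, so please check the MemorySessionManager properties in your project; the point is simply to raise the session lifetime above the 10-minute default and above your longest batch run):

    // Sketch only: the property name and its unit are assumptions, see above.
    method ConfigureSessionManager(aManager: MemorySessionManager);
    begin
      aManager.SessionTimeout := 6 * 60 * 60;   // e.g. six hours, assuming seconds
    end;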

Regards

Thanks Anton, that was it.

What is the best practice approach for this situation :-

  1. Increase the timeout beyond the longest-running process time. Not ideal, but a workaround in the short term.
  2. Call the processes asynchronously, only moving on to the next once the prior process reports completion. Whilst waiting for a process to complete, periodically call a “keep alive” method on the server.
  3. A variation on 2): call synchronously (as now), but have a separate thread call the “keep alive” method (see the sketch after this list).
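A sketch of what 3) could look like on the client (all names here are mine: KeepAlive would be a trivial no-op method added to the service so that each call touches the session, and IBatchService / RunLongCalculation stand in for the real proxy and the real long call; depending on the channel, the keep-alive call may also need its own proxy or channel instance):

    // Ping the server from a background thread while the main thread sits in
    // the long synchronous call; any round trip touches the session.
    method RunWithKeepAlive(aService: IBatchService);
    begin
      var lStop := new System.Threading.ManualResetEvent(false);
      var lPinger := new System.Threading.Thread(method begin
        while not lStop.WaitOne(60 * 1000) do   // wake once a minute until signalled
          aService.KeepAlive();
      end);
      lPinger.IsBackground := true;
      lPinger.Start();
      try
        aService.RunLongCalculation();          // the existing long synchronous call
      finally
        lStop.Set();                            // stop pinging once the call returns
        lPinger.Join();
      end;
    end;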

My short-term solution will just be to increase the timeout to a large value - we don’t have huge numbers of sessions, so having a handful hanging around for a few hours will not be a problem. Longer term, a more correct approach would be better…

Hello

That’s a good question. What I would do is something like the following (please note that this solution might not be the best for you, as I don’t know the exact architecture of your apps):

  1. Create a unique GUID.
  2. Enqueue the processing task and set its ID to the GUID from (1). Also store the session ID of the client that raised the task.
  3. Return the GUID from (1) to the client.
  4. Once the processing task is completed, raise an event and send the ID of the completed task back to the client.

In my view this gives more flexibility in both the server-side processing and the client-side UI.
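Very roughly, steps 1-4 could look like this inside the service (all names are placeholders, the exact way to read the caller's session ID may differ, and NotifyCompleted would go through whatever event sink you already use for the progress strings):

    // Sketch of steps 1-4; StartCalculation, RunLongCalculation and
    // NotifyCompleted are placeholders for members of your own service.
    method BatchService.StartCalculation(aPlanName: String): Guid;
    begin
      var lJobId := Guid.NewGuid();                    // step 1: unique GUID
      var lSessionId := Session.SessionID;             // step 2: remember who asked (exact accessor may differ)
      System.Threading.Tasks.Task.Run(method begin     // step 2: queue the work (thread pool used here for brevity)
        RunLongCalculation(aPlanName);
        NotifyCompleted(lSessionId, lJobId);           // step 4: raise the completion event with the job ID
      end);
      exit lJobId;                                     // step 3: hand the GUID straight back to the client
    end;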

Another (but similar) option is to start the processing task and periodically poll the server for its completion results, without relying on the server-sent events.
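Client-side, that polling variant could be as simple as this (again just a sketch; IsJobFinished and GetJobResult are hypothetical methods you would add to your own service, and each poll also keeps the session alive as a side effect):

    // Sketch of the polling variant; the three service methods are placeholders.
    method RunJobByPolling(aService: IBatchService; aPlanName: String): String;
    begin
      var lJobId := aService.StartCalculation(aPlanName);   // kick off the long job
      while not aService.IsJobFinished(lJobId) do           // each poll also touches the session
        System.Threading.Thread.Sleep(60 * 1000);           // wait a minute between polls
      exit aService.GetJobResult(lJobId);                    // fetch the completion information
    end;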

Thanks Anton.

I think your final thought is closest to what I was thinking, i.e. call the process asynchronously and then poll the server every so often until the process sends its completion information. Once that arrives, stop polling the server and move on to the next process (if any). The polling of the server wouldn’t need to be looking for a result, just a “keep alive” / heartbeat request.

The processing I’m doing here is very much old-style batch work, i.e. long calculations for logistics planning that take anything from 5 minutes to 5 hours to run through, depending on the size and shape of the customers’ data we’re processing.