Long-running methods (longer than the timeout): how to deal with them?

Hello,

I am implementing a web client for my app, and one of the methods on the server generates reports and returns the result (usually as a PDF).

Some of the reports will be generated quickly, but others will probably take, in the real world, up to several minutes, maybe dozens, so I face the problem of the response taking longer than the connection timeout.

I thought that just moving to the _Async version of the methods would avoid the timeout generated by the transport, but apparently that's not the case (or maybe I did something wrong). I searched here for info and found some posts pointing to the chat examples, which, from what I can see, use another technology, event sinks, which I haven't used before, and I'm not sure they would be a good fit in a very uncontrolled setting (a web app).

So, before investing time there, I wonder: is that the "correct" way to solve this kind of need? It was also mentioned that one could just change the timeout (or disable it) before calling the long-running methods, which seems like an easy way to "solve" this, but I'm not sure it's any better.

For reference: the server is written in C++Builder, and the client in .NET.

Thoughts?

Thanks!

Hi,

I can recommend using Async interfaces.

The changes affect the client side only.

You can see how they are used in the ROPhotos/.NET sample.


Async really just affects the client code semantics (whether the execution flow of the thread you call from blocks and waits, or whether you get an async response later). Under the hood, it still does an HTTP request and waits for a response (just asynchronously).

What you want, I think, is to create an API that can queue off a task to be done on the server and be done with that request, and then later get a response from the server when the result is ready (the server could send the result in that callback, or you could make a second request to collect it).

Event sinks are a good solution for this. They will require a bit more administrative overhead on your end, but in the long run they will give you a better API and UX for long-running server tasks.
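
To make that shape concrete, here is a rough sketch of what such a contract might look like, written as a plain Delphi unit with made-up names (IReportService, StartReport, FetchReport, IReportEvents); this is not RemObjects-generated code, just an illustration of the start/notify/fetch split:

  unit ReportContract; // illustration only: all names here are hypothetical

  interface

  uses
    System.SysUtils; // TBytes

  type
    // The long-running work is only *started* by the first call, so the
    // HTTP request returns immediately and never hits the transport timeout.
    IReportService = interface
      ['{7F8E2D10-0000-4000-8000-000000000001}']
      // Returns at once with a ticket identifying the queued job.
      function StartReport(const ReportName, Params: string): TGUID;
      // Called by the client after it has been told the job is finished.
      function FetchReport(const TaskID: TGUID): TBytes;
    end;

    // Event sink the server fires when a report is ready; the client
    // subscribes to this instead of blocking on the original call.
    IReportEvents = interface
      ['{7F8E2D10-0000-4000-8000-000000000002}']
      procedure OnReportReady(const TaskID: TGUID);
    end;

  implementation

  end.

The point is that StartReport returns immediately with a ticket, so no single HTTP request ever has to outlive the report generation.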


May not apply to you, but I am building a WASM client that generates reports for the user & all done in my client app, with the report in HTML which the user can see on-line & if he desires he can turn it into a Word or PDF document. I don't use the server, but without knowing what you are doing, I would consider, if the server is required, rather than rolling my own, having the server return the data to me so I can build my own HTML report (cuz that is easy for the reports I make). --tex

Hello,

Thanks for the suggestion. Unfortunately we have 20+ years of reports implemented already, and moving the code to the client would imply implementing the same reports in the different clients we intend to support. And using different report engines… not an option for us.

Also, some (most) of the time spent on the reports is actually on running queries against the database, so the timeout might still occur even when doing "all" on the client.

We are going to go with async and event sinks, as suggested by RemObjects; it just needs some reengineering to be able to handle this, which I am still not 100% sure how to do in our current code/architecture.


Please let me know how everything works out when you're satisfied cuz I could have the problem in the future, esp. searching large DBs. Good luck. --tex

Hi. It's a long time since we've done any work with RO, but I have the same requirement (again involving server-side report generation which can take a long time, and where we are hitting the timeout limits of Async calls). A single RO call may result in the generation of, say, 10-50 individual multi-page reports.

I see we need to queue and process this, and in my case the queue needs to be multi-threaded so that processing the queue for one 'call' does not block processing it for the next.

Does the RO framework hold anything to assist me with this, or do I need to go back to basics?
Feels like I need 🙂:

  • Client makes a simple (non-Async) call passing the parameters of the requirement
  • Server adds it to the queue
    [ * Server queue processor spins up threads to process queue entries, with an upper limit on the number of threads initiated ]
    [ * queue status updated ]
  • Event sink notifies the Client when it's complete.

It's the Server bit (i.e. having a queue manager which spins up threads to service each of the queue entries) that I would appreciate guidance on, i.e. whether to do all this from scratch or whether RO can help…

Thanks

And just to confirm about the Async timeout…
I'm using TROSynapseHTTPChannel. What's the maximum timeout I can set (to give the server the longest possible time to complete the task)?

Would it be better to use TROIndySuperHTTPChannel (for example) or TROSynapseSuperHTTPChannel in the context of long-running processes, setting the HTTPRequestTimeout to a very high value? Could that reasonably be set to 1800000 ms (30 mins) to allow the Async call to complete, or is that unworkable?
Thanks in advance…

Hi,

You can implement everything w/o any issues:

  • Client calls a server-side method in the usual way.
  • Server returns a TaskID (= GUID) w/o delay and puts the task into a queue.
  • When the task is done, the server notifies the client about the completion.
  • Client calls another server-side method and gets the result.

You don't need to change any timeouts if you implement the above solution.
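
As a rough illustration of the first two bullets, assuming only plain Delphi RTL pieces (TThreadedQueue from System.Generics.Collections) and hypothetical names (TReportTask, ReportQueue, StartReport), the server-side enqueue could look something like this; wiring it into the actual generated service class is left out:

  unit ReportQueueUnit; // sketch only: hypothetical names, plain RTL

  interface

  uses
    System.SysUtils, System.Generics.Collections;

  type
    // One queued unit of work; the fields are placeholders.
    TReportTask = record
      TaskID: TGUID;
      ReportName: string;
      Params: string;
    end;

  var
    // Thread-safe FIFO shared between the service method and the worker threads.
    ReportQueue: TThreadedQueue<TReportTask>;

  // Body of the (hypothetical) service method: it only enqueues and returns,
  // so the client's request completes in milliseconds regardless of report size.
  function StartReport(const ReportName, Params: string): TGUID;

  implementation

  function StartReport(const ReportName, Params: string): TGUID;
  var
    Task: TReportTask;
  begin
    Task.TaskID := TGUID.NewGuid;
    Task.ReportName := ReportName;
    Task.Params := Params;
    ReportQueue.PushItem(Task);
    Result := Task.TaskID; // the client keeps this and waits for the "done" event
  end;

  initialization
    ReportQueue := TThreadedQueue<TReportTask>.Create(1000); // pops block while empty

  end.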

Hi Evgeny
Thanks for your quick reply. That was what I had in mind.

I was mainly wondering whether there was anything in RO to help me with the actual queue/work implementation in a multi-threaded manner, or whether I need a separate service thread to manage the queue and spin up the threads to carry out the work.

i.e.

  • The RO call's server implementation pushes the task into a thread-safe queue.
  • A separate thread pulls tasks from the queue and creates a new thread to service each requirement.
  • The service does the work and raises an event to notify of completion.

Do I just do that part the 'conventional' way?
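
For what it's worth, that queue/worker part is plain Delphi threading rather than anything RO-specific. Here is a rough sketch, reusing the hypothetical ReportQueue / TReportTask from the sketch a few posts up and a made-up GenerateReport that wraps the existing report code; starting a fixed number of workers gives the upper limit on concurrent reports:

  // Fragment: add System.Classes (TThread) to the uses clause of the unit above.
  type
    // Each worker loops, popping tasks off the shared queue and generating reports.
    TReportWorker = class(TThread)
    protected
      procedure Execute; override;
    end;

  procedure TReportWorker.Execute;
  var
    Task: TReportTask;
  begin
    while not Terminated do
    begin
      Task := ReportQueue.PopItem; // blocks until a task arrives
      if Terminated then
        Exit;
      try
        GenerateReport(Task); // placeholder for the real report code
        // here: raise the "done" event / notify the client for Task.TaskID
      except
        // log and, ideally, send a "failed" notification for Task.TaskID
      end;
    end;
  end;

  // Spin up a bounded pool instead of one thread per request.
  procedure StartWorkers(AMaxWorkers: Integer);
  var
    i: Integer;
  begin
    for i := 1 to AMaxWorkers do
      TReportWorker.Create(False); // False = start running immediately
                                   // (keep references if you need a clean shutdown)
  end;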

Hi,

You should have report generation as a "black box":

  • input: some input parameters
  • output: it should raise some event when the report is done.

As for me, your code should be like:

  • Client calls the server side and passes the initial data.
  • The server-side method creates a task and passes the needed things to the "black box".
  • The server side returns a TaskID to the client side w/o delay.
  • [ report generation is started on the server side ]
  • When the "black box" raises the "done" event, the server side sends an event to the corresponding client.
  • Client calls the server side and receives the result.

The HTTP Chat sample shows how to send an event to the client side from the main form:

  reason := 'You''ve been terminated. Good bye!';
  if InputQuery('Close Reason', 'Reason', reason)
    then (EventRepository as IHTTPChatServerEvents_Writer).OnMandatoryClose(EmptyGUID, GUIDToString(clientid), reason);
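
A minimal sketch of such a "black box", with hypothetical names throughout (TReportBlackBox, OnDone) and using TTask from the Delphi RTL so each report runs on a pooled background thread:

  unit ReportBlackBox; // sketch only: hypothetical names throughout

  interface

  uses
    System.SysUtils, System.Threading;

  type
    // "Black box": give it the input parameters and it fires OnDone
    // (from a background thread) once the report has been generated.
    TReportBlackBox = class
    private
      FOnDone: TProc<TGUID>;
    public
      property OnDone: TProc<TGUID> read FOnDone write FOnDone;
      procedure Generate(const TaskID: TGUID; const ReportName, Params: string);
    end;

  implementation

  procedure TReportBlackBox.Generate(const TaskID: TGUID; const ReportName, Params: string);
  var
    ID: TGUID;
  begin
    ID := TaskID; // copy so the anonymous method captures a local
    // Each report runs on its own pooled thread, so one long report
    // never blocks the service method that started it.
    TTask.Run(
      procedure
      begin
        // ... call the existing report engine here and store the PDF under ID ...
        if Assigned(FOnDone) then
          FOnDone(ID);
      end);
  end;

  end.

In the OnDone handler, the server side would then send the event-sink notification to the right client, following the same (EventRepository as ...Events_Writer) pattern shown in the chat-sample snippet above, with your own event interface in place of IHTTPChatServerEvents_Writer.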

Hi again Evgeny
Thank you.

My black box would be something that spins up a thread which runs the report. Otherwise long reports for an early request will block short reports (well, in fact all reports) for the next request, and that's something I don't want. Reports from different callers will go to different printers according to the details of the caller…

I'll work on something around this.

Cheers