I have LINQ queries in my .NET Core Web API endpoint that take too long to execute, and clients are timing out before execution finishes. Below is my setup:

Table A: 100 million records
Table B: 100 million records
Table C: 100 million records
Table D: 100 million records

Endpoint payload: a list of 100k Table A entities.

Process: to insert the payload into Table A, each record in the payload is checked against Tables A, B, C, and D to make sure certain business-logic constraints hold before insertion. If any constraint is violated, the whole operation fails. That means each of the 100k records must be checked against all four tables.

After a successful check, a record is inserted into Table A and its ID is used to insert a corresponding record into Table B. Then the loop continues.
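For illustration, a minimal sketch of that per-record flow, assuming EF Core; AppDbContext, the entity types, the Key property, and the existence checks are hypothetical stand-ins for the real business rules:

using Microsoft.EntityFrameworkCore;

public async Task InsertPayloadAsync(List<TableA> payload, AppDbContext db)
{
    await using var tx = await db.Database.BeginTransactionAsync();

    foreach (var record in payload)
    {
        // Up to four round trips per record: ~400k queries for a 100k payload.
        bool violated = await db.TableA.AnyAsync(a => a.Key == record.Key)
                     || await db.TableB.AnyAsync(b => b.Key == record.Key)
                     || await db.TableC.AnyAsync(c => c.Key == record.Key)
                     || await db.TableD.AnyAsync(d => d.Key == record.Key);
        if (violated)
            throw new InvalidOperationException("Constraint violated; the whole operation fails.");

        db.TableA.Add(record);
        await db.SaveChangesAsync();                        // materializes record.Id

        db.TableB.Add(new TableB { TableAId = record.Id }); // corresponding Table B row
        await db.SaveChangesAsync();
    }

    await tx.CommitAsync();
}

Written this way, the endpoint issues hundreds of thousands of database round trips inside a single transaction, which is consistent with the client timeouts described.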

Server Configuration:
Windows Server 2019, IIS, SQL Server 2016
8 cores, 16 GB RAM, SSD

Request Pattern:
Sometimes there can be 20 concurrent requests from different clients, each with the above payload volume.

Please help me find the best way to optimize this flow, even if a redesign is needed.
Thanks

What I have tried:

I have tried looping through the payload.
I have also tried checking the constraints first, then looping to insert and storing the IDs in a list, then constructing the payload for Table B and bulk inserting it (roughly the shape sketched below).
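A rough reconstruction of that second attempt, assuming EF Core for the Table A inserts and SqlBulkCopy (Microsoft.Data.SqlClient) for the Table B batch; table, column, and key names are hypothetical:

using System.Data;
using System.Linq;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;

public async Task InsertWithBulkCopyAsync(List<TableA> payload, AppDbContext db,
                                          string connectionString)
{
    // Set-based pre-check: test all payload keys in one query per table
    // instead of one query per record. Note that a 100k-element Contains
    // translates to a very large IN list, which is itself expensive.
    var keys = payload.Select(p => p.Key).ToList();
    bool violated = await db.TableA.AnyAsync(a => keys.Contains(a.Key))
                 || await db.TableB.AnyAsync(b => keys.Contains(b.Key))
                 || await db.TableC.AnyAsync(c => keys.Contains(c.Key))
                 || await db.TableD.AnyAsync(d => keys.Contains(d.Key));
    if (violated)
        throw new InvalidOperationException("Constraint violated; aborting.");

    db.TableA.AddRange(payload);
    await db.SaveChangesAsync();        // EF populates the generated IDs

    // Build the Table B rows from the captured IDs and bulk insert them.
    var tableB = new DataTable();
    tableB.Columns.Add("TableAId", typeof(long));
    foreach (var record in payload)
        tableB.Rows.Add(record.Id);

    using var bulk = new SqlBulkCopy(connectionString)
    {
        DestinationTableName = "dbo.TableB"
    };
    await bulk.WriteToServerAsync(tableB);
}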
Comments
PIEBALDconsult 11-Aug-24 16:13pm    
Don't use LINQ.

1 solution

This is an interesting problem to consider. Suppose you have hard constraints around the system and have to operate with something akin to this process; then the question becomes how to keep the core of the system while fixing the issue of clients getting hard failures when connections time out.

So, have I done something similar to this? Actually, I have, and the way I solved it was with event-based processing. The way to think about this: save the data in one stage as an interim stage, then raise an event to perform the separate processing using something like RabbitMQ. This is the core of the approach: when you have saved the data, give the client a token so that they can come back and check the status of their request later on.

In the event processing, I would look at using parallel processing, if possible, and I would separate the processing into logically separated processes that can be scaled out horizontally. A sketch of this shape follows.
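To make that concrete, here is a minimal sketch of the submit/token flow, assuming ASP.NET Core, EF Core, and the RabbitMQ.Client 6.x API; the routes, the StagedBatch entity, the BatchStatus values, and the queue name are all hypothetical:

using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using RabbitMQ.Client;

[ApiController]
[Route("api/batches")]
public class BatchController : ControllerBase
{
    private readonly AppDbContext _db;
    private readonly IConnection _rabbit;   // long-lived RabbitMQ connection from DI

    public BatchController(AppDbContext db, IConnection rabbit)
    {
        _db = db;
        _rabbit = rabbit;
    }

    [HttpPost]
    public async Task<IActionResult> Submit(List<TableA> payload)
    {
        // 1. Save the raw payload to an interim staging table; this is cheap
        //    compared to validating against four 100M-row tables inline.
        var batch = new StagedBatch
        {
            Token = Guid.NewGuid(),
            Status = BatchStatus.Pending,
            Payload = JsonSerializer.Serialize(payload)
        };
        _db.StagedBatches.Add(batch);
        await _db.SaveChangesAsync();

        // 2. Raise an event so a separate worker performs the checks and inserts.
        using var channel = _rabbit.CreateModel();
        channel.QueueDeclare(queue: "batch-processing", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);
        channel.BasicPublish(exchange: "", routingKey: "batch-processing",
                             basicProperties: null, body: batch.Token.ToByteArray());

        // 3. Give the client a token to check the status of the request later.
        return Accepted(new { token = batch.Token });
    }

    [HttpGet("{token}")]
    public async Task<IActionResult> Status(Guid token)
    {
        var batch = await _db.StagedBatches.SingleOrDefaultAsync(b => b.Token == token);
        return batch is null ? NotFound() : Ok(new { status = batch.Status.ToString() });
    }
}

A separate worker service would consume the queue, run the constraint checks and inserts, and update the batch status that the client polls; that worker is the piece you can parallelize and scale out horizontally.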
 
Comments
Member 10316012 11-Aug-24 17:55pm    
Thanks very much @Pete O'Hanlon. We have thought about using a message broker like RabbitMQ, as you explain. The only obstacle we are looking into is that some of the clients posting this payload are mobile devices in a retail environment. They need to wait for the transaction response; we want to provide a user experience like that of a banking transaction, where you transfer money, wait, and get a success or failure response.
Do you think a queuing architecture with RabbitMQ can work for this scenario?

Note: for other clients that are not mobile devices, we will go with the RabbitMQ implementation.
Pete O'Hanlon 12-Aug-24 3:58am    
What you haven't addressed is how the connection is being established. I could assume that you are using HTTP calls, but you might have decided to adopt something like protobuf. That is going to have a lot of bearing on the techniques you use to keep the connection alive in the first place.
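For the mobile "wait for success or failure" scenario raised above, one way to get a synchronous feel on top of the token flow is for the app to submit and then poll. A minimal client-side sketch, assuming HTTP and the hypothetical routes and response shapes from the controller sketch:

using System.Net.Http.Json;

// Hypothetical response shapes matching the controller sketch above.
public record SubmitResponse(Guid Token);
public record StatusResponse(string Status);

public static class BatchClient
{
    public static async Task<string> SubmitAndWaitAsync(
        HttpClient http, List<TableA> payload, CancellationToken ct)
    {
        var submit = await http.PostAsJsonAsync("api/batches", payload, ct);
        submit.EnsureSuccessStatusCode();
        var token = (await submit.Content
            .ReadFromJsonAsync<SubmitResponse>(cancellationToken: ct))!.Token;

        // Each poll is a short request; the "wait" lives in the app, so no
        // single HTTP connection has to stay open long enough to time out.
        while (true)
        {
            var status = await http.GetFromJsonAsync<StatusResponse>(
                $"api/batches/{token}", ct);
            if (status!.Status != "Pending")
                return status.Status;              // e.g. "Succeeded" or "Failed"
            await Task.Delay(TimeSpan.FromSeconds(2), ct);
        }
    }
}

Polling keeps every individual request short, so the banking-style "wait for the result" experience comes from the app, not from one long-lived request that can time out.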
