
Horizontal scaling .NET C# / NodeJS, MongoDB with Microservices

29 Oct 2015
Scaling an application horizontally using a message queue

Introduction

Many discussions are taking place around IoT (the Internet of Things): how do we handle its data? Are microservices the way to go? Once the devices go online we will have many requests and a lot of data coming to the server, and response time matters for each and every request. A lot of ideas are being shared nowadays for scaling applications horizontally. In this article we will talk about refactoring a .NET application that was developed with a typical monolithic design. By writing this article I am simply sharing a thought process and trying to get feedback on it.

In this article I am assuming you have a basic understanding of .NET / Node.js (JavaScript), message queues and databases (MongoDB / SQL Server). The code base attached to this article was developed using Node.js, but that does not mean this design is specific to .NET or Node.js. We can use any language we are comfortable with, and multiple languages can also coexist.

I would like to get feedback on this article. Please feel free to share your comments, or you can also email me at shettyashwin@outlook.com. If you like this article, please don't forget to rate it.

Background

Let's take an example here for better understanding and then compare it with the architecture designs provided below. For example: we have a device that continuously tracks the heartbeat and blood pressure of patients in a hospital. Heartbeats are sent every 50 seconds and blood pressure is sent every 15 minutes. Patients are associated with a location in the hospital. At regular intervals, patient data is sent to the server via an API exposed on our server. Application users can view these details online from our website and can also monitor a live dashboard. Users can also search for a patient or query any specific detail.

In a typical client-server application (at least in most cases) we develop layers that are tightly mapped to each other. The UI layer communicates with the façade or business layer, and the business layer communicates with the data access layer. In a heavy-load situation we set up a new box that holds the complete application stack (other than the database), with a load balancer that diverts requests based on the load and the incoming traffic. But do all our components require an equal number of additional resources?

Monolithic Design

In the above diagram I have missed presenting some API extensions, but it gives the idea I wish to share.

One more point I want to make here is whether a single technology has all the pluses we need. Java might have some pluses and minuses with its libraries, and so does .NET or any other technology we might be working with. In the future, some technology may also come along as a replacement for both.

Other Approach 

Let's increase the performance requirements: the interval for heartbeats is now moved from 50 seconds to 30 seconds, and the blood pressure interval from 15 minutes to 5 minutes. The amount of data we receive and process has completely doubled, but the user interaction is still the same. Do you really think adding more nodes would be cost effective? Also think about how we deploy new updates while we are continuously receiving heartbeats and blood pressure readings. Losing such critical data could create a life-and-death situation.

Now let's look at a completely different, but more reliable and cost-effective, way to design and handle such issues.

Microservices Design

If you look at the above diagram, the key difference is that each module is a separate service for processing a certain set of requirements. The services are not tightly coupled with the other modules. Let's start looking at each section in more detail to understand its benefits.

Server Code

If you look at the second diagram, we have created each service independently. Each service can run in its own process or thread. It can be hosted, deployed and maintained separately. This gives us the flexibility to maintain services as smaller code blocks. You can take a call on how micro a service should be, based on your product requirements and usage. You will get a better understanding from here.

The second and most important part of the microservices diagram is connecting your services with a message queue. Introducing a message queue will help us plug in new nodes as the need arises. In a typical monolithic design, when a request comes in, data passes from top to bottom and from bottom to top. By introducing a queue we move away from this typical flow.

In the queue approach, the topmost layer receives the data from the UI and runs its validation on the data received. If the received data is valid, it is converted into a serialized message and pushed into the message queue. Let's take two examples here.

Data for heartbeats:

The IoT device in the hospital starts sending information to our API. The request is first authenticated and the data validation parameters are verified. After successful validation, the data is published to the message queue. Subscribers interested in this message pick it up and process it further. In the scenario mentioned earlier we assumed that the data we receive from these IoT devices has doubled. To handle this we increase the number of subscribers that can process this information. It is totally our call whether we need a new box to process this data or whether the same can be handled by the box hosting the application.

If you are using RabbitMQ, the logic for publishing a message is available here.
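As a rough sketch of that publishing step (using the Node.js amqplib client; the queue name heartbeat_data and the message shape are my own illustrative choices, not part of the attached code):

// publish_heartbeat.js - minimal sketch of publishing a validated reading (npm install amqplib)
// The queue name "heartbeat_data" and the message fields are illustrative assumptions.
const amqp = require('amqplib');

async function publishHeartbeat(reading) {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();

    // Durable queue so parked messages survive a broker restart
    await channel.assertQueue('heartbeat_data', { durable: true });

    // Serialize the validated reading and mark the message persistent
    channel.sendToQueue('heartbeat_data',
        Buffer.from(JSON.stringify(reading)),
        { persistent: true });

    await channel.close();
    await connection.close();
}

publishHeartbeat({ patientId: 'P-101', bpm: 72, takenAt: new Date().toISOString() });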

Do not forget to use the acknowledgement mechanism of the message queue. This comes in handy if any of the processes gets killed or runs into an error. Also, since the messages are published to the message queue, if we restart the subscribers or update the logic written in a subscriber, messages will be parked in the queue and processed once the subscribers are back in action.
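A matching subscriber, again only a sketch with assumed names, would acknowledge a message only after it has been processed, so a killed or failing worker leaves the message in the queue for another subscriber; adding more such workers is how we scale out when the incoming data doubles:

// consume_heartbeat.js - sketch of a worker using manual acknowledgements (amqplib)
const amqp = require('amqplib');

async function startWorker() {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();

    await channel.assertQueue('heartbeat_data', { durable: true });
    channel.prefetch(1);   // hand each worker one message at a time

    channel.consume('heartbeat_data', async (msg) => {
        try {
            const reading = JSON.parse(msg.content.toString());
            await saveReading(reading);      // placeholder for the real processing logic
            channel.ack(msg);                // acknowledge only after successful processing
        } catch (err) {
            channel.nack(msg, false, true);  // return the message to the queue on failure
        }
    }, { noAck: false });
}

async function saveReading(reading) { /* write to MongoDB, raise alerts, etc. */ }

startWorker();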

Request for displaying the list of patients:

In a typical data request (without a queue), we take the parameters and call the layer underneath to fetch the required data on a single thread. In this approach, however, we will not call the layer underneath directly; after data validation, a message is pushed into the message queue. The first question that comes to mind is: since this is a GET request, we need to return the response the user asked for. To achieve this we will use the RPC mechanism, which is readily available.

To achieve request/response we need to make sure we follow four steps:

Step 1:

We set the reply_to attribute of the message. The queue named here will be used by the subscriber to return the data generated for the response.

Step 2:

Set a unique correlation id. The subscriber of this message will send back the correlation id along with the data. Once the data is received back, this id needs to be verified to make sure the data received is for the request that was made.

Step 3:

The subscriber of the message should make sure that, after processing the data, the response message is sent back to the queue mentioned in reply_to, along with the correlation id.

Step 4:

After the message is published (as per step 1), the publisher needs to subscribe to the queue mentioned in reply_to to get the response. On receiving a message, the correlation id in the received message needs to be verified against the id generated while publishing the message.

You can increase the number of subscribers if you see more requests coming in.

The code / logic for this is available here.

Note: I am sharing the link to the code on the RabbitMQ website, since these examples are pretty helpful and are also available in multiple languages.
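As a minimal sketch of those four steps with amqplib (the queue name patient_list_rpc and the getPatients helper are illustrative assumptions, not the attached code), the requesting side could look like this:

// rpc_client.js - request side: steps 1, 2 and 4
const amqp = require('amqplib');

async function requestPatientList(filter) {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();

    // Step 1: exclusive, server-named queue that the subscriber will reply to
    const { queue: replyQueue } = await channel.assertQueue('', { exclusive: true });

    // Step 2: unique correlation id for this request
    const correlationId = Math.random().toString() + Math.random().toString();

    return new Promise((resolve) => {
        channel.consume(replyQueue, (msg) => {
            // Step 4: accept only the response that matches our correlation id
            if (msg.properties.correlationId === correlationId) {
                resolve(JSON.parse(msg.content.toString()));
                connection.close();
            }
        }, { noAck: true });

        channel.sendToQueue('patient_list_rpc',
            Buffer.from(JSON.stringify(filter)),
            { correlationId: correlationId, replyTo: replyQueue });
    });
}

And the subscriber side handles step 3:

// rpc_server.js - subscriber side: step 3
const amqp = require('amqplib');

async function startRpcServer() {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();
    await channel.assertQueue('patient_list_rpc', { durable: false });

    channel.consume('patient_list_rpc', async (msg) => {
        const filter = JSON.parse(msg.content.toString());
        const patients = await getPatients(filter);   // placeholder data-access call

        // Step 3: reply on the queue named in reply_to, echoing the correlation id
        channel.sendToQueue(msg.properties.replyTo,
            Buffer.from(JSON.stringify(patients)),
            { correlationId: msg.properties.correlationId });
        channel.ack(msg);
    });
}

async function getPatients(filter) { return []; }   // stub for illustration

startRpcServer();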

Database

In this approach I have considered a MongoDB database. Based on the scenario you are in, you can choose the number of instances you require and the database which suits your product (relational / non-relational). In the above diagram the key difference is that the instances are part of the same cluster to handle heavy load. Whenever a CRUD operation is performed, it is synced across all the instances.

Getting MongoDB Configured on a Server (Windows)

Download the latest MongoDB setup from here. For easy access I would suggest configuring the bin folder path of the MongoDB installation in the Path variable (Windows environment variable). This will allow you to quickly access the MongoDB commands / executables. Once you have configured the MongoDB path in the environment variable, create three folders, data, log and config, inside a parent folder.

Data folder: will store the data files of the MongoDB instance.

Log folder: will store the log files for the MongoDB instance.

Config folder: will store the config file for running the MongoDB instance.

Mongod.conf File Details

dbpath = [Data folder path in which data files will be created for current Instance]

port = [Port on which MongoDB will listen]

logpath = [Log file path in which log file for current instance will be created]
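For example, a config file for a local test instance might look like this (the folder paths and port number are only placeholders):

# mongod.conf for the first test instance (values are illustrative)
dbpath = C:\mongo\node1\data
port = 27018
logpath = C:\mongo\node1\log\mongod.log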

If you are trying to create a MongoDB cluster on the same server (for R&D), then create multiple copies of the parent folder. Make sure you update the log and data folder paths in all the config files. Also, each config file should have a different port number.

To run an instance of MongoDB in batch mode, execute the following command:

mongod --config [Physical path of the mongod.conf file created in the config folder]
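For example, if the config file above were saved as C:\mongo\node1\config\mongod.conf (a made-up path for this walkthrough), the command would be:

mongod --config C:\mongo\node1\config\mongod.conf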

Configuring MongoDB in a cluster is quick and simple.

Step One:

First you need to define the replica set name in the database config file. This needs to be set in all the database configs that are part of this cluster.

replSet=[NameOfReplication]

Step Two:

Configuring the list of MongoDB servers

If you have configured the mongodb bin folder in the Path environment variable, open a command prompt, type mongo and press Enter. This will automatically connect you to the default MongoDB instance running on port 27017. To fetch the cluster information, type rs.config() or rs.status() and press Enter. The mongo shell will list the cluster details if it is already configured. To add the first cluster member you can use rs.initiate():

rs.initiate({
   "_id" : "[NameOfReplication]",
   "version" : 1,
   "members" : [
      {
         "_id" : 1,
         "host" : "[MongoDBServerName:Port]"
      }
   ]
})

For the second or any additional server you can use rs.add():

rs.add("[MongoDBServerName:port]")

Now you have a complete set of MongoDB instances running in a cluster.
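From the application side, a service can then connect to the replica set by listing the members and the replication set name in the connection string. A minimal sketch with the official Node.js mongodb driver (host names, ports and the database name are placeholders):

// db.js - sketch of connecting a service to the replica set (npm install mongodb)
const { MongoClient } = require('mongodb');

// List the cluster members and pass the replSet name configured earlier
const uri = 'mongodb://server1:27018,server2:27019/?replicaSet=NameOfReplication';

async function getDatabase() {
    const client = new MongoClient(uri);
    await client.connect();
    return client.db('hospital');   // illustrative database name
}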

For Elasticsearch integration, you can use the message queue to get data change updates and then update Elasticsearch accordingly.
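One hedged way to wire that up, assuming each service publishes a message to a data_changes queue whenever it writes to MongoDB (the queue name and the indexing stub below are illustrative):

// search_sync.js - sketch of a subscriber that forwards data changes to Elasticsearch
const amqp = require('amqplib');

async function startSearchSync() {
    const connection = await amqp.connect('amqp://localhost');
    const channel = await connection.createChannel();
    await channel.assertQueue('data_changes', { durable: true });

    channel.consume('data_changes', async (msg) => {
        const change = JSON.parse(msg.content.toString());
        await indexInElasticsearch(change);   // placeholder for the Elasticsearch client call
        channel.ack(msg);
    }, { noAck: false });
}

async function indexInElasticsearch(change) {
    // Stub for illustration; the real implementation would index or update
    // the changed document using an Elasticsearch client.
}

startSearchSync();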

Points of Interest

Microservices: http://martinfowler.com/articles/microservices.html

RabbitMQ Tutorials: https://www.rabbitmq.com/getstarted.html

MongoDB: https://www.mongodb.org/ and MongoDB University training: https://university.mongodb.com/

 

 
