
Application Architecture - Driving Forces, Approaches, and Implementation Considerations

9 Dec 2008 · CPOL · 23 min read
This article discusses various driving forces, approaches, and implementation considerations involved in deciding on an application architecture. There is no rocket science here - the whole objective is to help you decide on an architecture that suits your scenario.

Introduction

First of all, there is no 'universally correct' architecture that suits every scenario. The architecture can and should vary, based on the project's driving forces and requirements.

This article is all about Application Architecture. I have seen people shooting questions like "Do you think we can use dependency injection?", "How do we de-couple the layers?", "Where do I really need to put the business logic?", "Well, do we need to have separate business objects and DTO objects?", "Shall we go with Service Oriented Architecture?" etc. The answer is "Yes or no, depending on your scenario". Once you have the project structure in place, and once developers know where to write the code for a particular module or functionality, it is pretty easy to go ahead with the project.

This article discusses some practical concepts you may employ when you decide the architecture of your solution. There is no rocket science here - the whole objective is to help you decide on an architecture that suits your scenario. You may also find points in this article you disagree with. You are free to do that, and you are welcome to post your comments to suggest an alternative approach or to trigger a good discussion :). Note that some technology related scenarios lean towards the Microsoft side, because I mainly work in the .NET space.


You may also visit my website http://amazedsaint.blogspot.com for a few more interesting recipes.

Decision parameters

Here is a minimal set of decision parameters for choosing your platform:

Client choice/Preference

  • Step 1: Client's choice/alignment with technology platform.
  • Step 2: Existing infrastructure the client has in place to support any of the suggested technology platforms.

Cost/Budget

  • Step 1: Implications on hardware and software requirements (like hosting platform cost, infrastructure cost etc.).
  • Step 2: Implications on resource requirements (availability of resources, training cost etc.).
  • Step 3: Cost comparison of in-house development vs. outsourcing, for each technology.

Existing dependencies and Code base

  • Step 1: Is this an end to end project, re-write, take over, or a migration?
  • Step 2: If not end to end, do you have any existing dependencies and code base?

Platform maturity

  • Step 1: Find out whether the suggested platform can cover all requirements and future requirements.
  • Step 2: Can it satisfy interfacing requirements? (E.g., RoR is not mature for handling Web Services).

Available frameworks

  • Step 1: Are there any existing frameworks/Open Source tools already available? (E.g.: OSCommerce/PHP for E-Com).
  • Step 2: If yes, what is the investment for building expertise in this existing framework to reduce TCO?
  • Step 3: Do you have any existing frameworks/reusable components to reduce TCO?

Complexity and Size (estimated using Function Points or a similar measure)

  • Step 1: Estimated size of the project.
  • Step 2: What is the cost comparison between platforms (based on finding out the time to execute for each platform based on the total FP)?
  • Step 3: Do you have any existing practices successfully applied earlier to execute the project in a technology platform?

Project model (Time and Expense/Fixed bid etc.)

  • Step 1: If Fixed Bid, which platform is suitable for quickest development and deployment?
  • Step 2: Do you have any existing practices successfully applied earlier to execute the project in a technology platform?

Project type (Desktop/Web App/Services etc.)

  • Step 1: Feasibility analysis of candidate technology platforms for this project (e.g., RoR is not suited for a Windows Service application).
  • Step 2: Eliminate technologies that are not feasible.

Interfaces to deal with

  • Step 1: Feasibility analysis of candidate technology platforms for interface support (e.g., if you are consuming REST services, RoR has a weightage).
  • Step 2: Identify platform specific interface requirements.

Expected scalability and Performance

  • Step 1: Identify support for OLAP/OLTP scenarios in the project.
  • Step 2: Identify minimum response time requirements.
  • Step 3: Weightage based on POCs constructed under various technologies or existing data or previous experience.

Internal expertise availability

  • Step 1: Weightage based on our internal expertise and resource availability.
  • Step 2: Training costs incurred.

Driving forces

First things first: I need to break the myth that Application Architecture and Design are driven by technology. Technology should be chosen only to implement an architecture in the best possible way, and not vice versa. Also, architecture, in my opinion, is an ongoing process. The objective of architecture is to bring the solution closer and closer to user expectations, and to unify diverse systems to provide standardization. Architecture is an ongoing process because user expectations change over time, as users strive for better systems, models, and experiences.

Some factors that may affect your architecture include

  • Politics within/between the organizations and stakeholders (believe me, this is one of the critical factors)
  • Available resources and in-house expertise
  • How the system is expected to be consumed, and deployment considerations
  • Non-functional requirements like Scalability, Availability, Performance, Security etc.
  • Project related driving forces like Cost, Scope, Quality, and Time
  • Re-usable components/entities available
  • Various risk factors (like unseen client expectations and change requests)

If you are an architect, you know that you are already squeezed up between one or more of the above forces. Here we go :)

Politics

Have you ever been involved in a migration project? You are on the receiving side!! In most cases, the guys on the other (legacy) side - i.e., the guys who are expected to support you in a big way by transferring knowledge about the existing system - are reluctant to do so, simply because once the migration or takeover is done, they may go out of a job, or lose their significance in the organization.

So, in the end, you are forced to move to the design phase without actually understanding the existing system to be migrated. An alternative might be to go for an iterative model - but still, this is just one example of how politics can be a major factor affecting your architecture.

Available resources and in-house expertise

I believe that architecting a solution should be a team based activity. At the least, there should be an architecture approval workflow to ensure that the architecture handles basic non-functional requirements like scalability, extensibility, security etc.

Building in-house expertise and documenting the success and failure factors are important so that expertise can be re-used. To do this, one suggested solution is to form focus groups, on areas like security, performance etc., to ensure that best practices are followed in the architecture and design. These teams should also evaluate the architectures of successfully implemented projects, to abstract best practices from the same. The success in re-using expertise across teams and building in-house expertise is a major factor that affects the final architecture.

How the system is expected to be consumed, and deployment considerations

How the system is expected to be consumed by users is another major factor that affects the final picture. Over the years, we have seen various consumption and deployment models like Client Server, Distributed, and Service Oriented, to name a few.

  • Example - 1: You are expected to architect a web based system to show the current temperature to the user (very simple, huh?). The user is expected to view the temperature using his browser, by visiting your website. After a month, the client you are working for needs to sell this as a service to other websites too, so that other websites can also use their service and show the temperature in their web pages.
  • Example - 2: You are expected to architect a web based shopping cart. Right now, you may deploy the Presentation Layer (UI) and Business Layer together in one server (i.e., the web server and application server will be one box), but you have to provide an option so that in future, when the user load increases, the business logic can be deployed in another machine (a separate application server).
  • Example - 3: You are expected to architect a web based shopping cart. Right now, you may be using application level caching. But in future, when your load increases, and when you go for a web farm scenario, how do you make sure that the cache is shared across all servers?

Non-functional requirements like Scalability, Availability, Performance, Security etc

Non-functional requirements are orphan kids - ignored by both the clients and the delivery team. Often, a performance tuning initiative or security push initiative is kicked off only during the final phase of the project, as a life saver.

Though NFRs are mostly ignored, they are among the critical factors that have a direct impact on choosing the correct architecture. Wrong decisions are introduced into an architecture mostly because NFRs are ignored at the earlier stages.

As I mentioned earlier, focus groups can contribute in a big way, to ensure that NFRs are considered properly during each phase of the development life cycle.

Project related driving forces like Cost, Scope, Quality, and Time

Some time back, when I was talking to one of my senior managers, he pointed out that it is not possible to give all four decision factors (Cost, Scope, Quality, and Time) to the client to decide on. The client is expected to compromise on one factor - not because the client has to, but because it is simply impossible to keep all four factors fixed when a project is considered. If someone chooses scope, quality, and time, then cost will be the varying factor. If someone chooses scope, quality, and cost, time will be the varying factor.

Unfortunately, this is not conveyed properly to the client, and the client will always end up choosing cost, scope, and timeframe. As a result, this hits the quality of the project - because quality is the only remaining factor.

The architect will end up compromising on factors like extensibility and other NFRs to meet deadlines.

Reusable components/Entities available

Reusability is the key. Shameless reuse of quality components and best practices should be enforced in the design and architecture stage. The availability of reusable components, factories, and best practices is one major factor that affects the architecture.

It is the responsibility of the organization to ensure that experience and best practices are documented, transferred, and reused properly.

Unseen client expectations and change requests

One major success criterion for an architecture is that it should not break when new changes are accommodated. Having said that, this is not the case in most scenarios. These days, most companies are very agile in their strategies; hence, client companies may raise change requests often, triggered by changes in their business model.

The requirement analysis may miss client expectations, which may cause serious architecture level issues later. Even worse, clients may expect the product to satisfy all their needs by default. These expectations may also trigger change requests later.

Approach - The thought process

Once you have a bird's-eye view of what needs to be done, a typical starting point is to identify the tiers you need to have. For example, for a typical application, you may end up with something like:

Data Tier <--> Data Access Tier <--> Business Tier <--> Presentation Logic Tier <--> GUI

Now, a lot of questions will start arising. You'll start thinking about how these layers communicate with each other, how the data is passed up and down, how the data is persisted, how the layers are connected etc. Of course, the answers depend largely on the requirements and other driving forces we discussed above.

You might also think about various services you may use across tiers - like logging, caching etc.

Along with this, you may also think about how factors like performance, availability, security etc., are relevant and applicable in each layer.

Tiers in the system

Let us have a brief look at each of the tiers first. We'll go bottom up:

Data tier

The data tier is where you persist the data - mostly the database. The data tier is expected to provide functionalities like storage, retrieval, indexing, and querying of data. When you consider the data tier - as I mentioned earlier - it is important that factors like transaction support, scalability, availability, security etc., are considered.

Data Access tier

Here, you have the code that interfaces with your data tier. Two major factors you may need to take care of are scalability and response time. It is imperative to have proper strategies in place to make sure that the response time is good enough.

For example, you may need to cache the data to avoid hitting the database each time. Another popular strategy is to pool your connections to the database. Normally, the Data Access tier is stateless.
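
To make the caching idea concrete, here is a minimal sketch in C#. All type names here (`ProductRepository` and so on) are invented for illustration; the dictionary stands in for a real database call:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stateless data access class. The dictionary stands in
// for a real database call (e.g., a stored procedure over a pooled connection).
public class ProductRepository
{
    private static readonly Dictionary<int, string> Db =
        new Dictionary<int, string> { { 1, "Keyboard" }, { 2, "Mouse" } };

    public int DatabaseHits { get; private set; }

    public virtual string GetProductName(int id)
    {
        DatabaseHits++;                 // count round trips, for illustration
        return Db[id];
    }
}

// A caching subclass: same interface, but repeated reads for the
// same key never touch the underlying "database" again.
public class CachedProductRepository : ProductRepository
{
    private readonly Dictionary<int, string> cache = new Dictionary<int, string>();

    public override string GetProductName(int id)
    {
        if (!cache.TryGetValue(id, out var name))
        {
            name = base.GetProductName(id);
            cache[id] = name;
        }
        return name;
    }
}
```

A caller that reads the same product three times causes only one "database" hit; that is the whole point of caching at this tier.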

Business tier

Sure enough, your business tier is there to apply business rules and transformations on the data, and to perform calculations. You may include business rule based validations in your business tier. In most scenarios, you might have a set of business objects for your business tier to operate on. Ideally, your business objects represent a domain specific model of your data.

The business tier receives data from the presentation tier in the form of business objects, performs required business rule validations and transformations, and calls the required methods in the data access tier to perform operations like storing and fetching. In most cases you may need state management in the business tier. For example, in a bookstore application, the business tier handles logic like adding books to the cart, calculating total cost etc.

It is certainly possible to implement services like caching and object pooling in the business tier also, based on the scenario.
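
For the bookstore example mentioned above, the business tier logic might look something like this minimal sketch. The types and the discount rule are invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative business object.
public class Book
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

// Business tier class holding cart state and business rules.
public class ShoppingCart
{
    private readonly List<Book> items = new List<Book>();

    public void AddBook(Book book)
    {
        // A business-rule validation: no free or negatively priced books.
        if (book.Price <= 0)
            throw new ArgumentException("A book must have a positive price.");
        items.Add(book);
    }

    // Business calculation: total cost, with a flat 10% discount on
    // orders of three or more books (an invented rule for illustration).
    public decimal TotalCost()
    {
        decimal total = items.Sum(b => b.Price);
        return items.Count >= 3 ? total * 0.9m : total;
    }
}
```

Note that the cart holds state (the item list) between calls - exactly the kind of state management the business tier often needs.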

Some other practical thoughts - from an architect's point of view, it is ideal to place all your business logic in the business layer. But sometimes, you have to make trade-offs for performance - and move a little bit of business logic into your Stored Procedures :). This decision depends on several factors, the primary one being the size of the data your application is expected to process. However, the key is to make sure that your Stored Procedures can be ported to another database platform in the future, with minimum overhead. Most database platforms are well evolved - and can handle load much better than the best possible business layer design and algorithms you may use for processing data.

Ideally, you should distribute the load wisely between all the tiers involved.

Presentation Logic tier

The presentation logic tier handles the presentation logic. For example, for a web application, the presentation tier holds the ASP or JSP pages. For a Windows application, it consists of Windows Forms or whatever else the GUI layer is able to display.

Again, the presentation logic tier may also implement services like caching. The page output caching for ASPX pages is a good example.

GUI

The GUI is something the user interacts with. In a web application, the GUI is what the user sees in his browser. It is interesting to understand why the presentation tier is actually split into presentation logic and GUI. This is because, in some cases, the presentation logic sits on the server, and the GUI layer will be on the client side. This is particularly true in the case of a web application, where the ASP or JSP pages are rendered on the server and the resultant markup, like HTML or WML, is sent to the client.

A different approach - Model View Controller

A common approach is to split an application into various tiers, as we just discussed. However, a different approach is to view the entire application as a Model, a View, and a Controller (the MVC architectural pattern). This is particularly true for web applications. In this section, I am not going to explain MVC inside out - the intention is just to convey that MVC is an alternate way of modeling an application.

Screenshot - mvc.gif

The Model

In MVC, the 'Model' represents the domain specific representation of the data. MVC does not say much about how data access is performed within the model - the model is expected to wrap the data access and object persistence.

The View

The Presentation tier in MVC is split up into the 'View' and the 'Controller'. The View is expected to render the Model in a meaningful way to the user. For example, in a web application, if you have a model which consists of a set of Employees, the logic in the view may iterate through the employees and emit HTML code to display the list of employees in the browser. It is perfectly possible for multiple views to exist for one model - i.e., the same employee collection can be rendered as a bulleted list by another view.

The Controller

The Controller handles user actions and gestures, and responds to user events. For example, when a user clicks the 'new' button to add a new employee, the controller for that action is invoked. The controller will then make changes to the employee model. The view will then render the modified employee model to the display, so that the user can see the new employee in the employee list.

In some MVC implementations, the business logic is wrapped in the Model, which is perfectly possible. On the other hand, some developers may choose to implement business logic in the event handlers within the controller. It is left to the developer to decide where to put the business logic.
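
The division of responsibilities described above can be compressed into a toy sketch. All type names here are invented: a model holding employees, a view that renders them as an HTML list, and a controller that reacts to an 'add' action by updating the model:

```csharp
using System.Collections.Generic;
using System.Text;

// Model: the domain specific representation of the data.
public class EmployeeModel
{
    public List<string> Employees { get; } = new List<string> { "Alice" };
}

// View: renders the model; several views could exist for the same model.
public class EmployeeListView
{
    public string Render(EmployeeModel model)
    {
        var sb = new StringBuilder("<ul>");
        foreach (var name in model.Employees)
            sb.Append("<li>").Append(name).Append("</li>");
        return sb.Append("</ul>").ToString();
    }
}

// Controller: handles the user action and updates the model.
public class EmployeeController
{
    private readonly EmployeeModel model;
    public EmployeeController(EmployeeModel model) { this.model = model; }

    public void AddEmployee(string name) => model.Employees.Add(name);
}
```

A second view (say, one emitting a comma separated list) could render the very same model without touching the controller - which is the decoupling MVC is after.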

A few additional notes: in the Web Client Software Factory released by the patterns & practices group at Microsoft, they introduced a variant - Model View Presenter. But it seems that Microsoft is heading for a pure Model View Controller framework for ASP.NET. See Scott's blog entry regarding the same for some interesting reading.

Implementation considerations

The next step is to consolidate all your 'architecture thoughts' into a high level wireframe (in your mind). Don't confuse this with the high level design. Even before you begin the high level design, you should have answers to some basic questions. For example, here is a subset of questions you may end up asking yourself:

  • What are the dependencies for the tiers involved?
  • How are the tiers plumbed to each other?
  • Where are the extension points?
  • What services should I use, and where?
  • What are my persistence mechanisms?

Dependencies for the tiers involved

We discussed the tiers involved in the system - but we are still not close enough to real world scenarios. For example, in the above definition of the business tier, we mentioned that the business tier sits between the data access tier and the presentation tier.

However, in a real world scenario, your business tier may be connected to other sub systems - i.e., your business tier may be subscribing to other services for rule processing. For example, if you are developing a flight ticket booking system, your business tier will communicate with various Web Services to perform operations like querying flight timings, booking a ticket etc. In such scenarios, you may need to use facades to access your sub systems and to hide their complexity.
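
A facade for the flight booking example might be sketched as follows. The subsystem classes here (`FlightScheduleService`, `BookingService`) are hypothetical stand-ins for real Web Service proxies:

```csharp
using System;

// Hypothetical subsystem proxies the business tier depends on.
public class FlightScheduleService
{
    public string GetDeparture(string flight) => flight == "AI101" ? "09:30" : "unknown";
}

public class BookingService
{
    public string Book(string flight, string passenger) => $"Booked {passenger} on {flight}";
}

// The facade exposes one simple operation and hides the
// coordination between the two subsystems from its callers.
public class TicketBookingFacade
{
    private readonly FlightScheduleService schedule = new FlightScheduleService();
    private readonly BookingService booking = new BookingService();

    public string BookTicket(string flight, string passenger)
    {
        // Validate against one subsystem before calling the other.
        if (schedule.GetDeparture(flight) == "unknown")
            throw new InvalidOperationException("No such flight.");
        return booking.Book(flight, passenger);
    }
}
```

Callers see a single `BookTicket` call; which services are consulted, and in what order, stays an internal detail of the facade.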

Another common scenario is that you may need to separate the logic of resolving the dependency between two layers from your actual implementation. For example, assume that you need to invoke a Web Service to book a ticket from your application. The job of identifying "which" service to use and "where" to locate it can be separated from your application, using techniques like dependency injection. Your application then only needs to know "how" to communicate with the service. As a simple scenario, you may consider injecting the dependency of the ticket booking Web Service into your business layer class.

There are various ways to inject a dependency; one of them is property based dependency injection. Your business layer class may have a property of type ITicketBookingService. When this business layer class is instantiated, an instance of the proxy class of the ticket booking Web Service (sure enough, this proxy class should implement your ITicketBookingService interface) is created and assigned (injected) to this property. A detailed discussion of DI is out of the scope of this article. The Spring.NET framework provides excellent dependency injection capability. Also, Microsoft ObjectBuilder (which is used in factories like the Web Client Software Factory) can be used for the same. (I still wonder why there is no separate 'Dependency Injection Application Block' in the Enterprise Library.)
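
Property based injection, as described, might be sketched like this. The `ITicketBookingService` interface and the fake implementation are illustrative; in a real system, a proxy class generated from the service's WSDL would play that role, and a container such as Spring.NET or ObjectBuilder would do the wiring:

```csharp
using System;

public interface ITicketBookingService
{
    string BookTicket(string flight);
}

// Business layer class: it only knows *how* to talk to the service,
// not which concrete service it is or where it lives.
public class BookingManager
{
    // The dependency is injected through this property - by a container
    // or, as in the test below, by simple hand-wiring code.
    public ITicketBookingService TicketService { get; set; }

    public string Book(string flight)
    {
        if (TicketService == null)
            throw new InvalidOperationException("No booking service injected.");
        return TicketService.BookTicket(flight);
    }
}

// A stand-in implementation; in production this would be the Web Service proxy.
public class FakeBookingService : ITicketBookingService
{
    public string BookTicket(string flight) => "OK:" + flight;
}
```

Swapping the Web Service for another provider then means injecting a different implementation - no change to `BookingManager` itself.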

How the tiers are plumbed to each other

Let us start with an example. Right now, you are developing a web application, and your presentation layer consumes your business layer classes directly. You simply create an object of a business layer class in the code-behind of your ASPX page, and call methods on it to pass data up and down. Development and QA are almost over, and you are waiting for approval to move the project to production.

One fine morning, you find a mail in your inbox from your technical manager - saying something like, "Dude, let us move the application to production. But to handle the load, we'll deploy the presentation tier on the web server, and the business logic on a separate app server." You are in a soup, because your architecture does not support distributed logic - you simply can't deploy these layers separately.

Hence, as I mentioned earlier, it is imperative to consider factors like this during the initial stage. One common approach to bring distribution of logic into the picture is to introduce a proxy tier between the layers. For example, you may put a proxy tier between the business tier and the presentation logic tier. The objective of the proxy tier is to expose the functionalities of one tier so that the next tier can access them - to facilitate distributed computing:

Business Tier <--> Proxy Tier <--> Presentation Logic Tier

You may use protocols like SOAP, RMI, DCOM, CORBA, etc., for your proxy tier, based on the scenario. For example, if the presentation logic tier and business tier are developed in .NET, and both are expected to be deployed in the same LAN, you may go for .NET Remoting. If the business logic needs to be exposed as services and needs to be accessed outside the enterprise domain by heterogeneous clients, you may go for SOAP, and so on.

When you use classic remoting technologies like DCOM and RMI, the contract (the methods, and the data types used by those methods) between two tiers is pre-defined. However, in Service Oriented systems, the client can dynamically discover the contract and use it. For example, a Web Service will expose its contract using the Web Service Description Language (WSDL), which provides efficient decoupling.

Sometimes, you may need to define separate data contracts or data transfer objects (DTOs) that can be sent out and received back by the proxy tier. Internally, the proxy tier then needs to convert the DTO objects to Business Objects, and vice versa. Though this conversion is an overhead, it makes sure that the contract won't break even if your domain object model changes.
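
The DTO conversion can be sketched in a few lines. The `Customer` and `CustomerDto` types below are invented for illustration; the point is that the wire contract (a flat `FullName`) stays stable even if the domain model changes shape:

```csharp
// Domain object used inside the business tier.
public class Customer
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Data transfer object: the flat contract the proxy tier sends over the wire.
public class CustomerDto
{
    public string FullName { get; set; }
}

// The proxy tier converts between the two representations.
public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer c) =>
        new CustomerDto { FullName = c.FirstName + " " + c.LastName };

    public static Customer ToDomain(CustomerDto dto)
    {
        var parts = dto.FullName.Split(' ');
        return new Customer { FirstName = parts[0], LastName = parts[1] };
    }
}
```

If the domain model later splits `FirstName` into several fields, only the mapper changes - consumers of `CustomerDto` are unaffected.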

Where are the extension points?

Ideally, your application should have enough extension points wherever possible. For example, if you are communicating with three services for a single purpose, tomorrow you should be able to add a fourth service to your application without any code change in the core framework.

Screenshot - provider.jpg

A classic way of doing this is using the Provider pattern to define extensibility points. A provider based approach will also help you solve various scaling issues in the future. For example, assume that you are caching data in your web application. Normally, you use the default cache, which can cache data only in the current application domain. Tomorrow, if you move to a web farm scenario, where you have to use a shared/replicated cache across your web servers, you are in a soup. So, when you consider using a cache, consider using a provider model, so that you can change the cache provider at a later stage if required. Another example is making certain parts of your system plug-in based.
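
For the cache example, a hand-rolled provider model might look like this minimal sketch (the interface and type names are invented; this is not the ASP.NET `ProviderBase` machinery, just the underlying idea):

```csharp
using System.Collections.Generic;

// The extension point: any cache (in-process, distributed, ...) implements this.
public interface ICacheProvider
{
    void Put(string key, object value);
    object Get(string key);   // returns null when the key is absent
}

// Default provider: a plain in-process dictionary.
public class InMemoryCacheProvider : ICacheProvider
{
    private readonly Dictionary<string, object> store = new Dictionary<string, object>();
    public void Put(string key, object value) => store[key] = value;
    public object Get(string key) => store.TryGetValue(key, out var v) ? v : null;
}

// Application code talks only to the interface. Which provider is plugged in
// here could later be read from a configuration file instead of hard-coded.
public static class Cache
{
    public static ICacheProvider Provider { get; set; } = new InMemoryCacheProvider();
}
```

Moving to a web farm then means writing, say, a distributed-cache provider and assigning it to `Cache.Provider` (or naming it in configuration) - with no change to the code that calls `Put` and `Get`.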

You may even go for a configuration file based provider model. From .NET 2.0 onwards, ASP.NET offers provider based extensibility for various functionalities like membership management, role management, etc. I wrote an article a long time back about creating a simple custom provider framework[^]. You might also have a look at the Microsoft Provider Toolkit here[^].

You may also be interested to see how the Microsoft Enterprise Library uses the provider concept for providing configuration file driven features.

Deciding where you have the extension points during the initial stage itself is the key to ensure extensibility.

What services should I use, and where?

Various services like caching, logging, and security (authentication and authorization) need to be used in multiple tiers. This decision is pretty critical, mainly to ensure that various non-functional requirements are met properly. As the stakeholders might not even be aware of the overhead required to implement such services, it is pretty important to communicate this to them during the initial phase itself.

The key is reusability. Most of these services are common across projects. Hence, if you have an organization level framework consisting of these services, it is going to reduce the overhead in a big way.

What are the persistence mechanisms?

Persisting data and querying it back is a major consideration. Some people prefer using ORM (Object Relational Mapping) frameworks like Hibernate or NHibernate (the .NET port of Hibernate) - www.nhibernate.org[^] - others may follow the classic way of writing Stored Procedures and then using data tier classes to consume them.

Subsonic[^] is a .NET framework which provides an ActiveRecord kind of mechanism, much like the Ruby On Rails framework.

Another approach is to generate strongly typed classes based on database tables, using some code generation technique. The Microsoft Web Service Software Factory provides a few functionalities like this. There are also other code generators and meta coding frameworks - I've used My Generation[^] in a couple of projects.

Conclusion

We just discussed some driving forces, approaches, and considerations involved in architecting a new system. This is in no way a complete list - the whole objective is to help you analyze the thought process involved in architecting a solution.

As a next step, you may read my article on identifying entities and design problems in a system, and solving them by applying various Design Patterns.

Visit my website http://amazedsaint.blogspot.com - for more articles, and .NET and Design Patterns recipes.

Also, here is a list of my other articles published in CodeProject.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)