
Single Responsibility Principle on Different Levels of Abstraction

14 May 2017

In the early 2000s, Robert C. Martin, better known as Uncle Bob, came up with the first five principles of Object Oriented Programming and Design – the SOLID principles. SOLID is an apt acronym in which each letter stands for a different principle:

  • S – Single Responsibility Principle
  • O – Open/Closed Principle
  • L – Liskov Substitution Principle
  • I – Interface Segregation Principle
  • D – Dependency Inversion Principle

These principles are the backbone of Object Oriented Design and hold the key to crafting high-quality, easily maintainable code. In the first of the five, the Single Responsibility Principle, Uncle Bob combined ideas from two papers, "On the Criteria To Be Used in Decomposing Systems into Modules" and "On the Role of Scientific Thought", and defined SRP as follows:

The Single Responsibility Principle (SRP) states that each software module should have one and only one reason to change.


“Reason to change” in that definition essentially means responsibility: every software module or class should encapsulate a single piece of functionality that the software provides. This sounds logical and straightforward, but over the years in the industry I have seen how hard it is to get right. That can be due to legacy code, lack of motivation, lack of domain knowledge, or simply our natural tendency to lump responsibilities together. Separating responsibilities from one another, and finding a way for them to coexist, is much of what software design is really about.

Still, SRP is such a powerful and broadly applicable concept that, if you look closely, it permeates many aspects of a software developer's work. So I wanted to go more in-depth into this principle and its different variations, and give a little more insight into its various applications.

Object Oriented Programming

So how does that look in practice? Let’s take a look at the example:

C#
int currentValue = 0;
int type = 0;

using (var sqlConnection = new SqlConnection(connectionString))
{
    sqlConnection.Open();

    try
    {
        using (var readCommand = new SqlCommand("select * from Entity", sqlConnection))
        {
            var reader = readCommand.ExecuteReader();

            while (reader.Read())
            {
                currentValue = reader.GetInt32(0);
                type = reader.GetInt32(1);
            }

            reader.Close();
        }

        using (var updateCommand = new SqlCommand(String.Format(
            "update Entity set Data = {0} where Data = {1}", newValue, currentValue), sqlConnection))
        {
            updateCommand.ExecuteNonQuery();
        }

        Console.WriteLine("Data successfully modified!");
        Console.ReadLine();
    }
    catch (Exception e)
    {
        Console.WriteLine("Failed to modify data");
    }
}

The logic of the code above can be broken down into three steps:

  • Opening connection to the database
  • Reading data from Entity table and caching the value of the first Entity
  • Writing new value into first Entity

Can we see all of that from the code above without really digging into it? Not really, since it is classic procedural code. We can also see that all responsibilities are mashed into one function: handling the SQL connection, reading the data, and modifying it are all part of one large chunk of code.
What if the code looked like this:

C#
var entity = sqlDataHandler.ReadEntity();
sqlDataHandler.UpdateDataFieldInEntity(entity, modificationValue);

Console.WriteLine("Data successfully modified!");
Console.ReadLine();

Well, it is more obvious what we are doing now, isn't it? Of course, we moved the whole complicated logic from our function into the SqlDataHandler class, which looks like this:

C#
public class SqlDataHandler : IDisposable
{
    private string _connectionString;
    private SqlConnection _sqlConnection;

    public SqlDataHandler()
    {
        _connectionString = ConfigurationManager.AppSettings["connectionString"];
        _sqlConnection = new SqlConnection(_connectionString);
        _sqlConnection.Open();
    }

    public Entity ReadEntity()
    {
        var entity = new Entity();

        try
        {
            using (var readCommand = new SqlCommand("select * from Entity", _sqlConnection))
            {
                var reader = readCommand.ExecuteReader();
                while (reader.Read())
                {
                    entity.CurrentValue = reader.GetInt32(0);
                    entity.Type = (EntityType)reader.GetInt32(1);
                }
                reader.Close();
            }
        }
        catch(Exception e)
        {
            Console.WriteLine("Failed to read the data!");
        }

        return entity;
    }

    public void UpdateDataFieldInEntity(Entity entity, int newValue)
    {
        var toValue = entity.GetNewValueBasedOnType(newValue);

        try
        {
            // Note: building SQL with String.Format is vulnerable to SQL injection;
            // parameterized queries (SqlParameter) would be the safer choice.
            using (var updateCommand = new SqlCommand(String.Format(
                "update Entity set Data = {0} where Data = {1}",
                toValue, entity.CurrentValue), _sqlConnection))
            {
                updateCommand.ExecuteNonQuery();
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("Failed to modify data");
        }
    }

    public void Dispose()
    {
        _sqlConnection.Close();
    }
}

So what we have done is move the code responsible for manipulating the database into a new class, leaving just the code that drives the workflow in the original function. We separated responsibilities and thereby made our code more readable and maintainable. We also added more flexibility to our code.
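The Entity class itself is not shown above. A minimal sketch that is consistent with how SqlDataHandler uses it might look like this (the EntityType values and the logic of GetNewValueBasedOnType are my assumptions, not part of the original code):

C#
// Hypothetical supporting types, inferred from how SqlDataHandler uses them.
public enum EntityType
{
    Plain = 0,
    Doubled = 1
}

public class Entity
{
    public int CurrentValue { get; set; }
    public EntityType Type { get; set; }

    // Assumed rule: how the new value is derived depends on the entity's type.
    public int GetNewValueBasedOnType(int newValue)
    {
        return Type == EntityType.Doubled ? newValue * 2 : newValue;
    }
}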

OK, so we can now better understand the principle that each class should cover one piece of functionality. But if you think about it, there is a certain similarity between SRP and the Interface Segregation Principle (ISP). ISP states that no client should be forced to depend on methods it does not use. In practice, this means that a class should not implement interfaces with methods it doesn't need, which leads us to break "fat" interfaces into smaller ones, grouping related methods under one interface. Doesn't that sound quite close to the definition of SRP? One might say that ISP is SRP applied to abstractions, i.e., interfaces – the sketch below shows what that could look like for the SqlDataHandler class.
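As an illustration (not part of the original article), SqlDataHandler could be hidden behind two small, role-based interfaces, so that a client which only reads entities never depends on the update method. The interface and client names here are hypothetical:

C#
// Hypothetical role interfaces carved out of SqlDataHandler.
public interface IEntityReader
{
    Entity ReadEntity();
}

public interface IEntityWriter
{
    void UpdateDataFieldInEntity(Entity entity, int newValue);
}

// The existing class can implement both interfaces without changing its body:
// public class SqlDataHandler : IEntityReader, IEntityWriter, IDisposable { ... }

// A read-only client then depends only on the method it actually uses.
public class EntityReportPrinter
{
    private readonly IEntityReader _reader;

    public EntityReportPrinter(IEntityReader reader)
    {
        _reader = reader;
    }

    public void Print()
    {
        Console.WriteLine("Current value: {0}", _reader.ReadEntity().CurrentValue);
    }
}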

But what if we go even further? Let's define an object as data with behavior. If we keep applying SRP to all our behaviors and separate them as much as we can, eventually we will create objects with a single function. Can we then say that such an object is behavior with data? Did we just re-invent closures?

Functional Programming

Closures are, in a way, the foundation of functional programming. A closure is basically a function that carries an environment of its own: it can be executed at a later time, yet it can still access parts of the environment in which it was created. To demonstrate this, take a look at this piece of C# code:

C#
using System;

namespace ClosureExample
{
    class Program
    {
        static void Main(string[] args)
        {
            Action counterIncrementAction = CounterIncrementAction();
            counterIncrementAction();
            counterIncrementAction();
            Console.ReadLine();
        }

        static Action CounterIncrementAction()
        {
            int counter = 0;
            return delegate
            {
                counter++;
                Console.WriteLine("Counter value is {0}", counter);
            };
        }
    }
}

The output of the code is:

Counter value is 1 
Counter value is 2

As we can see, counterIncrementAction can still access the local variable counter even after CounterIncrementAction has returned.

Now, one might say that the body of that CounterIncrementAction function looks awfully like the definition of an object. We could ask ourselves what the real difference between objects and closures is. To answer this, I am going to share a hacker koan by Anton van Straaten that I have found in almost every article and book about functional programming:

The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing – is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil – objects are merely a poor man’s closures.”

Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire “Lambda: The Ultimate…” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.

On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.

The meaning of this anecdote is that closures and objects are used for the same thing – to encapsulate data and behavior. They are just different paradigms expressing the same idea.
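To make the parallel concrete, here is a small comparison of my own (not from the original article): a class-based counter that does exactly what the closure above does, plus the same closure written with a C# lambda.

C#
// Object version: the data (the counter) lives in a field, the behavior in a method.
public class Counter
{
    private int _counter = 0;

    public void Increment()
    {
        _counter++;
        Console.WriteLine("Counter value is {0}", _counter);
    }
}

// Closure version: the same data is captured by a lambda instead of stored in a field.
static Action CounterIncrementLambda()
{
    int counter = 0;
    return () =>
    {
        counter++;
        Console.WriteLine("Counter value is {0}", counter);
    };
}

Invoked twice, both versions print "Counter value is 1" and "Counter value is 2".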

What I tried to show here is how one fairly intuitive concept like SRP took us down a new path of making software. Closures are handy and popular, especially in web development (JavaScript is largely built around closures, and C# implements them with lambdas and anonymous methods).
But let's take a different approach once again. So far, we have applied this at the micro level, so to speak. What if, instead of objects, we go one level up and apply SRP at the macro level? What if we apply SRP to services? What if we slice one big service up into a lot of little services, each of which does one thing? Well, we get one of the biggest current trends – microservices.

Microservices

If you have been to any talk or workshop about microservices, there is a good chance that you saw a picture that looks something like this:

[Image: one big monolithic service split into several small, independently deployable services]

The idea behind it is that instead of putting all business logic into one big service – a monolith – we split that logic into separate, smaller services. These smaller services are autonomous entities that each focus on one part of the business functionality. They take pretty much the same approach that we encountered with objects: they gather related functionality under one roof and, by doing that, increase the flexibility of our solution. We are now able to deploy these pieces of business logic separately, or build them with different technologies, applying the best technology to each part of the problem. Systems like these are also more resilient, since it is easier to apply the bulkhead pattern to them.
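As a rough sketch (my own illustration, not from the article), each such service can be a tiny, separately deployable program that owns exactly one responsibility. Here, a hypothetical "orders" service exposes only order lookups, using ASP.NET Core minimal APIs; an "inventory" service would be a second, equally small program with its own endpoint, database, and deployment.

C#
// Program.cs of a hypothetical stand-alone "orders" microservice
// (ASP.NET Core minimal APIs). Its single responsibility: answering
// questions about orders. Inventory, billing, etc. live in their own
// services with their own codebases and deployments.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/orders/{id}", (int id) =>
    Results.Ok(new { Id = id, Status = "Shipped" }));

app.Run();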

So we can see that, by applying SRP at a different level of abstraction, we once again ended up with something completely new – a fresh approach to solving our problems.

Conclusion

At moments, I might sound like that alien meme guy, trying to claim that every breakthrough in computer science was achieved using SRP.


That certainly is not the case. But there is something powerful about the concept of separating responsibilities and making one thing do one thing, and about its influence on different areas of our work. Look around, and you will probably find this concept in other places that I haven't covered. Actually, I am sure you will.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)