
Efficient Software Development

30 Nov 2008 · Public Domain · 14 min read
Describes the theory of how software development can be simplified, even for mission-critical applications

What is Efficient Software Development?

Today, development tools proliferate, many of them based on “visual design”. The most typical example is Visual Studio, the tool Microsoft sells to tie developers to its products and its development platform.

It has also become common to require huge teams of developers, and to build applications with countless tiers in pursuit of a supposed “independence” between different aspects (presentation, business logic and backend), typically known as “n-tier” applications.

And, on top of all that, it is widely accepted that software is complex by nature, meaning that even the simplest piece may require more than a trivial line of code.

All of the above has given rise to a number of problems, turning Information Technology into the necessary evil of most (if not all) organizations worldwide.

These issues are further spiced up by the constant bombardment from software and systems-integration companies selling organizations the idea that “software is so easy… if you do it with us”.

Another factor is that users’ requirements grow and become more sophisticated by the day, because current generations (Gen X and Gen Y) are increasingly tech savvy and demand that level of sophistication.

All of the factors mentioned above have caused the majority of software projects to fail, no matter how expensive they are, how many resources are devoted to them, or how many skilled people are involved.

Yet, surprisingly enough, there are many popular applications, used by thousands (sometimes millions) of users, that solve reasonably sophisticated needs, are quite stable, and keep evolving over time without that ever getting in the way of their broad adoption.

One of the secrets behind that type of application is simply that they are cleverly developed: they encapsulate most of their complexity in smart code components, and avoid unnecessary redundancy as well as unnecessary layers of complexity that do nothing but further complicate matters.

So, what is Efficient Software Development? It is:

ESD is a paradigm for constructing software in the cleverest way possible, taking advantage of reusability, dynamic generation/management of information and functions, and the avoidance of any genuinely unnecessary layer of complexity, thus allowing simpler and faster maintenance and evolution, while still providing a high level of sophistication precisely because of its simplicity and cleverness.

Why Efficient Software Development?

Brooks’s Law

In 1975, Fred Brooks stated a law that still holds true in the current century:

“Adding manpower to a late software project makes it later”.

This simply means that a project which is complex by nature is harder for newcomers to catch up with, which makes spreading an understanding of it among those new members even more difficult.

That is still so true today that Microsoft, among other companies, provides some of the best examples of the problem.

Software Complexity

Grady Booch (one of the co-authors of the Unified Modeling Language), in his book “Object Oriented Design”, explains in the very first chapter why the world, with all its events and properties, is complex by nature, and consequently why software has to be complex, since it is a representation of reality. However, a point Booch neglects to make is that such complexity can be abstracted away so that end users never have to deal with it. In other words, Booch refers to the human capability of abstraction only to explain it as a brain process modeled by the Object Oriented programming paradigm, but he does not relate it back to that first chapter on complexity.

In short: although reality is complex by nature, and software is a model of that reality, software can be simplified by building a foundation framework from which new models can be created, one that hides most of the complexity and allows users of the higher layers to focus only on the semantics of the problem they intend to solve.

The Software Construction Cycle

In the past, a minimal software cycle included:

  • Analysis
  • Design
  • Coding
  • Testing
  • Implementing

Nowadays this cycle has become more and more complex, with additional stages required by ever more demanding sophistication. Steps such as use-case writing (to refine requirements analysis), quality assurance, defect management, unit testing, integration testing and several others have been added to (or expanded from) the major stages listed above.

But do we really need all of those? The question recalls a quote attributed to a Microsoft executive in the company’s early days: “We are the company with the largest technical support team.” A week later, Lotus, then a major competitor of Microsoft, replied: “At Lotus our technical team is small… thanks to the fact that our software is better and does not fail like that of companies with large technical support teams”, in clear reference to the earlier Microsoft quote.

Microsoft built its well-known .NET Framework as a collection of patched components on top of its old technologies, rather than redoing them from scratch. Like Microsoft, many other companies have monsters that are better left sleeping if we don’t want to deal with their fury.

Despite the glossy advertising for Microsoft Visual Studio in most developer magazines, the reality is that even the simplest application requires a ridiculous number of steps, compared with simply opening a text editor (with nice keyword coloring, of course) and typing “PRINT ‘HELLO WORLD’”.

How Does It Work?

Framework

The first step toward efficient software development is creating a sufficiently robust, yet simply (cleverly) programmed, framework.

A framework that makes it possible to dynamically generate any live component or object the user will interact with. Modern technologies such as XML, together with the dramatic drop in the cost of memory and hard disk space, allow a lot of properties to be attached to objects that are created on the fly.
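
As a minimal sketch of the idea (the descriptor format below is invented purely for illustration and is not part of any particular framework), a single JavaScript function can turn a plain data description into a live, interactive DOM element:

    // Illustrative factory: turns a plain descriptor into a live DOM element.
    // The descriptor shape (tag, text, attrs, children) is hypothetical.
    function createComponent(descriptor) {
      var element = document.createElement(descriptor.tag || "div");
      if (descriptor.text) {
        element.appendChild(document.createTextNode(descriptor.text));
      }
      var attrs = descriptor.attrs || {};
      for (var name in attrs) {
        if (attrs.hasOwnProperty(name)) {
          element.setAttribute(name, attrs[name]);
        }
      }
      var children = descriptor.children || [];
      for (var i = 0; i < children.length; i++) {
        element.appendChild(createComponent(children[i]));
      }
      return element;
    }

    // Usage: the whole "form" exists only as data until it is rendered.
    document.body.appendChild(createComponent({
      tag: "fieldset",
      children: [
        { tag: "legend", text: "Customer" },
        { tag: "input", attrs: { type: "text", name: "customerName" } }
      ]
    }));

The point is that the descriptor could just as easily come from an XML or JSON file as from code, so new screens become new data rather than new programs.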

Dynamic Typing

Purists of the “abstract data type” concept strongly reject the generation of dynamic objects, arguing that it completely defeats the purpose of defining a class. However, pragmatism has to be taken into consideration, as does the ability to exploit today’s memory and disk space when generating objects dynamically. On top of that, robust, intelligent management of data conversions is key whenever a set of properties has to be assembled and exposed to the user; the risk of type mismatches in such conversions is one of the major arguments raised against dynamic typing and languages like JavaScript.

Yes, strong typing is needed in languages that require tons of code, as the simplest type mismatch can have catastrophic results. However, when a software component is created in a smart way, with optimized code pieces and minimal redundancy, dynamic typing lets programmers work faster and more easily; and when the code pieces are small, any “mismatches” can be caught and fixed easily.

Ultimately, even statically typed, early-binding languages require mechanisms for combining information (such as conversion or cast functions), which is exactly what dynamically typed languages like JavaScript end up doing. At the end of the day, every data type is simply a set of bytes (or, better said, a set of data items) that has a special behavior and together means something to the user. Moreover, operations between data types are needed very frequently (concatenating strings with numbers and datetime fields, adding or subtracting hours, days, weeks or months between dates, and so on).
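
For instance, here is a trivial sketch of the kind of conversions a dynamic language performs; everything below is standard JavaScript except the helper at the end, which is a hypothetical example of the sort of routine a framework would centralize:

    // Implicit conversion: the number is coerced to a string during concatenation.
    var invoiceLabel = "Invoice #" + 1024;        // "Invoice #1024"

    // Explicit conversion when the implicit one is not what we want.
    var total = Number("2.5") + Number("0.5");    // 3, not "2.50.5"

    // Date arithmetic: add 7 days to a date.
    var due = new Date(2008, 10, 30);             // 30 Nov 2008 (months are 0-based)
    due.setDate(due.getDate() + 7);               // rolls over to 7 Dec 2008

    // A small, defensive conversion helper of the kind a framework might provide.
    function toNumber(value, fallback) {
      var n = Number(value);
      return isNaN(n) ? fallback : n;
    }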

The Web Browser and HTML as the Key

Most web browsers today are huge applications capable of handling most (if not all) of a user’s data needs. They handle video, music, pictures, text (in different colors and sizes), geometric figures (yes, JavaScript can draw ovals and polygons without very complex coding, believe it or not), and so on.

The secret behind this paradigm is that the web browser does not have a hard-coded component for each page; rather, it has functions that dynamically recognize HTML tags and apply an existing behavior to them accordingly. Of course, browsers make use of other applications to render information properly to the user, but those applications are also quite mature and self-contained (black boxes), capable of responding to the browser in appropriate ways.
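
The same dispatch-by-type idea can be imitated in application code. The sketch below is purely illustrative (the kind names and the target element are made up, and real code would escape values before injecting them): it picks markup based on what the data says it is, much as a browser picks behavior based on the tag it finds.

    // Hypothetical renderers keyed by the kind of data, mimicking how a browser
    // maps tags to behaviors. Adding a new kind benefits every page that uses it.
    var renderers = {
      image: function (item) { return '<img src="' + item.src + '" alt="' + item.alt + '">'; },
      text:  function (item) { return '<p>' + item.value + '</p>'; },
      list:  function (item) {
        return '<ul><li>' + item.values.join('</li><li>') + '</li></ul>';
      }
    };

    function renderItem(item) {
      var render = renderers[item.kind];
      return render ? render(item) : '<!-- unknown kind: ' + item.kind + ' -->';
    }

    // Assumes an element with id="content" exists on the page.
    document.getElementById("content").innerHTML =
      [{ kind: "text", value: "Hello" },
       { kind: "list", values: ["video", "music", "pictures"] }].map(renderItem).join("");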

At the time this article was published (November 2008), the Internet Explorer folder held less than 3 MB of DLLs and applications (granted, it can use resources of the host operating system itself, but that fact only confirms the need for a solid framework). Mozilla’s folder is slightly above 30 MB (mainly because it is self-contained and most of its functionality is operating-system independent), and other browsers are similar. So why does .NET require so many objects, and why does its development framework need over 300 MB? The answer is very simple: it is unnecessarily overcomplicated because it is badly designed. Curiously, everything behind the web capabilities of .NET does nothing more than render plain HTML and JavaScript code.

So why is .NET required at all, when we could instead create smart JavaScript objects that dynamically generate any piece of HTML a web browser can easily recognize and handle accordingly?

That is why web browsers can today act as an operating system of their own (and why several products emulating proprietary operating systems, so-called “webOS” products, have been released). The philosophy behind web browsers is, again, nothing more than dynamically recognizing HTML tags and behavior code (JavaScript or Flash).

Ultimately, a web server is just an application that produces HTML code, which is sent back to the client (Internet Explorer, Mozilla Firefox) for its local DLLs to interpret. That is how it started (with CGI, the Common Gateway Interface), how it continued (with .EXE files that generated the HTML code), and how it works today (Java, JavaScript, Flash, ASP, ASP.NET, PHP and other platforms that all end up generating HTML code).
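
As a present-day sketch of that idea (written with Node.js, which did not exist when this article was published), the server’s entire job can be reduced to returning HTML for the browser to interpret:

    // Minimal HTTP server: its only job is to hand HTML back to the browser,
    // exactly as CGI scripts and .EXE generators did before it.
    var http = require("http");

    http.createServer(function (request, response) {
      response.writeHead(200, { "Content-Type": "text/html" });
      response.end("<html><body><h1>Hello from the server</h1></body></html>");
    }).listen(8080);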

That whole philosophy behind a web browser can be applied to a local application just as well (it is mentioned here not because of the ubiquity of the Web, but to stress the paradigm of browsers interpreting HTML code: they are simply applications that dynamically render objects).

It is no surprise that HTML, XML, CSS and JavaScript are not Microsoft inventions.

Reusability

Another important concept of Object Oriented Programming that has been overlooked in the latest development tools is reusability, which is supposed to eliminate the need to program the same set of functions or properties multiple times. In other words, create a “black box” that lets programmers forget about the deeper details and focus on new features.

Unfortunately, that is a fallacy: in reality, tools like Visual Studio and other “powerful RADs” don’t take proper advantage of reusability, and instead insert tons of repeated, redundant code into the various pieces that “programmers” drop into their software projects. Some level of reusability exists, but the concept as originally conceived in OOP is not really taking place.

Of course, much of that problem has to do with programmers’ inability (hence the quotation marks in the previous paragraph) to think cleverly; they are happy dragging and dropping objects from a toolbar and then spending endless hours programming the cosmetics of the application. In short, all those visual tools have become a mechanism for reproducing crap at higher rates than in the past, further complicating systems and making software projects more expensive, requiring larger development teams, larger project management teams, longer testing phases and bigger defect-tracking systems. Not to mention the need for machines with ever more memory and speed (which reminds me of a funny comment from a Finance Director at one of the companies I worked for. When a colleague of mine asked him to authorize the purchase of more memory, he asked, “Is that going to improve the inventory system?” When my colleague answered “no”, he replied, “Then why is more memory needed? A byte is a byte, and it holds the same information it did years ago. Have bytes grown in the last decades? Do you water them or what?”).

The Simplification Advantage

By creating solid, smart objects that dynamically interpret predefined protocols on top of a robust framework, programmers (and power users) can simply specify, in a nicely designed front end, the business parameters of the solution they need an application for, without worrying about the details of the front-end programming, the business logic or the back end.

The business logic may require special treatment, additional coding, and a deeper understanding of the technicalities behind it (i.e., a developer). But programmers should stop creating multiple forms or reports that are in essence the same, differing only in content, label text and user interaction (a calendar versus a check box versus a radio button, etc.); their main difference is their business (semantic) meaning, not how a computer treats them.

Two major candidates for a good framework are, without doubt, a CRUD system that can handle any CRUD entity (Create, Read, Update, Deactivate), and a reporting system (not of the “Crystal” or “Reporting Services” kind, as the programming behind those is still unnecessarily complex for the most basic and common reporting needs).

A framework that can easily handle common tasks such as “first”, “last”, “prior”, “next”, “edit”, “save”, “delete” (or rather “deactivate”) and “drill down”, or processes such as “parameter selection”, “report gathering”, “report layout” and “report totaling/subtotaling”, will save programmers thousands of hours: they simply update a simple interface where the meaning of the data is defined, rather than the details of how the computer presents and stores it.
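
To make that concrete, here is a deliberately minimal, illustrative generic record navigator in JavaScript; the method names mirror the tasks above, and the records and field metadata are hypothetical:

    // Illustrative only: a generic navigator that works for any entity,
    // driven by data and field metadata instead of per-form code.
    function RecordNavigator(records, fields) {
      this.records = records;   // array of plain data objects
      this.fields = fields;     // e.g. [{ name: "customer", label: "Customer" }, ...]
      this.index = 0;
    }

    RecordNavigator.prototype.current = function () { return this.records[this.index]; };
    RecordNavigator.prototype.first = function () { this.index = 0; return this.current(); };
    RecordNavigator.prototype.last = function () { this.index = this.records.length - 1; return this.current(); };
    RecordNavigator.prototype.next = function () {
      if (this.index < this.records.length - 1) { this.index++; }
      return this.current();
    };
    RecordNavigator.prototype.prior = function () {
      if (this.index > 0) { this.index--; }
      return this.current();
    };

    // "Deactivate" instead of a physical delete, as suggested above.
    RecordNavigator.prototype.deactivate = function () { this.current().active = false; };

A framework’s generic form and grid components could then be wired to these same methods for every entity, instead of each form carrying its own navigation code.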

A perfect example of this approach is Microsoft’s M language (which they are only just starting to work on), as well as other recently published semantic definition languages that simplify the declaration of objects and their features. One I particularly like, because it reflects the whole idea of a “semantic” (end-user language) definition of the business, is JSON (JavaScript Object Notation).
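
As a hedged example of what such a semantic definition might look like (the entity, field names and attributes below are invented for illustration, not taken from any real product), a JSON description captures the business meaning of the data rather than how the computer stores or presents it:

    {
      "entity": "Customer",
      "fields": [
        { "name": "name",   "label": "Customer name",  "type": "text",     "required": true },
        { "name": "since",  "label": "Customer since", "type": "date" },
        { "name": "active", "label": "Active",         "type": "checkbox", "default": true }
      ],
      "list": ["name", "since", "active"],
      "report": { "groupBy": "since" }
    }

A generic framework like the one sketched above could read this definition and produce the CRUD screens and basic reports for the entity without any form-specific code.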

Ultimately, even the most sophisticated users care only that the systems they use render the data they are looking for, which they then manipulate at different levels of complexity (some users are quite handy with Excel’s pivot-table features, whereas others simply want a tabular list of data items and have no idea what a pivot table is). On several occasions as a developer, I have had problems reported to me that came down to basic syntax errors: a stray comma, apostrophe, quote or the like. When I try, unsuccessfully, to explain to the user that it was a simple mistake and easy to fix, the typical answer is: “I don’t care and I don’t understand what you are trying to explain. The only thing I care about is that this thing has to work.” Well, Efficient Software Development is about exactly that.

  • It’s true reusability of objects.
  • It’s true application of the “black box” concept.
  • It’s truly taking advantage of current lowering of cost of memory and hard disks.
  • It’s true elegant/clever programming techniques.

As the concept spreads and there is less visual programming and more clever programming (with the “visual” part rendered automatically by the framework), software projects will be completed in shorter times and with smaller development teams, hence with fewer defects, less need for “defect tracking”, and faster, less complicated and less expensive change management. When all of that happens, software projects will get done sooner and at significantly lower cost.

There are multiple examples of this type of approach succeeding, even if the term had not previously been formally coined by anyone.

Don’t let the big software companies scare you into believing software is more complex than it is, just because their programmers are not clever.

License

This article, along with any associated source code and files, is licensed under a Public Domain dedication.