
A Look At What's Wrong With Objects

6 Aug 2003
A look at what is wrong with OOD/OOP, based on CPian responses to the question "What is wrong with objects?"

Introduction

When I asked "what's wrong with objects?" I received numerous responses, all of which were excellent and spanned the spectrum of software design from hard core implementation issues to philosophical issues. What surprised me the most was that the original 17 responses almost all pointed out different problems. It seemed to me that here was an excellent collection of knowledge, ripe to be compiled into a single repository. So that's what this article is all about. A "thank you" to everyone who answered my question. I hope you find it entertaining as well as informative. Anything wrong or just plain stupid in this article is entirely my fault, not yours. If you disagree with something, let me know. Heck, there are a few things I disagree with myself, but if you don't take a position, how will you learn?

People Problems

Several CPians pointed out that object oriented programming (OOP) fails because of inadequate people skills.

Training

Training is a big factor. The learning curve might be expressed by the following graph. This is a fairly generic representation of a learning curve--for instance, it doesn't go into specialization, such as SQL, client-server, web applications, etc. I've put some effort into working out what I think is a reasonable way of learning programming. For example, if I were to put together a curriculum on programming, it would probably look like this.

The above graph is only half the picture. Here's the other half (print them out and paste them together!):

The point is that once you are competent at the fundamentals, you can begin looking at design issues, reworking your code, and other techniques that strengthen your abilities as a programmer/designer. This learning curve is quite difficult because learning OOP is intimately involved with learning frameworks such as MFC, tools such as debuggers, and language enhancements (for lack of a better word) such as STL. In fact, learning OOP requires learning a lot of non-OOP concepts first!

For the sake of completeness, I'm going to try some one-liners describing each of these stages and expanding upon the Guru's Triangle.

  • Programming Fundamentals -- Pure architecture and the mechanics of Von Neumann machines
  • Programming Tools -- Compilers, debuggers, source control, defect trackers, profilers, etc.
  • Language Fundamentals -- Programming fundamentals applied to a specific language syntax
  • Procedural Languages -- Writing good procedural code
  • APIs and SDKs -- Learning the foundations of the platform
  • Encapsulation -- Reviewing procedural code from the perspective of encapsulating form and function
  • Modifiers -- The art of hiding internal implementations and exposing public interfaces
  • Polymorphism -- Taking advantage of strongly typed languages
  • Inheritance (Specialization) -- How to specialize the functions of a class
  • Inheritance (Abstraction) -- Is thinking abstractly something that can be taught?
  • Inheritance (Single vs. Multiple) -- Single vs. multiple inheritance issues
  • Interfaces -- An interface is an order to the programmer to implement functionality. It is not inheritance
  • Application Of Objects -- Thinking abstractly: is it a baseball object or a ball with the property values of a baseball?
  • Application Of Objects -- "is a kind of" vs. "has a" issues
  • Templates/Generics/Meta-data -- Yet another layer of abstraction
  • STL -- A can of worms: very useful, but easily abused and misapplied
  • Frameworks -- MFC, .NET (there are no other planets with intelligent life on them)
  • Refactoring, Level I -- Intra-method refactoring: fixing expressions, loops, conditionals, etc.
  • Refactoring, Level II -- Intra-object refactoring: fixing internal object implementation problems
  • Design Patterns, Level I -- Creational patterns: these are the easiest to get one's arms around
  • Refactoring, Level III -- Inter-object refactoring: fixing hierarchies
  • Design Patterns, Level II -- Structural patterns: decoupling object dependencies
  • Refactoring, Level IV -- Design pattern refactoring: hope you never have to do this
  • Design Patterns, Level III -- Behavioral patterns: learning to identify behaviors in your code and decoupling the generic behavior from the specific implementation

Ad Hoc Methodologies

The posts I've made in the past regarding methods such as eXtreme Programming have met with a variety of responses, mostly negative. I find it disturbing (yet unsurprising) that every generation seems to have to repeat the mistakes of the past. I'm not particularly convinced that we're moving in a forward direction when it comes to things like improving quality and customer relationships, not to mention reducing bugs, meeting deadlines, and accurately estimating projects. It seems more like we're moving sideways. TQM, ISO 9001, CMM, Agile methods, and a variety of others all seem to address the same problem from a different angle. Every five years or so, there's a new fad in "how to write programs successfully".

But what's really disturbing is that, from my dialogs with other programmers, it seems very few of them actually know anything about any of these methodologies. It appears that there's quite a bit of ad hoc software development going on out there. I think it's time to stop moving sideways and move forward instead. Books on refactoring and design patterns are a step in the right direction, in my opinion, because they provide concrete (or mostly concrete) tools that will actually make programmers better. But the concepts are also hard to learn, which is why I'm so keen on things like automation and developing a framework that enforces good programming practices. Easier said than done, and there are trade-offs too--the programmer loses a degree of creativity and control when the framework forces him/her to interact with other code in a very specific way. And worse, who's to say that what I'm doing is really a forward solution? It may be sideways, or even, heaven forbid, a step backwards!

Formal Education

What happened to formal education? People get degrees nowadays to get jobs and better pay rather than to actually learn something. Learning seems to have taken a second seat in the orchestra of life. But there's some justification for this also. My experiences with institutes of higher learning here in the United States (and in particular, California) have been terrible: antiquated machinery, obsolete programming courses using out-of-date operating systems and tools, and professors who grade on "following directions" rather than on solutions that try to be efficient and high performance. And the biggest complaint of all--colleges and universities haven't a clue what real-life programming is like. Sadly, formal education seems to be a necessary evil at this point, at least in the computer science field. It used to be that universities were the leaders in innovation and research. Now they seem more a haven for the socially inept, and "education" leaves one technically incompetent.

Concrete Technology Problems

Technology Confusion

Many people pointed out that there's a lot of confusion with regard to using OOP. Object oriented design (OOD) only provides guidelines. There's a lot about OOD that is fuzzy and learned only through painful experience.

More Than A Container?

During the course of computer architecture development, data and instructions have become physically separated (in address space), providing the means to create multithreaded, multitasking systems that are protected from stepping on each other. Objects serve to unify the divergence of data and instruction. Yes, they're still physically separated and confined to the application and its data, but the point is that an object binds together data and operations into a logical structure. So, is that all an object is? Yes, with regard to an object by itself, that's it. It's when objects are used in relation to other objects that things get more interesting, and more complicated.
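To make the point concrete, here's a minimal sketch (the class and its members are my own invention) of an object doing nothing more than binding data to the operations that act on it:

class Account
{
private:
    double balance;            // the data...

public:
    Account() : balance(0.0) {}

    void Deposit(double amount) {balance+=amount;}   // ...and the operations
    double Balance(void) {return balance;}           // that act on it
};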

Now, the diagram above is a bit silly, but I think it's also illustrative of the evolution in both hardware and software architectures. The last column, "Aspect Computing", is something I have totally invented, but it seems to me that as distributed services develop and components/component communication is standardized, a significant amount of "programming" will simply be gluing together disparate components, handling events, and coordinating the flow of data between them.

To Inherit Or To Encapsulate, That Is The Question

The first thing I learned when designing an object structure was to ask myself: is object A "a kind of" object B, or does it "have" an object B? Unfortunately, this is just the tip of the iceberg. Many relationships are not clearly defined and are sometimes easily seen as both. For example, in GUI design, if I want a customized control, should the custom control be "a kind of" the generic control, or should it "have a" generic control? There are pros and cons to both.

If the object is derived:

then the user (another programmer) most likely has complete access to the base class methods (unless the base class is declared as private in the derived class), and the custom control needs to ensure that it handles all the functionality of the base class.

Listing 1: Inheritance
class BaseClass
{
public:
    virtual ~BaseClass() {}

    void Foo(void) {}
};

class DerivedClass : public BaseClass
{
public:
    void Bar(void) {}
};

// usage:

DerivedClass* dc=new DerivedClass();
dc->Foo();
dc->Bar();

If the control is contained:

then access to the generic control can be strictly controlled (no pun intended).

Listing 2: Encapsulation
class InnerClass
{
public:
    // note the missing virtual destructor
    void Foo(void) {}
};

class OuterClass
{
private:
    // private restricts access by derived classes
    InnerClass ic;

public:
    void Bar(void) {}
};

To make matters worse, depending on how the base control was implemented, the programmer may not have a choice. Non-virtual destructors, methods that aren't declared as virtual, and private members often make it impossible to specialize an object through derivation.
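For example, here's a minimal sketch (hypothetical classes) of the most notorious of these traps--deriving from a class whose destructor isn't virtual:

class Resource
{
public:
    ~Resource() {}                 // note: NOT virtual

    void Acquire(void) {}
};

class SpecializedResource : public Resource
{
public:
    ~SpecializedResource() {}      // imagine cleanup happening here
};

// usage:

Resource* r=new SpecializedResource();
delete r;                          // undefined behavior: only ~Resource runs,
                                   // the derived cleanup never happens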

Another concern is the trade-off between growing a rigid inheritance structure vs. growing a highly dependent associative structure. In both cases, there is a strong coupling between objects. However, with an inheritance structure, the base classes can easily break the application by changing implementation.

This problem is slightly mitigated with encapsulation because encapsulation allows you to implement wrappers around other objects, whereas inheritance does not.

Wrappers are an important way of protecting yourself from changes made in other objects. Depending on the implementation of the base class (for example, .NET's Form class), it may be necessary to implement a derived class to gain access to protected methods and then encapsulate the specialized class.
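As a minimal sketch (all names here are hypothetical, not the actual Form API), the derive-then-encapsulate approach looks something like this:

// a framework class we don't control
class FrameworkControl
{
public:
    virtual ~FrameworkControl() {}

protected:
    void Reposition(void) {}       // useful, but protected
};

// step 1: derive, purely to gain access to the protected method
class ControlExposer : public FrameworkControl
{
public:
    void Reposition(void) {FrameworkControl::Reposition();}
};

// step 2: encapsulate the specialized class behind a wrapper
class MyControl
{
private:
    ControlExposer ce;

public:
    void MoveTo(void) {ce.Reposition();}
};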

Inheritance also locks you into a structure that may become inadequate over time. It may become desirable to specialize the functions of two or more objects (multiple inheritance) or to change the behavior of the object by deriving it from something else. Especially with languages that do not support multiple inheritance at all (like C#), or that handle duplicate methods implemented by more than one base class awkwardly (as C++ does), encapsulation may be a better alternative.
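Here's a minimal sketch (hypothetical classes) of simulating multiple inheritance through encapsulation and forwarding:

class Printer
{
public:
    void Print(void) {}
};

class Scanner
{
public:
    void Scan(void) {}
};

// instead of "class Copier : public Printer, public Scanner",
// contain both and forward only what should be exposed
class Copier
{
private:
    Printer p;
    Scanner s;

public:
    void Print(void) {p.Print();}
    void Scan(void) {s.Scan();}
};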

As you can see, the decision to inherit vs. encapsulate is more than just a simple relationship decision. It involves questions of access, support, design, extensibility, and flexibility.

The Gene Pool -- Single Inheritance vs. Multiple Inheritance

Consider three different kinds of balls:

  • golf ball
  • basketball
  • tennis ball

which are neatly represented by the following classes, all derived from Ball:

Listing 3: What appears to be a reasonable class structure
class Ball
{
public:
    Ball(void) {}
    virtual ~Ball() {}

    virtual void WhoAmI(void) {printf("Ball\r\n");}
    virtual void Bounce(void) {printf("Ball\r\n");}

    double diameter;
};

class GolfBall : public Ball {};
class BasketBall : public Ball {};
class TennisBall : public Ball {};

Each of the specialized versions implements some additional properties specific to it. For example:

Listing 4: The GolfBall derivation includes some specialization
class GolfBall : public Ball
{
public:
    GolfBall(void) : Ball() {}
    virtual ~GolfBall() {}

    virtual void WhoAmI(void) {printf("GolfBall\r\n");}
    virtual void Bounce(void) {printf("GolfBall\r\n");}

    DimplePattern p;
    Compression c;
};

(from http://news.bbc.co.uk/sportacademy/bsp/hi/golf/equipment/other_gear/html/ball.stm and http://www.golftoday.co.uk/clubhouse/library/qandaballs.html)

Listing 5: The BasketBall derivation includes some specialization
class BasketBall : public Ball
{
public:
    BasketBall(void) : Ball() {}
    virtual ~BasketBall() {}

    virtual void WhoAmI(void) {printf("BasketBall\r\n");}
    virtual void Bounce(void) {printf("BasketBall\r\n");}

    Category cat;
    ChannelSize csize;
    Color color;
};

(from https://www.sportime.com/products/smartbasketballs.jsp)

Listing 6: The TennisBall derivation includes some specialization
class TennisBall : public Ball
{
public:
    TennisBall(void) : Ball() {}
    virtual ~TennisBall() {}

    virtual void WhoAmI(void) {printf("TennisBall\r\n");}
    virtual void Bounce(void) {printf("TennisBall\r\n");}

    Speed speed;
    Felt feltType;
    Bounce bounce;
};

(from http://tennis.about.com/library/blfaq22.htm)

Now let's say that we want a new ball, the GobaskisBall, that combines the compression of a golf ball, the color of a basketball, and the felt type of a tennis ball. From a novice point of view, using the idea that we're creating a specialized ball with the characteristics of three other balls, the GobaskisBall might be constructed like this:

Listing 7: Deriving a new ball with features of all of the other three
class GobaskisBall : public GolfBall, public BasketBall, public TennisBall
{
public:
    GobaskisBall(void) : GolfBall(), BasketBall(), TennisBall() {}
    virtual ~GobaskisBall() {}

    virtual void WhoAmI(void) {printf("GobaskisBall\r\n");}
    virtual void Bounce(void) {printf("GobaskisBall\r\n");}
};

which is the wrong application of inheritance because it creates the "diamond of death":

(I'll add some explanation of this shortly. For now, let's just say I did have something written, but I made a really really stupid mistake and it was best to erase it before the rest of you all noticed!)
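In the meantime, here's a minimal sketch of the ambiguity, assuming the classes above:

// usage:

GobaskisBall gb;

// gb.diameter=3.0;             // does not compile: ambiguous--gb contains
                                // THREE Ball subobjects, one each via
                                // GolfBall, BasketBall, and TennisBall

gb.GolfBall::diameter=3.0;      // compiles, but sets only the GolfBall copy
gb.TennisBall::diameter=2.5;    // a different copy--our "one" ball now has
                                // two different diameters!

// declaring the intermediate classes with virtual inheritance, e.g.
// class GolfBall : public virtual Ball {};
// collapses the three Ball subobjects into a single shared one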

Is It An Object Or A Property?

One of the things that's really confusing when creating an object model is determining what is an object and what is a property. There is contention between how the real world models things and how they are best modeled in an OOP language. Using the example above, consider also that the inheritance model is inefficient and will probably result in a lot of dependent code being recompiled when additional properties are added.

In the above example, to avoid multiple inheritance issues, the programmer will most likely put all the properties of interest into a new class derived from Ball.

Listing 9: Putting the properties in a new class derived from Ball
class GobaskisBall : public Ball
{
    Compression c;
    Color color;
    Felt feltType;
};

But the astute programmer will realize that these properties are pretty generic. Not only can they be put together in a variety of combinations, but the "requirements" for the ball might change as the application is developed. This leads to the realization that having specialized ball classes is probably the wrong way to go. What is needed instead is a single ball class that has a collection of ball properties. A "ball factory" method can then construct whatever specific ball instance is needed, both in terms of the properties that describe the ball and the value of those properties:

Listing 10: Abstracting the concept of the ball using collections instead
class Ball
{
    Collection ballProperties;
};

We have now eliminated the specialization and created something called a "collection". (There's more going on here too, which is described in the section "Death By Abstraction" below).
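The "ball factory" mentioned above might then look something like this minimal sketch (the std::map and the property names are my stand-ins for the generic Collection):

#include <map>
#include <string>

class Ball
{
public:
    // a std::map standing in for the generic "Collection" above
    std::map<std::string, std::string> ballProperties;
};

// a hypothetical factory assembling a Gobaskis ball from properties
Ball MakeGobaskisBall(void)
{
    Ball b;
    b.ballProperties["compression"]="90";       // from the golf ball
    b.ballProperties["color"]="orange";         // from the basketball
    b.ballProperties["feltType"]="extra duty";  // from the tennis ball
    return b;
}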

As I have hopefully illustrated with the examples above, the concept of inheritance doesn't work all the time, and using abstraction and collections raises design-time decisions about how to implement an object model--is the object actually a first class object, or is the concept of the object better expressed as properties? This is not an easy question, and there is no single right answer.

An Interface Is Not The Same As Inheritance

A base class can contain fields and default implementations. An interface contains neither, requiring the implementing class to provide all the implementation. For example, using C#, all the methods and properties of IBall have to be implemented.

Listing 11: Interfaces require implementation
interface IBall
{
    void Bounce();
    double Diameter {get; set;}
}

public class TennisBall : IBall
{
    private double diameter;

    public double Diameter
    {
        get
        {
            return diameter;
        }

        set
        {
            diameter=value;
        }
    }

    public void Bounce() {}
}

As illustrated in this example, making an interface out of the Ball has its problems. For example, a default behavior for Bounce cannot be implemented. Even worse, each class has to implement its own Diameter property and access methods. Microsoft's solution to this is to automate the IDE so that you can press the Tab key and it will generate all the interface stubs. Woohoo. This does nothing for language constraints and/or bad object models. (Yes, I will grudgingly admit that it does force you into some good practices, such as wrapping fields with get/set property access methods.) At this point, we have the worst of both worlds, in my opinion--a language that doesn't support multiple inheritance, and an interface construct that requires lots of duplicated code. So what's the alternative?

First, interface classes should not be viewed as a solution for multiple inheritance. Instead, they should be viewed as an implementation requirement. This may be obvious to the experienced programmer, but it isn't to the junior programmer. In fact, I clearly remember the Javaheads I used to work with saying that interfaces were a replacement for multiple inheritance. This is not true, in my opinion. The phrase "class Foo implements the interface Biz" clearly shows what interfaces are useful for--implementation of functionality. For this reason, interfaces should also be very small--they should only specify the functionality required to implement a very "vertical" requirement (OK, that's Marc's recommendation).

The problem still remains, though, of how to specify default behavior, and how to conveniently specify a group of fields that all similar (as in derived) classes can include in their own specialization. Obviously, this can be done by using a class instead of an interface, but this may not be the desirable approach. Something has to give in a single-inheritance language. Conversely, maybe the question should be: "is this idea of having default behaviors and a collection of default fields such a good idea?" And the answer to this is probably "no, it's actually not a good idea". But the "why not" isn't easily conveyed when someone is just learning about OOP. Read on...

Are Objects Sufficiently Decomposed?

One day, a young fellow was walking home from the university where he was studying for a degree in music. As was his custom, he always walked by a graveyard. On this particular day, he heard some strange music coming from the graveyard. It seemed oddly familiar, but he just couldn't place it. This happened again the next day. On the third day, he brought his professor along to help him identify the music. The professor exclaimed, "Why, that's Beethoven decomposing!"

First off, in the above example, the IBall interface is too complicated--it hasn't been sufficiently decomposed. What should really exist is:

Listing 12: Decomposing IBall
interface IBounceAction
{
    void Bounce();
}

and:

interface IBallMetrics
{
    double Diameter {get; set;}
}

This separates the shape of the ball from an action, such as bouncing. This design change improves the extensibility of the application when, for example, a football is required, which is not a spherical shape.

Secondly, and this is perhaps the harder case to argue, the concept of "diameter" should be abstracted. Why? Because of unforeseeable dependencies on the meaning of this value. For example, it may be calculated from the radius. The circumference or volume of the ball may be calculated from the diameter or the radius. Or, given the volume, the diameter can be calculated. The application becomes more flexible by making the concept of "diameter" more flexible. Hence, diameter shouldn't just be a primitive type; it should perhaps be an object in its own right:

Listing 13: Making "diameter" a first class citizen
public class RoundBallMetrics
{
    private double d;

    public double Diameter {get {return d;} set {d=value;}}
    public double Radius {get {return d/2;} set {d=value*2;}}
    public double Circumference {get {return d*Math.PI;} set {d=value/Math.PI;}}
    public double Volume
    {
        get {return (4.0/3.0)*Math.PI*Math.Pow(d/2, 3);}
        set {d=2*Math.Pow((3.0*value)/(4.0*Math.PI), 1.0/3.0);}
    }
    public double SurfaceArea
    {
        get {return 4.0*Math.PI*(d/2)*(d/2);}
        set {d=2*Math.Sqrt(value/(4.0*Math.PI));}
    }
}

(Note, I haven't tested these equations for accuracy!)

To digress for a very long moment, we now have a nice class that could use an interface (yes, an interface!) so that anyone implementing the metrics of a ball has to implement these functions. But we have to be careful, because some balls aren't round, having a major axis and a minor axis like a football, which means that the diameter, radius, and circumference are different depending on the axis. By using an interface, we can specify that certain functions need to be implemented while leaving that implementation to the needs of the application. Solving only the round ball problem, the interfaces might look like this:

Listing 14: Some useful interfaces for ball metrics
interface I3DMetrics
{
    double Volume {get; set;}
    double SurfaceArea {get; set;}
}

interface I2DMetrics
{
    double Radius {get; set;}
    double Diameter {get; set;}
    double Circumference {get; set;}
}

// ooh, look. Sort-of multiple inheritance!
interface IBallMetrics : I2DMetrics, I3DMetrics
{
    // adds nothing
}

public class RoundBallMetrics : IBallMetrics
{...}

Furthermore, this class can now also implement streaming, integration with the Forms designer, and other useful features (most of which make the class entangled with other objects though--tradeoffs, it's always about tradeoffs).

Back to the point. We now have a nice class for dealing with the diameter of a round ball. The interface IBall now only needs to declare a getter that returns a first class ball metrics citizen. This allows for different ball shapes and sizes:

Listing 15: A possible ball implementation
interface IBall
{
    // a property name is needed for this to compile; "Metrics" is assumed
    IBallMetrics Metrics {get;}
}

public class RoundBall : IBall, IBounceAction
{
    private RoundBallMetrics rbm;

    public IBallMetrics Metrics {get {return rbm;}}
    public void Bounce() {}
}

Now that we've made a ball with an abstraction to handle its metrics, the problem of default data and default implementation with regard to interfaces has resolved itself--there is no such thing! Ever! If you want default properties or implementation, then you have to implement that behavior in a base class and specialize from that class, which leads to all sorts of problems in both single inheritance and multiple inheritance languages. So the rule of thumb is, don't implement default properties and methods.

To conclude this section, I can only say that I hope I have adequately demonstrated the technical challenges involved in managing properties, objects, interfaces, inheritance, decomposition, and collections. Every application is a unique challenge in this regard.

Relationship Counseling

Relationships between different families are just as important as the relationships within your family. Sad but true, getting involved with other families often means that you get ensnared in their politics and are asked to "take sides", being an unbiased observer (or so they think!)

The "Only Child" Syndrome

In object models, we have parents and children, but we don't have brothers and sisters. Because object models are hierarchies, they are an incomplete model of relationships. This gets the programmer in all sorts of trouble when relationships between families must be created. Consider the following hierarchies and their relationships to each other:

Object models do a great job of representing hierarchies, but they don't address the problem of relationships between objects within the syntax of the C# or C++ language. For that, you have to look at things like Design Patterns. A useful design pattern for talking to your siblings is the telephone. Call them up (they probably won't answer anyway) and leave a message:
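As a minimal sketch (all names hypothetical), such a "telephone" might look like the following, where siblings register with a hub by topic and never hold direct references to each other:

#include <map>
#include <string>
#include <vector>

// anyone who wants the telephone to ring implements this
class IMessageListener
{
public:
    virtual ~IMessageListener() {}
    virtual void OnMessage(const std::string& topic,
                           const std::string& payload)=0;
};

class MessageHub
{
private:
    std::map<std::string, std::vector<IMessageListener*> > listeners;

public:
    void Subscribe(const std::string& topic, IMessageListener* l)
    {
        listeners[topic].push_back(l);
    }

    void Publish(const std::string& topic, const std::string& payload)
    {
        std::vector<IMessageListener*>& v=listeners[topic];
        for (size_t i=0; i<v.size(); ++i)
            v[i]->OnMessage(topic, payload);  // the sender never learns
                                              // who answered the phone
    }
};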

Having Affairs Leads To Entanglement, Divorce, And Alimony

Messaging is an excellent way of avoiding entanglement, but it also requires an infrastructure in which synchronization, workflow, and timeouts have to be handled (and it lends itself to worker threads as well). For example, when the login process sends a message to the database to look up the user, it must wait for a response, or time out if no response is received within a certain amount of time. Other design patterns also decouple, or disentangle, objects from each other while working within the same thread, thus eliminating the need to handle synchronization and other related issues.

Entanglement results in the death of a design. When I look at an entangled application, I usually tell my client that it's easier to rewrite it than to refactor it. Entangled object relationships must be refactored early in order to rescue the project. The objects must be divorced from each other, and this usually involves a lot of expense on the part of both parties--lawyer fees, court room battles, etc. The best thing is not to get entangled in the first place. Using a messaging system, for example, maintains an acquaintance relationship rather than creating an intimate love affair. Yes I know, it's boring.

Philosophical Problems

Besides people and technical problems, there are a lot of philosophical problems with OOP, mostly related to the way the real world works.

Abstraction And The Loss Of Information

Abstraction is generalization, and when generalization occurs, information is not just moved, it's actually lost. One of the hazards of abstraction is that information that would normally be contained in the structure is lost as the structure is abstracted. Instead, all the information is in the data. While this makes the program more flexible (because it can handle different kinds of data), it also makes the objects more obtuse--how should they really be used?

Representational Abstraction

The ultimate representational abstraction in C# (and yes, in Java) is the object. In C++, it's the void pointer. An object can be anything, and since saying:

object o;

is not informative at all, the concept of "reflection" is necessary so that o can find out what it is:

Listing 16: Obtaining object type information
object o=new RoundBallMetrics();
Type t=o.GetType();
Console.WriteLine(t.ToString());

Answer: interfaceExample.RoundBallMetrics

Model Abstraction

Hierarchies create abstraction, but here the information loss isn't as critical because it's not so much the concept that's abstracted, but the container for the concept. The abstracted container still retains the information regarding what can be done with the object. For example, in this hierarchy:

the Window class is fairly abstract, but it can still contain methods to manipulate:

  • position
  • text
  • selection events
  • background color
  • border style
  • etc.

and thus the contextual information is preserved even if the container itself is abstracted.

Concept Abstraction

Conceptual abstraction is where the concept that you are implementing is abstracted. Instead of the abstraction being represented in a hierarchy, the abstraction is represented in a single container using collections to manage the properties and methods of the concept. This kind of an object can represent anything, including other objects. From the mundane, such as a Matrix class:

Listing 17: A Matrix class
public class Matrix
{
    private ArrayList columnList;
    private int cols;
    private int rows;

    public ColumnList this[int col]
    {
        get
        {
            return (ColumnList)columnList[col];
        }
    }
    ...
}

public class ColumnList
{
    private ArrayList rows;

    public object this[int row]
    {
        get
        {
            return ((Cell)rows[row]).Val;
        }
        set
        {
            ((Cell)rows[row]).Val=value;
        }
    }
    ...
}

which can be used to manage any two dimensional data (similar to a RecordSet) to the absurd (but still sometimes necessary):

Listing 18: Abstracting the concept of an object
public class AbstractObject
{
    public string name;
    public Collection methods;
    public Collection properties;
    public Collection events;
}

Conceptual abstraction loses all the information about the concept itself. While powerful, it's often inefficient. But sometimes it's inescapable. When I was working on a project to automate satellite designs, I was forced to implement a highly abstract model--the Category-Attribute-Unit-Type-Value (CAUTV) model--in order to fully capture the desired concepts:

In this model (actually, this is only a piece of a much more complex model involving assemblies, automatic unit conversions, components, etc.), a piece of equipment belongs to a category, such as a travelling wave tube amplifier (TWTA). The TWTA has attributes, such as mass, power, thermal loss, amplification, and noise. Each of these attributes is measured in a particular default unit, such as kilograms, watts, milliwatts, etc. Each unit might have one or more types--minimum, maximum, nominal, average, and so forth. A particular instance of the equipment is then composed of a value which combines the unit, type, and attribute. Because this information could not be quantified by the engineers (and in fact was different depending on equipment manufacturer), a very abstract model had to be constructed to support the requirements.
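A minimal sketch of how a fragment of the CAUTV idea might be rendered (these structs and fields are my own invention, not the actual model):

#include <map>
#include <string>

// a value binds together attribute, unit, and type...
struct Value
{
    std::string attribute;    // e.g. "mass"
    std::string unit;         // e.g. "kg"
    std::string type;         // e.g. "nominal", "maximum"
    double amount;
};

// ...and a piece of equipment is just a category plus a bag of values
struct Equipment
{
    std::string category;                       // e.g. "TWTA"
    std::multimap<std::string, Value> values;   // keyed by attribute name
};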

As should be fairly evident, this abstraction has resulted in the complete loss of contextual information. A model like this could just as easily be used to describe satellite equipment as it could to describe a fantasy role-playing character. It also becomes much harder to convey the application of the model. This kind of abstraction requires "explanation by example" in order to understand the context in which the model lives. Less abstract models communicate their intent through "explanation by definition".

Death By Abstraction

It is entirely possible to over-abstract. My rule of thumb is to consider a concrete concept, then consider the first two levels of abstraction. Usually only one level of abstraction suffices and is useful. Going beyond the first abstraction can lead to a lot of confusion regarding the appropriate application of an abstracted object. At its worst, I sometimes find myself abstracting an abstraction, only to realize that I haven't changed anything but the names of the classes!

There Is No Crystal Ball

Programming is a lot like predicting the future. I think there are two areas of prediction--the functional area and the implementation. Coming up with a feature set for an application is a prediction of what will be considered useful in the future. A statistic I came across once is that 90% of the people only use 10% of the features of a word processor, but that 10% is different depending on who you ask. So predicting the customer's needs, especially as the rest of the world is rapidly changing, is difficult. But the crystal ball has another purpose too: it has to tell the programmer how the code will be used in the future. So in reality, the programmer has to make educated guesses in two areas--what's important to the customer, and what's important to his/her peers. This second area I'll elaborate on a bit.

The Undiscovered Country

Writing code, whether the intention is to make it re-usable or not, requires predicting the future. There are many considerations when predicting how the code will be used, including design issues, implementation issues, and communication issues. Some of the things that might be considered when writing code:

  • Are the inputs validated?
  • Are the outputs validated?
  • Can I rely on the components to which I'm interfacing?
  • Have I considered all of the ways this code might be used?
  • Is the design and the implementation flexible enough to handle requirement changes?
  • Is it documented sufficiently so that I or someone else can figure out what I did?
  • Will the code survive the next leap in technology, both hardware and software?

A lot of these answers require making tradeoffs. Sometimes, the wrong tradeoff is made, but only in hindsight. What COBOL programmer would have thought that their code would still be in use at the turn of the century, when saving two bytes on a year record added up to a lot of saved disk space? How many old PCs (even by today's standards) will still be operational when the year 2038 comes around and the clocks all reset to 1970? Who could have predicted, when I wrote remote video surveillance software in DOS in 1987, that the Japanese police department would be our biggest customer and that they would want to program in Japanese? Unicode and 16 bit characters weren't even around, and Windows 3.1 wasn't an environment for high performance graphic applications! And how much of this code was re-usable when processor and graphic performance got good enough to migrate the code to Windows?

The Consumer

Predicting changes in technology and the lifetime of the application is one problem, but the other problem is predicting the needs of one's peers. Programmers are producers--they make things. Other programmers are consumers--they use things that other programmers have written. As a consultant, I'm like the snake eating its tail--I consume a lot of what I produce. Predicting what I'm going to need in the future and writing the code well enough so that what I write is useful to myself later on is hard enough! Imagine trying to do this for someone else. But that's what we constantly try to do. In part, the psychology behind this is that we'd like something of ourselves to live on after the project is completed (a sort of mini-death). It's also a challenge--can I write something that someone else will find useful enough to use in the future? And it's ego stroking--something I did is being used by someone else.

All of these factors cloud the fact that the consumer (the next programmer) wants her own legacy, her own challenge, her own ego stroked. So what happens? Old code is criticized, condemned, and complained about (the Dale Carnegie three C's), followed by enthusiastic "I can do it better" statements. Maybe it's true, and maybe it's not. But the fact remains that programmers, being an anti-social lot, are actually fierce competitors when it comes to implementation re-use.

Reality Bites

Besides the psychology of re-use, there's a lot of good reasons for why re-use fails. If you're wondering why I'm talking about re-use, it's because that's one of the "side-effects" that object technologies are supposed to help with--objects are re-usable, cutting down on development costs and debugging costs.

The Differences Are Too Great

There are really two layers of code--application specific and application generic. Re-use is going to happen only in the application generic area. When application code is highly entangled and/or has poor abstraction, the amount of re-use is going to be minimal because the objects are bound too tightly to the application-specific domain. This of course is not the fault of objects, but in and of themselves, they do nothing to prevent this. However, objects can be used to disentangle code, such as implementing design patterns, creating abstractions, and wrapping framework implementations such as MFC and .NET with proxies and facades. The approach that I have found that produces a high level of re-use involves componentizing the various objects and using a framework in which I write simple scripts to orchestrate the flow of information between all the components. This is the "Aspect Computing" concept in the diagram at the beginning of the article.

Concepts Do Not Translate To Implementation

Concepts do not translate well into implementation because the concept does not address the low level of detail that is required during implementation. For this reason, it's impossible to predict how much time an application will take to develop. Certainly it's possible to produce a highly detailed design, but another thing I've found over and over again is that implementation often reveals problems with the design. The design might look great on paper, but implementation shows otherwise. This extends beyond the design to the concept itself. On numerous occasions, I have discovered flaws in the concept because, during implementation, I better understood the problem that was really being solved. It can be argued that this is a result of inadequate communication, poor training, poor documentation, not enough time invested in explaining the concept, etc. This may be true, but I will counter-argue that when implementation begins, no matter how well the concept is defined and how well the design supports that concept, problems will occur.

Data Entanglement

Just as objects can be entangled with each other, entanglement occurs between the objects and the data that drives them. For example, in the AAL, there's a deep entanglement between the XML schemas (XSD) and the functionality of the components. Other dependencies exist--as CPian Brit pointed out, a dialog is dependent upon the dialog layout, the string table, bitmaps, and resource IDs. All of this results in the object being non-portable. Again, this is not the fault of objects. However, it deeply affects issues revolving around objects.

Project Constraints

This should be a no-brainer, so I'm just putting it in here for completeness. Sometimes re-use, disentanglement, abstraction, design patterns, refactoring, etc., all succumb to the constraints of a project, usually time and money. The tug on a project is represented by the above triangle of features, resources (time, money, people), and quality.

The Atlas Design

According to many of the world's mythologies, someone is holding up the earth in the heavens. For the Greeks, that someone is Atlas. Atlas can be seen as the framework in which the application design lives. Many applications don't have an encompassing framework, or they rely entirely on existing frameworks, such as MFC, COM, CORBA, ActiveX, and/or .NET, plus a myriad of others. In my experience, the concept of the "Atlas Design"--a meta-design on which the application is built--is rarely considered. The expertise, vision, or funding may be lacking. In one company the vision and funding were there, but the expertise wasn't, which resulted in a truly crippling framework that the programmers found all sorts of ways to circumvent because it was so difficult and annoying to use. In my other articles, I have written about the Application Automation Layer. The AAL is an "Atlas Design"--it is a framework on which all other frameworks can live. Essentially, it creates components out of those frameworks, and often creates components out of subsets of those frameworks.

Conclusion

OK, there isn't anything actually wrong with objects. They solve a particular problem, and they're pretty good at it. But perhaps we're expecting more solutions than objects have to offer. Or perhaps the field of software development is still so new that nobody really knows how to do it. It's pretty clear that the problem doesn't lie with objects, but with the people applying them and the tasks they are asked to perform. If I want to win the Indy 500, I can't do it with a VW Beetle. It's time to look at who's holding up the world.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
