One interesting observation about computer programs is that something always happens behind the scenes during object access: any access to an object inevitably entails the execution of some code which is not part of the program itself. Object access is therefore necessarily indirect, because it needs some intermediate procedure which is responsible for finding the real location of the represented object.
For example, if we have a reference to a bank account object of class Account, then it can be accessed as usual by calling its methods:
Account account = findAccount("Alexandr Savinov");
double balance = account.getBalance();
Here one method invocation is responsible for accessing this object and getting the current balance. Although we have the illusion of an instantaneous action and direct access, it is wrong to think that getting the balance is a primitive operation executed immediately after the method has been called. In fact, if it is a Java program, then the JVM will have to resolve this reference into a memory handle, which in turn will have to be locked before a physical address is obtained. If it is a remote reference, then the request will have to be serialized and transferred over the network using some protocol. It might also happen that this reference represents a persistent object, in which case its state will have to be retrieved from some storage. There are many other examples and use cases, but the main observation is that
There is always some intermediate code executed implicitly behind the scenes during object access, and this code is referred to as the dark matter of the program
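To make this hidden layer more tangible, here is a minimal, illustrative sketch in Java (the Account interface, the InMemoryAccount class and the wrap method are hypothetical names invented for this example). It uses the standard java.lang.reflect.Proxy mechanism to insert an intermediate step into every call to an account, while the caller still writes what looks like direct access:

import java.lang.reflect.Proxy;

interface Account {
    double getBalance();
}

class InMemoryAccount implements Account {
    private final double balance;
    InMemoryAccount(double balance) { this.balance = balance; }
    public double getBalance() { return balance; }
}

public class DarkMatterDemo {
    // Wraps an account so that every method call first passes through intermediate code.
    static Account wrap(Account target) {
        return (Account) Proxy.newProxyInstance(
                Account.class.getClassLoader(),
                new Class<?>[] { Account.class },
                (proxy, method, args) -> {
                    // Invisible to the caller: this is where a reference could be resolved,
                    // persistent state loaded, a lock acquired, or a request serialized.
                    System.out.println("intermediate step before " + method.getName());
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        Account account = wrap(new InMemoryAccount(100.0));
        double balance = account.getBalance(); // looks like direct access to the caller
        System.out.println(balance);
    }
}

Running this prints the intermediate message before the balance is returned; from the point of view of the business logic, however, only account.getBalance() was called.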
Why is it called the dark matter of the program universe? Because it is undetectable, and we know about its presence only indirectly by observing its effects. The presence of dark matter is inferred from the necessity to explain some "missing functionality": we see that the program produces some effect, but the program code has no instructions which could explain it. Moreover, even if we know that it really exists, we are still not able to affect this layer of functionality using traditional programming approaches. We cannot choose the moment in time when this code will be activated, and we are not able to influence what precisely happens during access. This layer is completely out of our control. And here we come to an interesting but rather pessimistic conclusion:
There exists a layer of functionality which cannot be changed or replaced in OOP
It should be noted that, of course, we can influence how object access works by changing the environment of the program, such as the compiler, run-time environment, libraries, operating system or hardware platform. This will not change the explicitly defined behaviour of the program (its business logic), but it will change how references are represented, how objects are accessed and other aspects that cannot be changed from within the program itself. In other words, by putting our program in different environments we can change the implicitly executed intermediate functions -- the dark matter.
So what is the problem? The problem is that we would like to be able to influence the dark matter from the same program, and for that purpose we need a programming approach which treats the dark matter as an integral part of that program. We would like to be able to develop our own domain-specific dark matter which is adapted to the purposes of this specific program and this specific application. Why is it important? Because we accept the following postulate:
Hidden intermediate functions executed during object access are as important as normal, explicitly invoked object methods, and they can account for a great deal of, or even most of, the program complexity
If this assumption is true, then the focus of programming moves to developing intermediate functions rather than end object methods. In other words, the paradigm shift is that what happens in between becomes more important than what happens at the end (of a method invocation). Any method invocation then turns into a sequence of intermediate actions where the method itself is only the last step. Here are some well-known examples of functionality which should belong to a different layer, invisible from the main layer of business logic, and therefore be treated as dark matter:
- Persistence layers. We want to manipulate objects as if they were normal objects but change the way their state is managed.
- Object containers. We want to be able to develop application-specific containers where objects will live, while retaining the possibility to access them as if they were directly accessible objects (a simplified sketch of this idea follows the list).
- Custom memory managers. Why use the standard heap provided by the OS if we have a number of application-specific requirements for memory management? It would be much more productive to develop our own memory manager which is adapted to the objects of this and only this program.
- Domain-specific compilers. We want to change the way references and other constructs in our program are implemented. For that purpose we could change the compiler, but this is normally either impossible or very difficult. It would be interesting to be able to develop a compiler/interpreter as an integral part of the same program.
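As a deliberately simplified illustration of the first two items, the following hypothetical Java sketch (the names Container, Ref and ContainerDemo are invented for this example) shows a container that hands out its own reference objects; every access then goes through the container's code rather than through a direct Java reference, so the container could transparently load state from storage, check permissions or collect statistics:

import java.util.HashMap;
import java.util.Map;

class Container<T> {
    private final Map<Integer, T> objects = new HashMap<>();
    private int nextId = 0;

    // A custom reference: the program holds this instead of a direct Java reference.
    class Ref {
        private final int id;
        Ref(int id) { this.id = id; }
        T get() {
            // Intermediate access procedure: the container could load the object
            // from disk, lock it, check access rights or gather statistics here.
            return objects.get(id);
        }
    }

    Ref add(T obj) {
        int id = nextId++;
        objects.put(id, obj);
        return new Ref(id);
    }
}

class ContainerDemo {
    public static void main(String[] args) {
        Container<String> owners = new Container<>();
        Container<String>.Ref ref = owners.add("Alexandr Savinov");
        System.out.println(ref.get()); // access is mediated by the container
    }
}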
To solve these and many other problems, a new programming paradigm has been developed -- Concept-Oriented Programming (COP). Within this approach, the dark matter plays a central role and is made an integral part of the program. In COP, the programmer is not only able to influence what happens behind the scenes; it is his main concern. In other words, the main task in COP is to create an application-specific environment where objects will live. This includes such functionality as object representation (custom references) and object access (custom access procedures). The functionality of this domain-specific environment is triggered implicitly by injecting the necessary functions into all internal objects.
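COP introduces its own constructs for this purpose (see below), but the underlying idea can be roughly approximated in plain Java. In the following hypothetical sketch (AccountRef, AccountObject and AccountStore are invented names, and this is not COP syntax), the program defines both how an object is represented -- by an account number rather than a raw Java reference -- and what code runs on every access:

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

class AccountObject {
    double balance;
    AccountObject(double balance) { this.balance = balance; }
}

class AccountStore {
    static final Map<String, AccountObject> storage = new HashMap<>();
    static AccountObject load(String number) { return storage.get(number); } // could fetch from disk, lock, ...
    static void release(AccountObject obj) { /* could flush state, unlock, log, ... */ }
}

final class AccountRef {
    private final String accountNumber; // custom representation instead of a raw Java reference
    AccountRef(String accountNumber) { this.accountNumber = accountNumber; }

    // Custom access procedure: every use of the referenced object passes through this code.
    <R> R access(Function<AccountObject, R> continuation) {
        AccountObject target = AccountStore.load(accountNumber);
        try {
            return continuation.apply(target);
        } finally {
            AccountStore.release(target);
        }
    }
}

With such a reference, a call like ref.access(acc -> acc.balance) implicitly runs the resolution and release steps around the actual work, which is exactly the kind of injected, implicitly triggered functionality described above.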
It is important that COP generalizes OOP by retaining its main principles and features. COP introduces a new programming construct, called a concept, which generalizes conventional classes. Concepts in COP exist within a hierarchy defined by means of an inclusion relation which generalizes classical inheritance. It is also important that COP allows for describing cross-cutting concerns in a manner which is different from AOP (and can even be said to be the opposite of AOP). What is more, COP is the basis for a new data model -- the Concept-Oriented Model (COM) -- which significantly decreases the incompatibility between conventional data models and programming models (the impedance mismatch).
Links: