
Architecture of a Remoting Framework

30 May 2014 · CPOL
This article presents important architectural decisions that must be considered if you want to create a really expandable remoting framework and, well, those decisions may apply to any kind of framework.

Background

I have already written remoting libraries and frameworks many times, in different languages. In general I must deal with very constrained environments that can't afford big libraries or much memory, or with performance-critical scenarios, and the most common frameworks usually aren't suitable for those environments.

I will not lie, my frameworks weren't always perfect (if such perfection really exists). Sometimes I was focusing on doing too much, effectively creating the wrong dependencies in the code. Sometimes I was focusing so much on the best performance that I was stripping too many functionalities from the solution. Now, considering that my last articles talked about architecture and about expandable solutions, I am going to present the architecture of a remoting framework that's really small and at the same time very expandable, being extremely useful for applications that need the minimum memory and performance overhead and also being perfect if in the future the requirements change and you need to use it for servers that may receive thousands of requests per second.

Important Note

In this article I will discuss the important decisions I've made, which allow the final solution to be expanded without requiring changes to the presented code.

Yet in this article I am not giving all the possible expansions or the code with the best performance, only the basic implementation. I consider this to be the best presentation for an article, as the purpose is not to explain how to achieve the best performance or how each expansion works, only to give hints on what's important so the solution can be "closed" for modifications and still allow lots of expansions.

So, if you are happy with WCF and don't want to try this solution because it is incomplete, well, I understand, but remember that the code presented here is not as complete as the code I use in production (though the parts presented here are already production-ready). Also, if you think it is simply a waste of time to write (or even understand) another remoting/communication framework because it will never be as complete as other solutions, well, I hope it is still useful for those interested in learning how it works, as the architectural decisions made here apply to many kinds of solution.

Defining the basic interfaces

In this first step I will only define the interfaces of the solution. We can say this is very similar to creating the class diagrams using UML but, well, when finishing the UML diagrams people usually start writing classes and I really want to keep things as interfaces. It is well known that we should write to interfaces, not to implementations, yet people usually understand interface as the public methods of a type, saying that we can reimplement those methods without changing the clients.

But I really want to give a closed solution that can be expanded or even replaced, so I want to give the interfaces first, which can be implemented or reimplemented without requiring any changes to the interfaces (or the library where they will be).

To do it, I start by thinking about the most basic usages I expect from the library. I expect users to do things like:

C#
ICalculator calculator = remotingClient.GetService<ICalculator>();
decimal result = calculator.Sum(55, 2);

That's assuming they actually have an ICalculator interface in their client application. If they don't have such an interface, I expect them to do something like this:

C#
object result = remotingClient.Invoke("Calculator", "Sum", 55, 2);

Based on this, I can easily extract an IRemotingClient interface like this:

C#
public interface IRemotingClient
{
  T GetService<T>();
  object Invoke(string serviceName, string method, params object[] parameters);
}

By having such an interface I can already write many different implementations. I am free to write a fake implementation that returns local objects and maybe uses reflection to call the local methods, and I can write implementations that communicate through TCP/IP or named pipes, that use XML serialization, binary serialization etc.
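
For instance, a minimal in-process fake along those lines could look like the sketch below. The registration dictionaries and the class name are purely illustrative (they are not part of the library), and overloads and generic methods are intentionally ignored, just to keep the idea visible.

C#
// An in-process sketch of the first IRemotingClient shape shown above.
// The registration dictionaries are illustrative only; they are not part of the library.
public sealed class LocalFakeRemotingClient:
  IRemotingClient
{
  private readonly Dictionary<string, object> _servicesByName = new Dictionary<string, object>();
  private readonly Dictionary<Type, object> _servicesByType = new Dictionary<Type, object>();

  public void Register<T>(string serviceName, T implementation)
  {
    _servicesByName[serviceName] = implementation;
    _servicesByType[typeof(T)] = implementation;
  }

  public T GetService<T>()
  {
    // No proxy is needed locally: the registered instance already implements T.
    return (T)_servicesByType[typeof(T)];
  }

  public object Invoke(string serviceName, string method, params object[] parameters)
  {
    // Reflection-based dispatch. Overloads and generic methods are intentionally ignored here.
    object service = _servicesByName[serviceName];
    return service.GetType().GetMethod(method).Invoke(service, parameters);
  }
}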

But having an interface as simple as this will not help implementers of the interface to write their classes in the right way. It would be very easy for someone to write an implementation that's TCP/IP only, that only resolves methods by their direct names (so no overloading) and that uses one specific serializer, like the BinaryFormatter, without any possibility of changing it.

We can of course consider this as an implementation error. The interface is only telling us that it needs those two methods, but what I want is something that already looks as expandable as possible. The actual interface only means I can have different implementations, but it doesn't say anything about expansions to existing implementations.

The other interfaces

I can already see that when I do a GetService call the code must create a proxy. That is, the code must create a local object that implements the given interface and then redirects to a remote server.

I can also see that to invoke a remote method the code must create some kind of message, must serialize it in some format (binary, xml etc) and must send it over some kind of communication protocol (tcp/ip, named pipes, etc).

I consider this to be a lot of responsibilities and the best thing to do is to delegate those responsibilities to their own interfaces. We don't want a remoting client to be responsible for the proxy generation, for the message serialization and for the transport. This would cause an "explosion of classes" if we needed all the combinations or, more probably, it would limit our solutions to only some of them, like a binary implementation that supports method overloading over named pipes or an XML implementation that doesn't support method overloading and works over TCP/IP.

So, the responsibilities I see are like this:

  • MessagePorts - Send/Receive a message;
  • Serialization - Transform a message to/from some kind of byte-array;
  • Transport - Send a byte-array to a remote address/receive data from that address;
  • Proxy generation/loading - When requesting a service, a local proxy must be returned, be it by loading a pre-compiled proxy library or by creating one at run-time;
  • Name provider - Discovering the remote name of a service from an interface must also be delegated. So, is it an ICalculator discovered by the name ICalculator, Calculator or by a name like "CalculatorService"?
  • The Remoting Client itself - Its responsibility is to create the message to be sent through the message port and process the result, delegating all the other jobs.

And, if you notice, I didn't say that the message ports send bytes. They send messages (objects). I am not forcing the implementations to first serialize the data and then use the message port, and I am doing that on purpose: for communications inside the same Application Domain we may simply hand the message object from one thread to another, without actually serializing anything.

Also, the serialization process can be implemented in many ways. Some serialization algorithms require a stream to be given; they write to that stream and, depending on the case, this actually sends the data directly. But that's terrible for TCP/IP, for example. For TCP/IP it is better to buffer the data before sending it (this can actually be done by using a decorator, but it is not the ideal form). The serialization can simply write things to memory and then return a byte-array. It could even write directly to unmanaged memory.

So, I am not forcing the message ports to use any of those alternatives, yet I am giving an interface for the version that I consider the most common one, the Memory Serialization. I consider it the most common one because remote method calls usually have a limited size. If we need to transfer 500 MB of data we should write our code to transfer small blocks at a time (so we can say that each request will be a call like GetBlock(blockNumber)).

I am not really forcing the message ports to use that interface but, if they do, I can actually write lots of message ports that use a standard way of serialization, without telling which serialization framework to use.

Comparing name providers to WCF attributes

At this moment I think it is very important to make a comparison to WCF. I am using an interface as the name provider. I am not using [OperationContract]s, [ServiceContract]s or similar attributes.

There are many reasons to do this:

  • Even if I did not show the server yet, services must be registered in the server, telling by which interface they are found. So, there's no need to put a [ServiceContract] on the interface as it needs to be explicitly registered;
  • If you don't want to publish all the methods that exist in an interface, you are always free to create another interface with fewer methods or to use a name provider that returns null names for those methods that should not be published, but those name providers are free to use any client-specific rule to do that, not only attributes;
  • Name providers also allow the clients to use the same service interface to connect to two or more servers that provide the same service but use different naming conventions. For example, one server may use prefixes in all methods while the other may use different prefixes or not use prefixes at all;
  • In the end, a name provider can still use those [OperationContract] and [ServiceContract] attributes, it only doesn't require them;
  • Logically speaking, interfaces are contracts. I don't see a reason to put a contract on a contract. I can say that I see putting an [OperationContract] on an interface method as bad as putting yet another attribute on an [OperationContract] method to say that it should be visible only to local connections or to external connections, like a [WanPublished] or [LanPublished] attribute.

My conclusion is that by delegating how the service and action names are discovered we are allowing client interfaces to be free of attributes and we are also making the framework more prepared to deal with different use cases, which is a good thing.

The client interfaces

So, here are the client interfaces as they stand at this moment:

C#
public interface IClientNameProvider
{
  string GetServiceName(Type serviceType);
  string GetActionName(MethodInfo method);
}

public interface IMemorySerializer:
  IDisposable
{
  byte[] Serialize(object data, int headerSize, out int offset, out int length);

  object Deserialize(byte[] buffer, int offset, int length);
}

public interface IMessagePortClient
{
  object SendMessage(object message);
}

public interface IRemotingProxyProvider
{
  Type TryGetServiceType(IRemotingClient remotingClient, object proxyInstance);

  object GetProxy(IRemotingClient remotingClient, Type serviceType);

  T GetProxy<T>(IRemotingClient remotingClient);
}

public interface IRemotingClient
{
  IClientNameProvider NameProvider { get; }

  object GetService(Type serviceType);
  T GetService<T>();

  object InvokeAction(string serviceName, string actionName, params object[] parameters);

  Type TryGetServiceType(object proxyInstance);
  // This is the only method I don't like in this interface,
  // but I think it is really useful and it allows me to
  // forbid users from using the ProxyProvider directly.
}

The Factories

All the interfaces will, at some moment, be implemented by classes. But I don't want to force the users to know those classes. In some applications, all the user may want to do is to say "I want a RemotingClient to connect to someAddress".

To do this, and also to help most implementations work properly, I use factories. The factory for the IRemotingClient looks like this:

C#
public static class RemotingClientFactory
{
  public static IRemotingClient Create(string remoteAddress)
  {
    return Create(remoteAddress, null, null);
  }
  public static IRemotingClient Create(string remoteAddress, IClientNameProvider nameProvider=null, IRemotingProxyProvider proxyProvider=null)
  {
    var messagePort = MessagePortClientFactory.Create(remoteAddress);
    return Create(messagePort, nameProvider, proxyProvider);
  }
  public static IRemotingClient Create(IMessagePortClient messagePort, IClientNameProvider nameProvider=null, IRemotingProxyProvider proxyProvider=null)
  {
    if (messagePort == null)
      throw new ArgumentNullException("messagePort");

    var handler = RemotingClientFactoryConfiguration.Delegate;
    if (handler == null)
      throw new InvalidOperationException("The RemotingClient factory is not configured correctly.");

    if (nameProvider == null)
      nameProvider = ClientNameProviderFactory.Create();

    if (proxyProvider == null)
      proxyProvider = RemotingProxyProviderFactory.Create();

    return handler(messagePort, nameProvider, proxyProvider);
  }
}

public delegate IRemotingClient RemotingClientFactoryDelegate
  (
    IMessagePortClient messagePort, 
    IClientNameProvider nameProvider, 
    IRemotingProxyProvider proxyProvider
  );

public static class RemotingClientFactoryConfiguration
{
  public static RemotingClientFactoryDelegate Delegate { get; set; }
}

With this approach we have a starting point. Users that want to start the communication can simply use RemotingClientFactory.Create() giving a remote address or, if they want more control, giving the proxy provider, a specific name provider or a message port (without actually having to give all of them), and they never need to know the type that actually implements the IRemotingClient interface.

Having such parameters also shows that the IRemotingClient implementations are expected to support them. We aren't really forcing the implementations to use them, but we show what a good implementation should support, which is our expansion guarantee. Someone could easily write a new message port and it will work with an already existing remoting client thanks to the contract provided by this factory.
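
To make the intended usage concrete, here is a hedged sketch of how an application might bootstrap the factory once and then create clients. The RemotingClient class used inside the delegate and the address format are assumptions standing in for whatever implementation and message-port factory you actually plug in.

C#
// One-time configuration, typically done by the implementation library or at startup.
// RemotingClient is a placeholder for whatever IRemotingClient implementation is plugged in.
RemotingClientFactoryConfiguration.Delegate =
  (messagePort, nameProvider, proxyProvider) =>
    new RemotingClient(messagePort, nameProvider, proxyProvider);

// From this point on, user code only deals with the factory and the interfaces.
// The address format depends on the configured MessagePortClientFactory.
IRemotingClient remotingClient = RemotingClientFactory.Create("localhost:7048");
ICalculator calculator = remotingClient.GetService<ICalculator>();
decimal result = calculator.Sum(55, 2);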

Analysing the actual design

At this moment we have most of our solution made of interfaces and some delegates. The only classes that we have are static classes that work as the configurable factories. Some people think that static classes are bad but, considering these are completely configurable through delegates, I can't consider them bad: we are only giving a starting point to our users, yet they are free to replace the implementation to create fakes for their tests or for any other reason they have.

I also divided the factories into two static classes because I expect users to use only the RemotingClientFactory frequently. The RemotingClientFactoryConfiguration must be configured only once during the entire application lifetime, so it should not be seen all the time.

This completely replaceable design is, in my opinion, already better than many big and widely known solutions, which actually start with some static factory that can't be configured, or with an actual implementation that has only some extension points but does most of the job (like the proxy generation) without letting users provide a better implementation.

Also, those interfaces can be seen as:

  • Respecting the Single Responsibility Principle (SRP). Considering that classes should implement only one of those interfaces at a time, they will have a very limited task. A change in the TCP/IP communication will not affect the class responsible for getting the names of the services or the one that generates the proxies;
  • Respecting the Open/Closed Principle (OCP). I don't need to modify these interfaces. I can give them in a compiled library, be it with a default implementation or not. Yet you will be free to add new message port types, to create your own naming rules etc. and use them with any implementation of a remoting client that uses those interfaces;
  • Respecting the DRY (Don't Repeat Yourself) principle. If I can create a new naming rule and apply it to all my already existing remoting clients, message ports etc., I am avoiding repetitive code (like creating separate interfaces and adapters only to respect different naming rules). Also, each "combination" of those interfaces would be an entire new implementation if I decided to put all the methods in a single interface, so I am avoiding an "explosion" in the number of implementations;
  • Respecting the KISS (Keep It Simple, Stupid) principle. Well, respecting the Single Responsibility Principle actually helps with this one, as each class does the "bare minimum" to work.

And there are probably many other principles this design follows. And, well, maybe it is violating some principles too... there are so many principles that some of them actually contradict each other, so it is impossible to correctly follow all of them.

Many things to consider (or not)

In the list of things to consider we have performance, thread-safety, lifetime management, stateless or stateful, asynchronous operations, reconnections and many others.

Performance

Making everything virtual (through the use of interfaces) always has some performance cost, but I can't say it is that problematic. There are many things that affect performance more drastically, like the kind of message port used, the serialization of the data, the kind of message that's generated (if we can replace a method name by a number we may serialize less data and make it faster to find the method on the other side) and things like that. In fact, each implementation can have different priorities (be faster or be more human readable, for example) and by making things virtual we allow a fast solution to be replaced by an even faster one in the future, so the little overhead of virtual calls is not a big deal.

Thread-safety

The interfaces don't say anything about thread-safety and I really believe they should not say anything about it. Simple clients (even big applications are sometimes simple clients) may never require remote access concurrently. So, non thread-safe implementations will work for this kind of application and they can be faster by not trying to deal with thread synchronization. Yet, if needed, many solutions for thread-safety may be applied. We can write decorators for the message port, effectively applying locks over the SendMessage calls (but in this case all the calls will be serialized, not parallel) or we can really have message ports capable of sending data in parallel.
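
For example, a locking decorator over the message-port interface as it stands at this point in the article (only the synchronous SendMessage) could be as small as the sketch below; the async member added later would need a similar or a smarter strategy. All calls are serialized through a single lock, so this is about safety, not parallelism.

C#
// A minimal locking decorator over the synchronous IMessagePortClient.
public sealed class LockingMessagePortClient:
  IMessagePortClient
{
  private readonly IMessagePortClient _inner;
  private readonly object _lock = new object();

  public LockingMessagePortClient(IMessagePortClient inner)
  {
    if (inner == null)
      throw new ArgumentNullException("inner");

    _inner = inner;
  }

  public object SendMessage(object message)
  {
    // Every call goes through the same lock, so calls are thread-safe but never parallel.
    lock (_lock)
      return _inner.SendMessage(message);
  }
}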

But the thread-safety must also exist on the server... I haven't explained the architecture of the server yet, but all services will simply be registered as "singletons" and it is the responsibility of each service to be thread-safe. Calls in parallel will simply be executed in parallel by the server, considering the messages can arrive in parallel.

Stateful or Stateless

Actually the GetService calls are expected to always return the same singleton instance, so we can say that this is mostly stateless. Yet it is still possible that calls to a service return stateful objects. If that happens, it will be the responsibility of a RemotingClient implementation to deal with it (or even the responsibility of a serialization implementation, as we may want to support returning a serializable object that has a reference to a stateful one... something complex, but possible).

So, to conclude, by default I really expect the services to be stateless and, as all the services are singletons, there's no lifetime management. As soon as a call ends, all the objects used during the call can be collected.

Connection losses and reconnections

There are two ways to deal with reconnections: either the applications recreate the remoting clients or the message ports must be implemented to deal with it.

I personally prefer a message port capable of reconnecting if needed. If we really have stateless calls, there's no problem in reconnecting after a connection loss.
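
One possible shape for that, again as a decorator and only as a sketch, is shown below. The portFactory delegate (how a fresh inner message port is obtained) and the choice of catching an IOException and retrying exactly once are my own illustration, not something the library mandates.

C#
// A sketch of a reconnecting decorator over the synchronous message port.
// portFactory is an illustrative delegate that knows how to build a freshly connected port.
public sealed class ReconnectingMessagePortClient:
  IMessagePortClient
{
  private readonly Func<IMessagePortClient> _portFactory;
  private IMessagePortClient _current;

  public ReconnectingMessagePortClient(Func<IMessagePortClient> portFactory)
  {
    if (portFactory == null)
      throw new ArgumentNullException("portFactory");

    _portFactory = portFactory;
  }

  public object SendMessage(object message)
  {
    if (_current == null)
      _current = _portFactory();

    try
    {
      return _current.SendMessage(message);
    }
    catch (IOException)
    {
      // Stateless calls make it safe to simply reconnect and retry once.
      _current = _portFactory();
      return _current.SendMessage(message);
    }
  }
}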

Async

The only really problematic subject is the asynchronous one. The interfaces presented so far have synchronous signatures. I really believe Async/Await Could Be Better if the signatures could stay synchronous but, as it works today, we need to change all the return types to make the methods asynchronous. So, should we support async calls?

If we simply change all the return types we will of course support async, but making everything async for no reason will degrade performance. Synchronous methods made async add overhead, as generating a task for a synchronous result is expensive (believe me, for some big loops it is a huge difference). Also, users that actually have dedicated threads may prefer to block the thread until the job is done instead of dealing with async calls. So, in the client, I see that we must support both sync and async calls.

On the other hand, I think servers must be asynchronous. In fact, if the server is expected to deal with only a few clients it may be faster to use synchronous calls with dedicated threads. But the server is more complicated, as the message port may be receiving messages synchronously or asynchronously and processing a message may also be either synchronous or asynchronous, so I will leave this problem for another moment.

My decision was to change the IRemotingClient interface to be like this:

C#
public interface IRemotingClient
{
  IClientNameProvider NameProvider { get; }

  object GetService(Type serviceType);

  Task<object> GetServiceAsync(Type serviceType);

  T GetService<T>();
  Task<T> GetServiceAsync<T>();

  Type TryGetServiceTypeFromProxyInstance(object instance);

  object InvokeAction(string serviceName, string actionName, params object[] parameters);
  Task<object> InvokeActionAsync(string serviceName, string actionName, params object[] parameters);
}

Which also made me change the IRemotingProxyProvider:

C#
public interface IRemotingProxyProvider
{
  Type TryGetServiceTypeFromProxyInstance(IRemotingClient remotingClient, object instance);

  object GetProxy(IRemotingClient remotingClient, Type serviceType);

  Task<object> GetProxyAsync(IRemotingClient remotingClient, Type serviceType);

  T GetProxy<T>(IRemotingClient remotingClient);
  Task<T> GetProxyAsync<T>(IRemotingClient remotingClient);
}

and the IMessagePortClient:

C#
public interface IMessagePortClient
{
  object SendMessage(object message);

  Task<object> SendMessageAsync(object message);
}

I know that there are too many "duplicated" methods. And in some cases we will have "duplicated" implementations (one calling sync methods and the other calling async methods) or one implementation redirecting to the other.

Some may believe it would be better to have the sync and the async APIs as completely separated things and, if needed, write adapters. But I really expect the remoting clients to be capable of dealing with both kinds of calls. If one call redirects to the other, it is their responsibility to deal with it.
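
Just to illustrate one of those redirections, a message port whose transport is naturally synchronous could expose its async member along the lines of the sketch below. Whether Task.Run is appropriate depends on whether the synchronous call really blocks on I/O; for calls that never block, wrapping the result with Task.FromResult is cheaper (this comes up again on the server side).

C#
// A sketch of a base class for message ports whose transport is naturally synchronous.
// Deriving classes implement only SendMessage; the async member redirects to it.
public abstract class SynchronousMessagePortClientBase:
  IMessagePortClient
{
  public abstract object SendMessage(object message);

  public Task<object> SendMessageAsync(object message)
  {
    // The blocking call is moved to the thread pool so the caller's thread is not held.
    return Task.Run(() => SendMessage(message));
  }
}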

The Implementation

Before even looking at the server, I want to present a little about the implementation.

For example, a very simple implementation of the IClientNameProvider could be like this:

C#
public sealed class ClientNameProvider:
  IClientNameProvider
{ 
  public string GetServiceName(Type serviceType)
  {
    if (serviceType == null)
      throw new ArgumentNullException("serviceType");

    return serviceType.Name;
  }
  public string GetActionName(MethodInfo method)
  {
    if (method == null)
      throw new ArgumentNullException("method");

    return method.Name;
  }
}

It will simply return the name of the interface and the name of the method. It will not try to deal with overloads, generics or anything like that.
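
The stream-based message port shown next also needs an IMemorySerializer. Just to illustrate that contract, a naive implementation over the BinaryFormatter could look like this sketch (the sample ships its own MemorySerializer; the point here is only how headerSize, offset and length are meant to be used):

C#
// A naive IMemorySerializer over the BinaryFormatter, only to illustrate the contract.
// The headerSize parameter reserves bytes at the start of the returned buffer so a
// message port can write a length prefix there without an extra copy.
public sealed class BinaryFormatterMemorySerializer:
  IMemorySerializer
{
  private readonly BinaryFormatter _formatter = new BinaryFormatter();

  public byte[] Serialize(object data, int headerSize, out int offset, out int length)
  {
    using (var stream = new MemoryStream())
    {
      // Reserve the header space first, then serialize the payload right after it.
      stream.Write(new byte[headerSize], 0, headerSize);
      _formatter.Serialize(stream, data);

      offset = 0;
      length = (int)stream.Length;
      return stream.ToArray();
    }
  }

  public object Deserialize(byte[] buffer, int offset, int length)
  {
    using (var stream = new MemoryStream(buffer, offset, length, false))
      return _formatter.Deserialize(stream);
  }

  public void Dispose()
  {
    // Nothing to release in this sketch; real implementations may pool buffers here.
  }
}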

For a simple TCP/IP message port we can implement it like this (to be more precise, it is an implementation over any Stream, which also works for TCP/IP):

C#
public sealed class MessagePortClientOverStream:
  IMessagePortClient
{
  private Stream _stream;
  private IMemorySerializer _serializer;
  private readonly object _disposeLock = new object();
  public MessagePortClientOverStream(Stream stream, IMemorySerializer serializer=null)
  {
    if (stream == null)
      throw new ArgumentNullException("stream");

    if (serializer == null)
      serializer = MemorySerializerFactory.Create();

    _stream = stream;
    _serializer = serializer;
  }


  public object SendMessage(object message)
  {
    if (message == null)
      throw new ArgumentNullException("message");

    int offset;
    int length;

    // Serialize the message reserving 4 bytes for the length prefix, fill that
    // prefix with the payload size and write everything in a single block.
    var bytes = _serializer.Serialize(message, 4, out offset, out length);
    BitConverterEx.FillBytes(bytes, offset, length-4);
    _stream.Write(bytes, offset, length);

    // Read the 4-byte length prefix of the response, then the response payload itself.
    _stream.FullRead(bytes, offset, 4);
    length = BitConverter.ToInt32(bytes, offset);
    if (length > bytes.Length)
      bytes = new byte[length];

    _stream.FullRead(bytes, 0, length);

    var result = _serializer.Deserialize(bytes, 0, length);
    return result;
  }

  private bool _inAsyncCall;
  public async Task<object> SendMessageAsync(object message)
  {
    if (message == null)
      throw new ArgumentNullException("message");

    if (_inAsyncCall)
      throw new NotSupportedException("This message port doesn't support two pending async calls. Await the first call before doing the second async call.");

    try
    { 
      _inAsyncCall = true;
      int offset;
      int length;
      var bytes = _serializer.Serialize(message, 4, out offset, out length);
      BitConverterEx.FillBytes(bytes, offset, length-4);
      await _stream.WriteAsync(bytes, offset, length);

      await _stream.FullReadAsync(bytes, offset, 4);
      length = BitConverter.ToInt32(bytes, offset);
      if (length > bytes.Length)
        bytes = new byte[length];

      await _stream.FullReadAsync(bytes, 0, length);

      var result = _serializer.Deserialize(bytes, 0, length);
      return result;
    }
    finally
    {
      _inAsyncCall = false;
    }
  }
}

Actually, what I considered the hardest part is the creation of the proxy. In fact, I already have a personal solution that implements interfaces at run-time and it is very fast, but the solution that comes with .NET, even if it is not as fast, is relatively simple to use, just not very well documented.

We must implement a class that inherits from RealProxy, with an Invoke method redirecting to IRemotingClient.InvokeAction, and then get a transparent proxy from it. The code to do this is the following:

C#
// Note: This is not the final implementation.
public sealed class RemotingProxy:
  RealProxy
{
  internal readonly IRemotingClient _remotingClient;
  internal readonly Type _serviceType;

  public RemotingProxy(IRemotingClient remotingClient, Type serviceType):
    base(serviceType)
  {
    _remotingClient = remotingClient;
    _serviceType = serviceType;
  }

  public override IMessage Invoke(IMessage msg)
  {
    // Personally I don't understand why the message comes typed as the base IMessage.
    var message = (IMethodCallMessage)msg;

    var serviceName = _remotingClient.NameProvider.GetServiceName(_serviceType);
    var actionName = _remotingClient.NameProvider.GetActionName((MethodInfo)message.MethodBase);
    var result = _remotingClient.InvokeAction(serviceName, actionName, message.InArgs);

    // In fact, if the method is async we should be using InvokeActionAsync, but
    // I will let you look at the real implementation by downloading the source code.

    return new ReturnMessage(result, null, 0, message.LogicalCallContext, message);
  }
}

And with such a proxy we can already create local implementations that redirect the calls to the remote server. It is enough to do something like:

C#
var realProxy = new RemotingProxy(remotingClient, typeof(ISomeInterface));
ISomeInterface objectToUseForRemoting = (ISomeInterface)realProxy.GetTransparentProxy();

And from that point on, simply use the objectToUseForRemoting through its interface.

And to implement the IRemotingProxyProvider, we do this:

C#
public sealed class RemotingProxyProvider:
  IRemotingProxyProvider
{
  public Type TryGetServiceTypeFromProxyInstance(IRemotingClient remotingClient, object instance)
  {
    var realProxy = RemotingServices.GetRealProxy(instance);
    var ourRealProxy = realProxy as RemotingProxy;

    if (ourRealProxy == null)
      return null;

    if (remotingClient != ourRealProxy._remotingClient)
      return null;

    return ourRealProxy._serviceType;
  }

  public object GetProxy(IRemotingClient remotingClient, Type serviceType)
  {
    var realProxy = new RemotingProxy(remotingClient, serviceType);
    return realProxy.GetTransparentProxy();
  }

  public Task<object> GetProxyAsync(IRemotingClient remotingClient, Type serviceType)
  {
    return Task.Factory.StartNew(() => GetProxy(remotingClient, serviceType));
  }

  public T GetProxy<T>(IRemotingClient remotingClient)
  {
    return (T)GetProxy(remotingClient, typeof(T));
  }

  public Task<T> GetProxyAsync<T>(IRemotingClient remotingClient)
  {
    return Task.Factory.StartNew(() => GetProxy<T>(remotingClient));
  }
}

Finally, the RemotingClient is only gluing things together... well, doing this is actually a lot of work, but at least it is not doing other things too. I will not show its full code here, so load the sample if you want to see it.
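
Just to give a rough idea of the kind of glue involved, the core of an InvokeAction implementation could look like the sketch below. This is not the code from the sample; in particular, RemotingInvokeMessage is a hypothetical message type invented for this illustration, and the real code also inspects the response for remote exceptions before returning it.

C#
// Purely illustrative; not the implementation shipped with the article.
[Serializable]
public sealed class RemotingInvokeMessage
{
  public string ServiceName;
  public string ActionName;
  public object[] Parameters;
}

public sealed class InvokeActionGlueSketch
{
  private readonly IMessagePortClient _messagePort;

  public InvokeActionGlueSketch(IMessagePortClient messagePort)
  {
    _messagePort = messagePort;
  }

  // The essence of what IRemotingClient.InvokeAction has to do:
  // build a message, hand it to the message port and return the result.
  public object InvokeAction(string serviceName, string actionName, params object[] parameters)
  {
    var message = new RemotingInvokeMessage
    {
      ServiceName = serviceName,
      ActionName = actionName,
      Parameters = parameters
    };

    return _messagePort.SendMessage(message);
  }
}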

The Server

I started the server by writing the IRemotingServer interface. My original idea was that the user would call it like this:

C#
remotingServer.RegisterService(typeof(IInterface), new RealImplementation());

So I immediately added such a register method to the interface. I also considered making that method generic, so the given instance could have its type validated at compile-time, but I don't like creating generic methods only to do validations, and I always like to have a non-typed solution because in some situations it is easier to do the call without knowing the type at compile-time.

Then, by using the same principle I used in the IRemotingClient interface I thought "can the user register a dynamic service?" I mean, instead of having an interface and an instance that actually implements it, should the user be able to build a service at run-time by simply giving the service name, the action name and a delegate?

In my opinion, it should be possible, simply because I don't know what the users of this library really need to do. Maybe the user is capable of building a service by reading a text-file with the method names and with very basic expressions and I don't want to force him to create real types at run-time only to do that. So I added the RegisterAction method to the interface.

Now the important thing is: How will the server start receiving requests?

I was torn between putting a ProcessMessage() on the interface and putting an AddListener() on it.

The AddListener() seems to be the most natural way of doing it. The ProcessMessage() would give me the possibility of simulating calls without actually having a client, so it is even more loosely coupled.

Yet not having an AddListener() seemed too strange to me, as it would simply look like the server class is not a server at all. So I decided I should put in an AddListener(), and with that a ProcessMessage() became unnecessary.

So the basic server interface ended up like this:

C#
public interface IRemotingServer:
  IRemotingDisposable
{
  void RegisterService(Type serviceType, object implementation);
  void RegisterAction(string serviceName, string actionName, RemotingAction action);

  void AddListener(IMessagePortListener listener);

  void RegisterImplementationSearcher(ServiceImplementationSearcher searcher, decimal priority);
  void RegisterActionSearcher(ServiceActionSearcher searcher, decimal priority);
}

The IMessagePortListener

On the server side of an IMessagePortClient we have both an IMessagePortServer and an IMessagePortListener. This is because a server can receive more than one connection from the same listener.

The listener is the class that, in my mind, should be asynchronous. I initially thought about putting in an AcceptAsync() method that would return a Task<IMessagePortServer>, but I abandoned that idea as it would put the responsibility of looping to accept connections on the caller.

So I did something even more asynchronous. To start, the listener receives a delegate to process the messages and an optional delegate to be informed when a client connects.

With such a processMessage delegate, it is not important when or how a message is received. Whether the internal implementation is synchronous or asynchronous, it simply needs to invoke the processMessage delegate and the job gets done.

But what if, to process a message, we end up calling an asynchronous method?

This thinking made me change the processMessage delegate to always return a Task<object> that could be awaited, so I declared it like this:

C#
public delegate Task<object> ProcessMessageHandler(IMessagePortServer server, object message);

Later I did some tests and it worked great when it was really async, but in my first tests it was too slow when the message port was implemented synchronously. The truth is that I was doing a bad implementation. The processMessage handler doesn't need to be async to be able to create tasks. By using Task.FromResult for synchronous calls I was able to get really good speed. To give some real numbers: by using an asynchronous message port I was already doing single-thread communications about twice as fast as with WCF... and with a synchronous one it became almost four times faster still (so, almost 8 times faster than WCF).
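
In code, the trick amounts to the sketch below. The handler matches the ProcessMessageHandler signature but never becomes async; DispatchSynchronously is a placeholder for whatever actually finds and executes the requested action.

C#
// A synchronous handler that still satisfies the Task<object>-returning delegate.
public static class SynchronousProcessingSketch
{
  public static Task<object> ProcessMessage(IMessagePortServer server, object message)
  {
    object result = DispatchSynchronously(message);

    // No async state machine and no thread-pool hop: the already computed
    // result is simply wrapped in a completed task.
    return Task.FromResult(result);
  }

  private static object DispatchSynchronously(object message)
  {
    // Placeholder: find the registered action for the message and invoke it.
    throw new NotImplementedException();
  }
}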

Note: Using a synchronous message port is not scalable, but if your requirement is a very fast local server it works great. And it is incredible how many servers actually only deal with local connections: it is very common in SOA architectures to have one server receiving external calls (so being asynchronous is a must) while the other servers only receive local connections from that server, so having one thread per connection is not that bad for those other servers.

The other details

In the server we have other details to take care of. What happens if, for example, a client invokes an action that's not present in the server?

Should we simply return an error? Well, if you look at the article Software Architecture, you will see that I consider that we should always give an event that allows the action to be completed instead of failing. Only if the event isn't capable of completing the action should we allow a failure.

And even if it may look like an implementation detail, I wanted to make it clear that the implementations should support invoking an action that was not previously registered. But the truth is: such an unregistered action may be the responsibility of the name provider to deal with, not of the IRemotingServer.

This can happen if, for example, we have a generic method that doesn't use the generic type in any of its parameters (a method like Get<T>() is an example; we can't infer the generic arguments from the usage). In this case, we may not register a valid action for every valid T, yet the name provider may be able to identify that a Get<int> is actually looking for the int version of the method Get<T>.

So, I decided that the name provider should be divided into client and server, effectively creating the IServerNameProvider, which must also be capable of returning a service type for an unregistered service name or of returning MethodInfos for unregistered actions. But, well, this should only work for types that the user tried to register and that may have alternative names, not for any type loaded in memory.

Finally, if the name provider does not find a result, there is still the last-chance event in the IRemotingServer, which can give an action for the requested actionName.

Note: The RegisterSearcher methods are what actually register those events. I will explain later why they are not normal events.

At this moment, the IRemotingServer and the IServerNameProvider interfaces are like this:

C#
public interface IRemotingServer
{
  void RegisterService(Type serviceType, object implementation);
  void RegisterAction(string serviceName, string actionName, RemotingAction action);

  void AddListener(IMessagePortListener listener);

  void RegisterImplementationSearcher(ServiceImplementationSearcher searcher, decimal priority);
  void RegisterActionSearcher(ServiceActionSearcher searcher, decimal priority);
}

public interface IServerNameProvider
{
  string RegisterServiceType(Type serviceType);
  string RegisterServiceMethod(Type serviceType, MethodInfo method);

  Type TryGetServiceType(string serviceName);
  MethodInfo TryGetMethod(Type serviceType, object serviceInstance, string actionName);
}

And the implementation, well, I will let you look at it in the zip file. I don't consider it hard, as each class does very little, yet it is not as easy as the client implementation.

The interfaces, the implementations and the libraries

I always consider it a bad thing to create too many libraries. In most projects, people prefer to put a single library in their application instead of 2, 3 or more libraries. In some cases developers even prefer to put a single .cs file in their projects to avoid a DLL reference, even if such a file has many classes inside it.

At the same time I know that some people hate having to reference a big library when they only want one or two classes of that library.

Personally, I see things this way:

  • Client applications don't need the server interfaces;
  • Server applications don't need the client interfaces;
  • Someone writing an implementation of all the client interfaces doesn't need the default client implementations;
  • Someone writing an implementation of all the server interfaces doesn't need the default server implementations.

And if I really respect that I will end up with 4 libraries (in fact, 5, as my implementations actually depend on another library).

But I decided to reduce this a little. The client and server interfaces are in the same library. The client and server implementations are in the same library. But the implementation and the interfaces are isolated in their own libraries.

This is really to show that if you don't like my actual implementation you can completely rewrite it... yet the clients can continue to depend on the interfaces and everything will work fine.

IRemotingDisposable

One thing that was missing until now was the "stopping" of the server, the "closing" of the message ports etc.

To support that it is extremely easy to implement the IDisposable interface, and I really think that all the important interfaces should be disposable.

This means that the IRemotingClient, the IMessagePortClient, the IRemotingServer, the IMessagePortListener and the IMessagePortServer should be disposable.

But, in particular when closing the message ports, I wanted to be able to detect that and also stop the remoting client attached to it, or to stop listening on a message port listener if it was disposed without actually disposing the server. I also know that many users may have their own reasons to be informed when that happens.

To do that I wanted an observable disposable, that is, an object that notifies when it is disposed. For a long time I have had the interface IObservableDisposable in my most basic library, yet I didn't want to reference it from these interfaces, as I understand other developers may not want to use my other library, and a simple reference to an interface would create the need to ship that library together.

So, to support that correctly I created the IRemotingDisposable interface, on which all the remoting interfaces depend. It is an IDisposable with an additional IsDisposed property and a Disposed event, so you can be notified when the object is disposed.

One important thing that the interface doesn't tell on its own is that if the object was already disposed, registering to the Disposed event must invoke the handler immediately instead of actually registering it.
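
In code, the interface and that "already disposed" rule could look roughly like the sketch below. The EventHandler event type and the base class are my assumptions for illustration; the actual interface and base implementation may differ in detail.

C#
public interface IRemotingDisposable:
  IDisposable
{
  bool IsDisposed { get; }
  event EventHandler Disposed;
}

// A sketch of how an implementation can honor the "already disposed" rule.
public abstract class RemotingDisposableSketch:
  IRemotingDisposable
{
  private readonly object _lock = new object();
  private EventHandler _disposed;

  public bool IsDisposed { get; private set; }

  public event EventHandler Disposed
  {
    add
    {
      lock (_lock)
      {
        if (!IsDisposed)
        {
          _disposed += value;
          return;
        }
      }

      // Already disposed: invoke the handler immediately instead of registering it.
      if (value != null)
        value(this, EventArgs.Empty);
    }
    remove
    {
      lock (_lock)
        _disposed -= value;
    }
  }

  public void Dispose()
  {
    EventHandler handler;
    lock (_lock)
    {
      if (IsDisposed)
        return;

      IsDisposed = true;
      handler = _disposed;
      _disposed = null;
    }

    OnDispose();

    if (handler != null)
      handler(this, EventArgs.Empty);
  }

  protected virtual void OnDispose()
  {
  }
}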

In the implementations, as I am actually using my base library, I support both the IRemotingDisposable and the IObservableDisposable that comes from that library, but when using this framework in your own code you don't need to reference my other library at all if you don't want to.

Note: The Pfz library is now very small. It is only a base class library that gives support for Managed Thread Synchronization, some useful collections (like the ThreadSafeHashSet and the ThreadSafeDictionary, which is similar to the ConcurrentDictionary but without its problems and with some more methods [it is presented in Dictionary + Locking versus ConcurrentDictionary]) and the ReflectionHelper class, which is very useful if you want to create very fast delegates from MethodInfos. All of those are used by the actual implementation.

Sync over async, Async over sync

The message ports support both sync and async calls, but how do we invoke methods that have a sync signature as if they were async (or the opposite)?

Well, the first solution is to use the InvokeActionAsync method. Unfortunately, in this case you must know how the serviceName and actionName will be encoded and you must also fill the right number of arguments without intellisense support.

I was thinking about adding another helper class to invoke the asynchronous methods by giving an expression. I even made it work, but I didn't like the result: creating and parsing the expressions for every call is a big performance hit.

The best solution I can think of is to create another interface with the same name but inside another namespace, and with all the same method signatures, only changing the return types from void to Task and of other types to Task<ThatOtherType>.

So, by creating an ICalculator like this:

C#
public interface ICalculator
{
  Task<decimal> Sum(decimal a, decimal b);
}

We will be able to get a service for it and then do a clean async call like this:

C#
await asyncCalculator.Sum(55, 2);

The only important thing is that you should use a name provider that's capable of generating the same names for the sync and the async interface.

As the default one only uses the type name, not the namespace, creating an interface with the same name inside an Async namespace will work. But you may also use a name provider that removes the Async suffix from type names, so you can put both interfaces in the same namespace, or even one that removes the Async suffix from method names, so you can put both sync and async methods in the same client interface while they still call the same methods on the server.
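
Such a name provider is only a small variation of the ClientNameProvider shown earlier. Here is a sketch (the exact trimming rules are up to you):

C#
// A sketch of a name provider that maps async interfaces and methods to the
// same remote names as their synchronous counterparts.
public sealed class AsyncAwareClientNameProvider:
  IClientNameProvider
{
  public string GetServiceName(Type serviceType)
  {
    if (serviceType == null)
      throw new ArgumentNullException("serviceType");

    return RemoveAsyncSuffix(serviceType.Name);
  }

  public string GetActionName(MethodInfo method)
  {
    if (method == null)
      throw new ArgumentNullException("method");

    return RemoveAsyncSuffix(method.Name);
  }

  private static string RemoveAsyncSuffix(string name)
  {
    const string suffix = "Async";

    if (name.EndsWith(suffix, StringComparison.Ordinal))
      return name.Substring(0, name.Length - suffix.Length);

    return name;
  }
}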

The "search events"

The events in this library are very peculiar. Normal events use the event keyword, which internally is composed of add and remove methods; they don't have priorities; and, if we follow the Microsoft guidelines, an event handler starts with an object sender parameter.

So, why am I not doing this? My events are actually achieved through those RegisterSearcher methods, the handlers don't receive a sender, the args don't inherit from the EventArgs type and I use priorities. Doesn't it seem like a big violation of the guidelines?

Well, they are violating the guidelines, but I was considering many other factors. For example, most events are notification-only events: they don't expect results to be generated, so the order of execution of the handlers is not important.

But the searchers are a special kind of event. They should try to provide a result. And considering it is possible to register a generic searcher during initialization, I must accept that someone may want to register a more specific searcher after that, yet have it run first. Simply executing them in the reverse order of registration is not useful either... I don't really know whether the registration that came later is more specific than the first one.

So my solution was to use priorities.
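
To illustrate what the priorities buy us, the registration and the lookup can be as simple as the sketch below. The generic searcher delegate is a simplification of the real ServiceImplementationSearcher and ServiceActionSearcher delegates, and the convention that lower values run first is my own assumption for this sketch.

C#
// A sketch of how priority-ordered searchers can be stored and invoked.
public sealed class SearcherCollection<TInput, TResult>
  where TResult: class
{
  private readonly List<KeyValuePair<decimal, Func<TInput, TResult>>> _searchers =
    new List<KeyValuePair<decimal, Func<TInput, TResult>>>();

  public void Register(Func<TInput, TResult> searcher, decimal priority)
  {
    if (searcher == null)
      throw new ArgumentNullException("searcher");

    _searchers.Add(new KeyValuePair<decimal, Func<TInput, TResult>>(priority, searcher));

    // In this sketch, lower priority values run first, so more specific
    // searchers should register with lower values.
    _searchers.Sort((a, b) => a.Key.CompareTo(b.Key));
  }

  public TResult TrySearch(TInput input)
  {
    // The first searcher that returns a non-null result wins.
    foreach (var pair in _searchers)
    {
      var result = pair.Value(input);
      if (result != null)
        return result;
    }

    return null;
  }
}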

Why no sender parameter

I use interfaces because I expect the implementations to be replaced and decorated. But what happens when I give an event handler to a decorator?

The decorator may pass the event handler directly to the real object. But, if it does that, then when the event is raised the sender will not be the decorator, it will be the real instance. We can consider this the right thing (as it really is the sender), but it may be strange to register the handler on a thread-safe decorator and receive the thread-unsafe implementation as the sender.

Of course, the decorator may be implemented differently, effectively registering itself on the event of the decorated object and exposing its own event, so that when the inner event is raised the decorator intercepts it and raises its own event, correcting the sender. But if we do this we are making the creation of decorators much harder, as they would have to deal with priorities too. Which priority should the decorator's own handler have? If the decorator puts its own handler in the main object, which can already have handlers with priorities, it must use the right one.

I can of course write a decorator per handler, but instead of complicating things I decided it is better to avoid the sender. If having a sender is really important, users can always create another object that has a reference to the right "sender" and then register a method of that object as the handler, which will then have access to the "sender".

Why no Unregister methods?

Personally, I think Unregister methods cause too much trouble for almost no benefit. For example, suppose I register a searcher that is executed, gives a result (which is cached) and is then unregistered. Should the cached result be removed too? And, if the answer is yes, how do we control that?

Even if I know how to do it, should we complicate the code for a situation like that? Honestly, how often will someone want to unregister a searcher?

So I can say that there are two ideas here. One is to make the library complete. As a library, we never know the needs of our users; they may always want more. And I can imagine the manager of a server deciding to unregister some services for some time.

On the other hand, I am trying to give the interfaces first, trying to help other developers do something that works with minimum effort, so I don't want to complicate things. So, I can say that I am following the YAGNI (You Aren't Gonna Need It) principle. Unregistering things is too uncommon and we can easily make a work-around by creating a new object (like a new RemotingServer) and registering all the needed items except the one we want to "remove". Of course this will force everyone to be disconnected instead of only affecting the users that actually use the removed service, but it is a work-around, not a solution.

Finally, we can always create yet another interface with the extra methods and make an implementation that actually implements those extra methods. So, in the end, I think it is a matter of choice and my choice was to make things simpler.

Sample

The sample is a very simple application divided into a server with a Calculator service and a client that actually uses that ICalculator interface thousands of times, in synchronous mode and in asynchronous mode, together with a version of those tests using WCF.

Depending on how you run this sample you will see WCF running faster than my implementation. But, honestly, the code I am giving with this article is not optimized at all; it simply implements the interfaces to be functional and easy to understand, not to be fast. By simply replacing the .NET serialization with another binary serialization it is possible to make it much faster (if you look at the MemorySerializer source code you will see that I commented out some code that used another serializer).

The really important point of the sample code is to prove this framework is working and to give you a starting point to test it. By looking at it you will notice that the ICalculator doesn't have any attributes while the WcfCalculator needs them. Also, even though I didn't use it in the sample, you can register entire services by using the RegisterAction() method, giving a service name and an action name together with a delegate, and you can do the calls by using InvokeAction or InvokeActionAsync directly from the RemotingClient. That's something I don't know how to do easily in WCF.

Can I use this library to replace WCF?

As I said more than once, the implementation I am giving with the article is very simple, without any kind of optimization. I really believe it is ready for production, but only for some very limited scenarios.

Considering it doesn't have time-outs, reconnection support, support for calling methods from multiple threads or any kind of security, I can only say that it is useful for local communications, and you must create one remoting client per thread if you need to use it from many threads. It can surely be improved to deal with more complex scenarios and it can outperform WCF with the right implementations, but you must give it a try before using it in production and you should also understand the interfaces of the main library. I am sure it will be at least interesting to see how those simple interfaces can give a lot of possibilities.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)