
Parrots, A record and replay Mocking/Stubbing Library

Parrots is a record and replay mocking/stubbing library introducing a new concept in the mocking area.

Introduction

Parrots is a library introducing a (possibly) new "record and replay" concept for mock/stub objects in unit testing. It is written in C# on .NET Framework 4.0 and has been tested with C# samples only. It does not use any feature of the .NET Framework that was unavailable in version 3.5, but I am considering dynamic typing for some future features, so I will probably stick with 4.0.

Background

Let's suppose I cannot do TDD for whatever reason, and for the same reason I end up writing "unit tests" that inject "true" dependencies into my "system under test" (SUT). We all know these are not real unit tests, but they happen to be a good development tool anyway, and I don't see why I should not use them. Should I, at some point, convert them to true unit tests by mocking the concrete dependencies with a good mocking library like Rhino Mocks? Probably yes, but again, maybe I have no time and, honestly, writing mocks is often a long, tedious, complex and error-prone activity, even more so if I do not really care about mocks but only about supplying my SUT with stable and predictable dependencies. I would just like to have stubs, forget about them and have my tests go green...

That's how Parrots gets involved: I go on writing "integration tests" that talk to the concrete dependencies I already have available, but at some point I tell my tests, through Parrots, to "record" the conversations happening between the SUT and its dependencies. As soon as I do, the tests switch to the recordings Parrots made for me: from then on they stop talking to the concrete implementations and use the "imitations" instead. Now you know where this curious name comes from... :)

The first time you execute such a test, the SUT talks to the concrete dependencies I configured with my preferred IoC container, and Parrots records whatever information they exchange. The use of an IoC container is very important, and so far it is mandatory, because Parrots relies on the interception features that all good IoC containers supply. This part may evolve in slightly different directions, making IoC containers optional, but that is not the case yet; since inversion of control is always a good thing to do, I hope this is not a big problem for potential users. Right now Parrots talks to Castle Windsor only, but it would be nice to support more containers.

Nothing new so far. But when we execute our test again, without changing the unit test code, Parrots steps in and redirects the dialogue between the SUT and its dependencies to "somewhere else", where the recorded information is retrieved and sent back to the SUT transparently. The concept is similar to a mock, except that the exchanged information is recorded and replayed. I use the term "mock" because it is widely known and understood, but technically it might not be the best way to describe this technique: "mock objects" are normally programmable and are expected to report how they have been used, which Parrots does not do. It would probably be better to call them "stub" objects, but I fear that does not communicate as much as the word "mock" does. Anyway, the Parrots approach is slightly different from both stubs and mocks, so a new term should probably be used... "parrot objects" would work :)

Dissecting Parrots...

Now that you have an idea of what it is all about, let's go deeper and understand what Parrots can do, and how, by inspecting a sample unit test:

C#
[Test]
public void SimpleStringTest() { 
    var imitation = Trainer.Aviary
        .AddParrotFor<IAmSomeDependency>()
        .And.AddParrotFor<IAmSomeOtherDependency>()
        .Finally.RecordOrReplay();

    var sut = new ToBeTested(imitation.Resolve(), imitation.Resolve()); 
            
    var output = sut.Process("repeat", "wasp"); 
            
    Assert.AreEqual("waspwasp", output); 
            
    imitation.ReleaseParrots();
}

At the beginning of our test, we can see a "fluent" call creating an imitation, which is a sort of working session where configuration information is stored and from which we can get parrots. Reading it line by line, we have a parrot trainer owning an aviary, and for every dependency (interface) he needs to imitate he just adds a parrot to the aviary (the AddParrotFor() calls). As soon as the trainer is done adding parrots, he finishes the configuration step by returning an imitation through the RecordOrReplay() call. Behind the scenes, several objects are created to keep track of the parrot configurations and to give us a fluent interface, and the final method returns the collector of all this information (the imitation).

Record

Now we are ready to proceed with our test as if we were dealing with a "classic" unit test. We instantiate our "system under test" (SUT) and supply it with its dependencies (the Resolve() calls). As already mentioned, the Parrots concepts are completely based on the "inversion of control" and "dependency injection" principles, and the currently available implementation lives on top of an IoC container. Today it works with Castle Windsor only, but there is no hard-coded dependency on it: the bridge is loaded dynamically through an ad hoc assembly and configuration directives:

XML
<facilities>
    <facility id="parrots" type="Parrots.Bridge.Castle.ParrotsFacility" />
    <facility id="ms-parrots" type="Parrots.Bridge.Castle.NUnitTestsParrotsFacility" />
    <facility id="mocking-parrots" 
      type="Parrots.Bridge.Castle.ParrotsMockingFacility" from="Parrots.Bridge.Rhino" />
</facilities>

The first, and most important, directive injects into Parrots the right interception engine, which leverages Castle Windsor and Castle DynamicProxy; the second one tells Parrots which unit test framework I am using; the last one configures the appropriate "true" mock generation engine (see the last section of this document). I hope to support more IoC containers soon.

Back to the Resolve() calls: we use our imitation in a way similar to what we do with a "mocks repository", asking it for available implementations of the dependency contracts our SUT needs. Let's suppose it is the first time we launch our test: what does our imitation do? Well, for each requested interface, it looks for a corresponding configuration built through the AddParrotFor() calls; if it does not find any, it simply lets the IoC container do the work for us, becoming "transparent". But if it does find a configuration, it creates a corresponding parrot, which is the truly interesting actor here. Like a real parrot, it just sits there side by side with the needed dependency, listening to every single conversation happening with the SUT and recording it. Technically, an "interceptor" is put in place, connecting the dependency and its imitating parrot, which takes note of everything. There are more technical details that should be discussed, along with still missing things, but I want to keep it simple here.
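
To give an idea of what such an interceptor may look like, here is a minimal, conceptual sketch built on Castle DynamicProxy's IInterceptor interface, which Castle Windsor exposes through its interception facilities. This is not the actual Parrots code, and the RecordedCall type is just an illustrative container:

C#
using System.Collections.Generic;
using Castle.DynamicProxy;

// Illustrative container for one recorded step of a conversation.
public class RecordedCall
{
    public string MethodName { get; set; }
    public object[] Arguments { get; set; }
    public object ReturnValue { get; set; }
}

// Records every call flowing through the proxy while still letting the true
// dependency answer (a real implementation would also clone the arguments and
// handle out/ref parameters, exceptions and so on).
public class RecordingInterceptor : IInterceptor
{
    private readonly List<RecordedCall> _calls = new List<RecordedCall>();

    public IEnumerable<RecordedCall> Calls { get { return _calls; } }

    public void Intercept(IInvocation invocation)
    {
        // Let the concrete dependency do its job...
        invocation.Proceed();

        // ...and take note of what was asked and what was answered.
        _calls.Add(new RecordedCall
        {
            MethodName = invocation.Method.Name,
            Arguments = invocation.Arguments,
            ReturnValue = invocation.ReturnValue
        });
    }
}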

After the SUT setup phase, the test goes on with the usual code: calls to the SUT and asserts. It is in this phase that the parrots listen to and record the conversations, and the phase ends when the ReleaseParrots() method is called on the session. That method "frees" our parrots, but just before releasing them it gathers all the interesting information they recorded and persists it in binary files with a specific structure, naming convention and location. Those details are hard-coded and convention-based so far, but they will become more configurable soon. Please notice that the using idiom based on IDisposable is also supported, where the Dispose() call does the same as ReleaseParrots(); you can check the unit test code for samples using it.
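
For instance, based on that IDisposable support, the first sample test could be rewritten along these lines (a sketch assuming, as described above, that disposing the imitation session is equivalent to calling ReleaseParrots()):

C#
[Test]
public void SimpleStringTestWithUsing() {
    // Dispose() calls ReleaseParrots() for us at the end of the block.
    using (var imitation = Trainer.Aviary
        .AddParrotFor<IAmSomeDependency>()
        .And.AddParrotFor<IAmSomeOtherDependency>()
        .Finally.RecordOrReplay())
    {
        var sut = new ToBeTested(imitation.Resolve(), imitation.Resolve());

        var output = sut.Process("repeat", "wasp");

        Assert.AreEqual("waspwasp", output);
    }
}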

Replay

OK, we are done with recording, and we decide to launch our test again. The start-up phase is the same, but when we get to the Resolve() calls something different happens: the session checks whether recordings are available for each parrot, and if it finds them it uses them to build parrots that "already learned their lesson" and are able to repeat what they heard before. The true dependencies are still created, but the method calls are replaced by "proxy calls" based on the recorded information, completely replicating the behavior the dependencies showed when we launched the test the first time. So, from the second execution of a test method on, its dependencies are not executed anymore: our "integration test" has become a true "unit test"! I would like to find a way to avoid the concrete instantiation of the dependencies; this is an issue Parrots currently has, and it might be a problem in scenarios where dependencies do "stuff" in their constructors. I will work on it in the next weeks; I already have a couple of ideas and just need some time to implement them.
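
Continuing the earlier sketch, the replay side can be imagined as an interceptor that, instead of proceeding to the real dependency, takes the next recorded call and short-circuits the invocation. Again, this is a conceptual illustration rather than the actual Parrots code:

C#
using System;
using System.Collections.Generic;
using Castle.DynamicProxy;

// Replays a previously recorded sequence instead of executing the dependency.
public class ReplayingInterceptor : IInterceptor
{
    private readonly Queue<RecordedCall> _recordedSequence;

    public ReplayingInterceptor(IEnumerable<RecordedCall> recordedSequence)
    {
        _recordedSequence = new Queue<RecordedCall>(recordedSequence);
    }

    public void Intercept(IInvocation invocation)
    {
        var expected = _recordedSequence.Dequeue();

        if (expected.MethodName != invocation.Method.Name)
            throw new InvalidOperationException(
                "The call sequence differs from the recorded one: record the test again.");

        // Short-circuit: invocation.Proceed() is never called,
        // the recorded answer is handed back instead.
        invocation.ReturnValue = expected.ReturnValue;
    }
}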

A couple of considerations might be useful:

  • Conversation recording is not that simple, and conversations can be very complex in terms of parameter and return value types: we have to deal with delegates for callbacks, non-serializable types, exceptions and so on. Most of those details have been faced and implemented, but maybe something is missing or not yet perfect.
  • If we record a test and later change its code, introducing or removing calls or asserts, chances are that our recording will no longer be correct: our parrot will not be able to repeat things it did not hear. The idea is that every modification to our test that changes the interaction sequence with our dependencies requires repeating the recording for the modified test(s). Here the "contract" is quite strong: even changing the call sequence is a breaking change for the whole imitation and requires recording the test again. Let's dig deeper into this point.

Call Sequence Equivalence

As already mentioned, Parrots does its job by recording every single conversation happening between our SUT and its dependencies under imitation. When a recording is available and we launch a test, Parrots starts comparing the "call sequence": at every single step (call) it retrieves what was recorded about that step, in terms of which dependency was the target, which method was called, which parameters were supplied and which return values the method produced. Every single unit test method gets its own call sequence, and Parrots is happy only if the whole sequence is replayed exactly as it was recorded the first time. If, for example, we slightly change our unit test after it has been recorded, and our changes modify the call sequence, Parrots will throw an exception and our test will fail. I decided to be quite strict about this because, IMO, such a unit test should be treated by Parrots as a contract, and changing the contract invalidates the recordings. But what does Parrots do to understand whether a sequence has changed? It is quite simple. If we launch our unit test after it has already been recorded, this is what happens behind the scenes (this is a conceptual description, the actual code sequence might be slightly different; a small sketch of the matching logic follows the list below):

  1. When Parrots intercepts the first call to any intercepted dependency, it retrieves the corresponding first recorded call in the available sequence.
  2. For that call, it checks whether the method names of the current call and of the recorded call are equal.
  3. If they are, it does the same for every input parameter.
  4. If they all pass the test, it assigns the recorded answer and any output parameters to the current call context, avoiding any call to the true dependency, and then lets the test go on.
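
Expressed as code, the matching performed for a single call could look roughly like the following; it is a conceptual sketch reusing the RecordedCall container from the earlier interceptor sketches, not the library's actual implementation:

C#
// Conceptual sketch of how one recorded step could be matched during replay.
private static void MatchAgainstRecording(IInvocation invocation, RecordedCall recorded)
{
    // Step 2: method names must be identical.
    if (recorded.MethodName != invocation.Method.Name)
        throw new InvalidOperationException("Call sequence changed: method name mismatch.");

    // Step 3: every input parameter must be "equal" to the recorded one.
    for (int i = 0; i < invocation.Arguments.Length; i++)
    {
        if (!Equals(invocation.Arguments[i], recorded.Arguments[i]))
            throw new InvalidOperationException("Call sequence changed: parameter mismatch.");
    }

    // Step 4: hand back the recorded answer instead of calling the true dependency.
    invocation.ReturnValue = recorded.ReturnValue;
}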

These steps are repeated for every other call in the current sequence, breaking the loop as soon as a difference is found at any step. What we have not discussed yet is how Parrots understands when the method name and the parameters "are equal". For the method name, it is an easy job: we just compare the method name strings for equality, and we are done. But when we have to deal with input parameters, things get harder, because the "equality semantics" of each parameter might be a problem. Let's see an example:

C#
[Test]
public void GraphTestWithEqualityFunctor() { 
    var imitation = Trainer.Aviary 
        .AddParrotFor<IAmSomeDependency>()
        .Finally.RecordOrReplay(); 
        
    var sut = new ToBeTested(imitation.Resolve(), imitation.Resolve()); 
    
    var head = new GraphHead("head"); 
    
    var output = sut.Echo(head); 
    
    Assert.AreEqual("head", output.Text); 
    
    imitation.ReleaseParrots();
}

Inside the Echo() method, our SUT calls another method on the IAmSomeDependency interface, which happens to have this trivial signature:

C#
GraphHead Echo(GraphHead head);

So, when checking the call sequence, Parrots will have no problem checking the method name for equality, but then it will have to do the same on the head parameter, which is of type GraphHead (a sample class you will find in the unit tests source code). Now, what if the GraphHead class does NOT override the Equals() method? By default, we get "reference equality" semantics: source and target references will be compared, and during the replay phase they will be different, making Parrots fail. This is not the right place to discuss equality semantics; let's just suppose we are in a scenario where we cannot change Equals() on our type, for whatever reason. How can we fix Parrots?
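
To make the problem concrete, here is a tiny illustration of what default reference equality means for the sample type (assuming GraphHead simply stores the Text passed to its constructor and does not override Equals()):

C#
var recorded = new GraphHead("head");  // the instance captured while recording
var current = new GraphHead("head");   // the instance created while replaying

// Same public content, but Object.Equals falls back to reference equality...
Assert.IsFalse(recorded.Equals(current));
// ...even though the data we actually care about is identical.
Assert.AreEqual(recorded.Text, current.Text);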

Behaviors

Parrots has a feature called behaviors, which allows us to drive how Parrots must behave when facing our dependencies, and in particular when dealing with method calls towards them. Let's see how we can change our unit test:

C#
[Test]
public void GraphTestWithEqualityFunctor() { 
    var imitation = Trainer.Aviary 
        .AddParrotFor<IAmSomeDependency>(
            d => d.Echo(Understands.Equality((left, right) => left.Text == right.Text)))
        .Finally.RecordOrReplay(); 
        
    var sut = new ToBeTested(imitation.Resolve(), imitation.Resolve()); 
        
    var head = new GraphHead("head"); 
        
    var output = sut.Echo(head); 
        
    Assert.AreEqual("head", output.Text); 
        
    imitation.ReleaseParrots(); 
}

The only difference is at this line:

C#
d => d.Echo(Understands.Equality((left, right) => left.Text == right.Text))

AddParrotFor() accepts a params array of lambda expressions of type Expression<Action<T>>, where T is closed over the type passed to the AddParrotFor() method. This way, we can pass a list of expressions that Parrots will inspect to understand how we want it to behave for each specified method. These expressions are never called; they are just analyzed in order to find hints about what to do at different moments of the call sequence processing.
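
To give an idea of how such an expression can be inspected, here is a minimal sketch (not the library's code) that walks the body of an Expression<Action<T>> and picks out the arguments built through the Understands class:

C#
using System;
using System.Linq.Expressions;

public static class BehaviorInspector
{
    public static void Inspect<T>(Expression<Action<T>> behavior)
    {
        // The body of "d => d.Echo(Understands.Equality(...))" is a method call on the dependency.
        var dependencyCall = behavior.Body as MethodCallExpression;
        if (dependencyCall == null)
            return;

        Console.WriteLine("Behavior declared for method: " + dependencyCall.Method.Name);

        foreach (var argument in dependencyCall.Arguments)
        {
            // Each argument may itself be a call to one of the Understands helpers.
            var hint = argument as MethodCallExpression;
            if (hint != null && hint.Method.DeclaringType != null
                             && hint.Method.DeclaringType.Name == "Understands")
            {
                Console.WriteLine("  Hint found: Understands." + hint.Method.Name);
            }
        }
    }
}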

Back to our equality problem, we can see that the first (and only) argument to the Echo() method is a special expression built through a static service class called Understands, which exposes several methods whose only purpose is to be "called" inside behavior expressions. The Understands.Equality call allows us to specify that Parrots, when comparing two instances of the GraphHead type, has to use the anonymous method we supply as the Equality() argument. This way, we are able to supply the necessary equality semantics for each test, making it go green according to what "equality" means in the context of that test. The Equality() methods also support supplying any type which implements IEqualityComparer and has a default constructor.

Behaviors are an interesting feature we can use to tweak other details. They are still under development, but through them you are already able to do things like declaring that certain arguments should simply be skipped during the call sequence equality checks. If your dependencies must be supplied with arguments like DateTime.Now, which will never be the same at every test execution, or with references to "service instances" which do not expose meaningful identity or state (as an implementation of the IFormatter interface likely would), Understands.Any() gives you the right tool to tell Parrots to just skip them. Understands.Any() and similar methods are already available, while other related methods will be released soon.
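
For example, skipping a timestamp argument could look something like the snippet below. The IClockConsumer dependency and its Describe() method are hypothetical, invented only for illustration, and the exact signature of Understands.Any() may differ from what is shown here:

C#
// Hypothetical dependency, used only to illustrate the idea:
// interface IClockConsumer { string Describe(DateTime now); }
var imitation = Trainer.Aviary
    .AddParrotFor<IClockConsumer>(
        // Ask Parrots not to compare the DateTime argument during replay,
        // since DateTime.Now will never match the recorded value.
        d => d.Describe(Understands.Any<DateTime>()))
    .Finally.RecordOrReplay();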

Public Representation

Equality semantics can be quite a hard topic to manage, and if you start creating parrots for dependencies whose parameters or return values do not handle equality correctly, you may run into unexpected test failures. In such cases, it would be very hard for Parrots to distinguish between real failures caused by changes introduced in the SUT and failures coming from equality issues. So I decided to implement an additional feature, enabled by default, based on the public representation concept: two values of any type used as a parameter may be considered to have the same "public representation" if a deep comparison of their public properties finds the same values. This is a sort of surrogate for equality, which is indeed arguable and might not be satisfying for many reasons, therefore Parrots allows you to disable it on every parrot by calling With.PublicRepresentationEqualityDisabled() on it. If this feature is enabled, Parrots detects when equality checks fail but the public representations match, raising a specific and verbose exception which alerts the programmer to a potential problem related to equality, letting him decide how to manage it. In general, Parrots raises quite verbose and descriptive exceptions, trying to help detect where the change in the SUT occurred; this is accomplished by showing which dependency and call failed, which parameters were supplied and which position in the call sequence it occupies.
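
As a rough idea of what such a comparison involves, the sketch below matches two objects by comparing their readable public properties; it is a simplified, non-recursive illustration, while the comparison Parrots performs is described as deep:

C#
using System.Linq;

public static class PublicRepresentation
{
    // Two values "look the same" if all their readable public properties hold equal values.
    public static bool Matches(object left, object right)
    {
        if (ReferenceEquals(left, right)) return true;
        if (left == null || right == null) return false;
        if (left.GetType() != right.GetType()) return false;

        return left.GetType()
            .GetProperties()
            .Where(p => p.CanRead && p.GetIndexParameters().Length == 0)
            .All(p => Equals(p.GetValue(left, null), p.GetValue(right, null)));
    }
}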

"True" Mocks

So far, our discussion has tried to illustrate the "record and replay" scenario, which is the main one, but there is more: if you change the RecordOrReplay() method to GenerateOrReplay(), Parrots will not just record the traffic, it will also generate true Rhino Mocks code (again, this is not a hard-coded dependency, and more mocking libraries may be supported in the future), following the Arrange/Act/Assert pattern. OK, this is not completely true: "AAA" should mean you can clearly read the Arrange and the Assert parts in the body of the unit test along with the Act part, and that's not the case here, but the Arrange and Assert parts are nicely isolated and should be easy to find, so let's keep the definition. Parrots puts the generated code in a well-known place (soon it will be configurable), and you just have to include the generated code in your unit tests project, modifying it at will if you think it can be improved. This way, you get true mocks with almost no effort. If you leave the Parrots code in place even after including the generated code, Parrots will transparently load the generated mocking code, without you having to modify anything to "turn on" the Rhino repositories!
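
Just to give an idea of the style of code this generation aims at, a hand-written Rhino Mocks test in AAA style for the first sample could look roughly like the following; the Repeat() method on the dependency is hypothetical, and the code Parrots actually generates may differ:

C#
// Requires: using Rhino.Mocks;
[Test]
public void SimpleStringTestWithRhinoStubs()
{
    // Arrange: stub the dependencies with the previously recorded answers.
    var someDependency = MockRepository.GenerateStub<IAmSomeDependency>();
    var someOtherDependency = MockRepository.GenerateStub<IAmSomeOtherDependency>();
    someDependency.Stub(d => d.Repeat("wasp")).Return("waspwasp"); // hypothetical member

    // Act.
    var sut = new ToBeTested(someDependency, someOtherDependency);
    var output = sut.Process("repeat", "wasp");

    // Assert.
    Assert.AreEqual("waspwasp", output);
}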

At this point, I must be honest: the features for generating "true" mocks are far from complete, and it is quite possible that they will never be as complete as I would like, for many reasons (the most important one: it is a difficult thing to do!). Probably the best Parrots will ever be able to do is generate complete mocks for "simple" calls and types, and useful but incomplete placeholders for more complex mocks, which will need manual intervention to become fully functional. I will do my best to generate smart placeholders, but so far I cannot guarantee that this feature will be more than a nice helper.

Summary

Parrots is at quite an early stage, and its code still needs some cleanup and refactoring, but there are already several things the library can do, and enough features to be worth evaluating. You will find out more about it by browsing the code and analyzing the unit tests contained in the solution, and the project will evolve over time. You can check its current status here.

License

This article, along with any associated source code and files, is licensed under The BSD License