|
Let's say I have a class called Person (encapsulates the business layer functionality) and PersonManager (encapsulates the data access layer). Here are code snippets, simplified for clarity:
public class Person
{
    public void Save()
    {
        PersonManager manager = new PersonManager();
        manager.Save(this);
    }
}
public class PersonManager
{
    public void Save(Person p)
    {
        if (p.IsNew)
            Insert(p);   // private method
        else if (p.IsOld)
            Update(p);   // private method
    }
}
During insertion problems can occur, so the Insert method should deal with them, and if it cannot, it should throw to the caller. Ideally, Insert should not reveal its inner workings to the caller and should not break encapsulation, but it should still let the caller know that something exceptional happened. The PersonManager class will also try to deal with the exception and, if it cannot, throw it to the caller (Person) without breaking encapsulation. The Person class will follow the same rule. Something like the sketch below is what I have in mind.
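To make the intent concrete, here is roughly what I mean inside PersonManager's private Insert method (PersonSaveException is just a name I made up for this sketch, not part of the real code):
public class PersonSaveException : Exception
{
    public PersonSaveException(string message, Exception inner)
        : base(message, inner) { }
}

// inside PersonManager:
private void Insert(Person p)
{
    try
    {
        // database insert goes here
    }
    catch (Exception ex)
    {
        // Hide the SQL details behind a layer-specific exception,
        // but keep the original as the inner exception for diagnostics.
        throw new PersonSaveException("The person could not be saved.", ex);
    }
}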
My questions are:
1. Why do we append InnerExceptions when they break encapsulation? For example, if a SqlException occurs and I attach it as the inner exception, the caller now knows I am dealing with a SQL database and encapsulation is broken! If I do not attach it, then the caller will not have sufficient information.
2. The GUI will have a try-catch around the call to person.Save(), Person will have a try-catch around the call to PersonManager.Save(Person p), and Insert() and Update() will also need try-catch blocks. Am I right? Is this nesting too deep, or is this how things should ideally be done?
3. Every class, and the methods within it, should try to deal with the exception and try an alternative or retry. Is there a general rule of thumb for how many times it should try before giving up? Does it depend on how critical the system is?
Am I the only one who is lost? I have read many sources trying to find these answers. Please provide any useful links or book names.
|
|
|
|
|
Hi,
Here is my 2c on this interesting topic; it is pragmatic rather than academic:
- each level needs to catch exceptions, and throw its own exceptions using semantics the caller will understand;
- however that would throw away all potentially useful details, as you indicated; hence the original exceptions are added as inner exceptions.
- in the end, the top level not only wants to know something failed, it also wants to be able to indicate in what direction a solution might be found. Hence an inner exception like "disk full" or "server down" could be very helpful, even when it breaks encapsulation.
When something goes wrong, I prefer encapsulation gets broken, rather than breaking my head over what may possibly be wrong.
|
|
|
|
|
You know what, Luc, you make a good point, and I don't want my head broken either!
Thanks for the comments--they are very helpful. I am, however, surprised that only you posted a response to such an interesting and open-ended question.
|
|
|
|
|
Before I go into a discussion on proper exception management, I need to bring up a fundamental architectural issue. There are a variety of architectural styles, and usually each team or architect will choose the style they like best. However, an extensive amount of research has gone into how "effective" an architecture is in enabling developers, testers, etc., to deliver a product. So, I'm going to throw out one of the rules of effective design here: isolation.
Your current design is a very dependent design. Your Person class is tightly coupled in two ways. First, it's tightly coupled to a specific concrete implementation of PersonManager. Second, it's tightly coupled to a specific persistence mechanism. Both couplings are bad, no other way to put it really. If we look at some of the most effective software development methodologies today, DDD and TDD rise to the top. Both advocate the isolation of classes from each other, and both advocate the use of dependency injection to improve decoupling (to help achieve 'loose coupling'). Your Person entity should be simple and should not be aware of the persistence object (PersonManager). This means Person can't save itself, so the save operation goes into a Person 'service'. You would end up with something like this:
class PersonService
{
    public Person Load(int id)
    {
        PersonManager mgr = new PersonManager();
        Person person = mgr.GetByID(id);
        return person;
    }

    public void Save(Person person)
    {
        PersonManager mgr = new PersonManager();
        if (person.ID <= 0)
        {
            mgr.Insert(person);
        }
        else
        {
            mgr.Update(person);
        }
    }
}

class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

class PersonManager
{
    public Person GetByID(int id)
    {
        // ...
    }

    public void Update(Person person)
    {
        // ...
    }

    public Person Insert(Person person)
    {
        // ...
    }

    public void Delete(Person person)
    {
        // ...
    }
}
With the above, your system is appropriately decoupled. Your exception management, along with the bulk of your business logic, and particularly the interaction with persistence logic, should reside primarily in your service layer. This greatly simplifies your data access code and your entity... neither one needs to handle exceptions... they just throw them. If, for some reason, the service layer cannot resolve or recover from an exception, then the exception should bubble up to your presentation layer, where you could handle it explicitly or, in most cases, just let the default handler deal with it. To keep your exception management simple in the presentation layer, you could follow a general rule of always wrapping all exceptions that bubble up from the service layer in a ServiceException. Ultimately, however, you will only have two areas where exceptions will be handled... your service layer and your presentation layer.
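To make that rule concrete, the presentation layer might end up with something like this sketch (ServiceException is the wrapper type mentioned above; the button handler and the _personService/_currentPerson fields are assumed purely for illustration):
private void SaveButton_Click(object sender, EventArgs e)
{
    try
    {
        _personService.Save(_currentPerson);
    }
    catch (ServiceException ex)
    {
        // Show something friendly; the full inner exception chain is
        // still hanging off ex for logging and diagnostics.
        MessageBox.Show("The person could not be saved: " + ex.Message);
    }
    // Anything else falls through to the application's default handler.
}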
As for exception resolution... if, and I stress IF, you can handle the exception automatically, then the resolution strategy should also reside in your service layer. Generally speaking, exceptions indicate some broken state that is preventing a process from completing successfully... which usually requires user intervention. If you have some alternate path for accomplishing something, that alternate path should be attempted in the service, and nowhere else. But I wouldn't stick a save operation in a loop and try it 5 times before finally bubbling an exception up... that's just wasting resources, because if it failed the first time, it failed. If there is a chance it could succeed during a series of successive tries, then it sounds like you have concurrency issues that should be solved at a lower level to prevent such exceptions from ever occurring in the first place.
|
|
|
|
|
Building on Jon's answer: we have similar layers and, while not strictly necessary, here is a glimpse of what we did.
Define classes for DataAccessLayerException, BusinessLayerException, and UILayerException.
Then we used the MS Patterns and Practices Exception Handling Application Block to help us handle unhandled exceptions (and by handle, I mean log, determine whether or not to rethrow, etc.) at the boundaries.
If the DAL catches an exception, it is wrapped in a DataAccessLayerException and rethrown.
The Business Layer can then catch the DataAccessLayerException and deal with it. If an Exception is thrown in the BusinessLayer then it is wrapped in a BusinessLayerException and rethrown.
The handling of different exception types can be defined within the application block configuration. In our case, the application block only provides handling for unexpected errors. We still have try...catch blocks to internally handle errors that we can code for and recover from. The application block handles everything else and logs it to the listener(s) of our choice (defined by configuration).
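For reference, the boundary code with the Exception Handling Application Block ends up looking roughly like this (the policy name and the data access helper are made-up examples; the policy itself lives in configuration):
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

public Person GetByID(int id)
{
    try
    {
        return LoadPersonFromDatabase(id);   // hypothetical data access helper
    }
    catch (Exception ex)
    {
        // The block consults the named policy to log/wrap/replace the exception
        // and tells us whether to rethrow it.
        bool rethrow = ExceptionPolicy.HandleException(ex, "Data Access Policy");
        if (rethrow)
            throw;
        return null;
    }
}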
|
|
|
|
|
Lefty has some good suggestions, so to clarify with some code examples, here is an updated version of the snippet I posted before. I have made a couple of other important changes that will further improve your decoupling and make your product more maintainable in the long run:
interface IRepository<T>
{
    T GetByID(int id);
    T Insert(T item);
    void Update(T item);
    void Delete(T item);
}

class DataAccessException : ApplicationException
{
}

class BusinessException : ApplicationException
{
}

class PresentationException : ApplicationException
{
}

class PersonService
{
    public PersonService(IRepository<Person> repository)
    {
        _repository = repository;
    }

    private IRepository<Person> _repository;
    public Person Load(int id)
    {
        if (id <= 0) throw new ArgumentException("The Person ID must be greater than zero.");
        try
        {
            Person person = _repository.GetByID(id);
            return person;
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while accessing data.", ex);
        }
        catch (ArgumentNullException)
        {
            throw;   // rethrow without disturbing the stack trace
        }
        catch (ArgumentException)
        {
            throw;
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }
    public void Save(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            if (person.ID <= 0)
            {
                _repository.Insert(person);
            }
            else
            {
                _repository.Update(person);
            }
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while updating data.", ex);
        }
        catch (ArgumentNullException)
        {
            throw;
        }
        catch (ArgumentException)
        {
            throw;
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }
}
class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}
class PersonRepository : IRepository<Person>
{
    public Person GetByID(int id)
    {
        if (id <= 0) throw new ArgumentException("The ID must be greater than zero.");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred retrieving the requested Person: {0}", id), ex
            );
        }
    }
    public void Update(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred updating an existing Person: {0}", person.ID), ex
            );
        }
    }
    public Person Insert(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                "An error occurred inserting a new Person.", ex
            );
        }
    }
    public void Delete(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred deleting an existing Person: {0}", person.ID), ex
            );
        }
    }
}
|
|
|
|
|
I second John's advice; the Active Record[^] pattern never sat well with me.
As an added caveat, be careful that you do not end up with an Anemic Domain Model[^]. I don't usually pass on Fowler's Nibblets of Wisdom™, but he hits the nail on the head with this one.
In short, developers will often just define a bunch of classes consisting of nothing but properties so that they can feel like they're using object-oriented techniques. Don't forget that a Person should also have behaviour, as sketched below. (Personally, I've known plenty of misbehaving persons! : )
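A contrived example of what that might look like (the voting-age rule is made up purely for illustration):
class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
    public DateTime Birthdate { get; set; }

    public int GetAge(DateTime asOf)
    {
        int age = asOf.Year - Birthdate.Year;
        if (asOf < Birthdate.AddYears(age)) age--;   // birthday hasn't happened yet this year
        return age;
    }

    public bool CanVote(DateTime asOf)
    {
        return GetAge(asOf) >= 18;   // a domain rule living on the domain object
    }
}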
"we must lose precision to make significant statements about complex systems."
-deKorvin on uncertainty
|
|
|
|
|
I don't think that anyone addressed your first question, so I'll give it a shot.
In OO, encapsulation (or information hiding) concerns the way that we hide the implementation of our class behind a stable design. Thus, if I have the following class
public class Person
{
    public DateTime Birthdate { get { /* ... something secret ... */ } }
    public int Age { get { /* ... something secret ... */ } }
}
then it doesn't matter to the code that uses the Person class whether the value for Age gets computed from the Birthdate every time, caches the initial calculation in a private int, or uses some fancy web service to calculate it. We are hiding the internals of the class implementation which, according to OO, nobody else needs to care about.
Now, in terms of the System.Exception class, it has the InnerException property because, when it is set, the exception that you catch was caused by another exception, which gets stored in that InnerException property. You'll note that the only type information we have about the InnerException is that it is of type System.Exception, so there is no leakage at all. This does not break encapsulation because, in the context of the creation of the exception that you've caught, another exception was the cause of it, and knowing that exception isn't bad. You don't know how the InnerException was set, how the value gets returned to you, or what's going on inside the containing exception instance.
Now, if catching code has something like the following somewhere
try
{
    // ... exception thrown here ...
}
catch (Exception e)
{
    if (null != e.InnerException)
    {
        if (e.InnerException is SqlException)
        {
            // ... do something SqlException specific here ...
        }
    }
}
then I would argue that you did not handle the SqlException deeply enough in your code, or that it should have been caught here in a separate catch (SqlException se) block rather than dug out of the InnerException.
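For example, the data layer could catch it where the SQL work actually happens and translate it (DataAccessException here stands in for the kind of layer-specific wrapper discussed earlier in this thread):
using System.Data.SqlClient;

public void Insert(Person p)
{
    try
    {
        // ... ADO.NET calls here ...
    }
    catch (SqlException se)
    {
        // Deal with what you can at this level (retry a deadlock, etc.);
        // translate the rest instead of letting callers sniff InnerException types.
        throw new DataAccessException("An error occurred inserting the Person.", se);
    }
}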
"we must lose precision to make significant statements about complex systems."
-deKorvin on uncertainty
|
|
|
|
|
I'd like to know about the process and methods of Scrum. Can anyone help me by providing some documents, resources, or introductions? Thanks.
|
|
|
|
|
Well, a Google search for "scrum software development" only returns 954,000 hits, so I guess it's hard to find any info.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
That's really bad news. Can you provide any information from your own point of view?
|
|
|
|
|
Yes, learn to use your initiative.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
I have a question about architectures that support unit testing. Say I have these objects and dependencies:
DALUser >> MappingUser >> Common.LocalCulture
DALUser >> IDataActionUser
The DALUser object has a dependency on IDataActionUser, whose instance is populated using dependency injection. It also has an internal dependency on a MappingUser object. The logical flow looks like this:
The Business Layer object calls the DALUser object and requests a User object
DALUser calls the IDataActionUser implementation to hit the database and return a DataTable
DALUser calls MappingUser, passing the DataTable; the MappingUser class transforms the DataTable into a User object and returns it
MappingUser depends on a static Common.LocalCulture class that does some culture ID conversions
DALUser gets the User object from MappingUser and returns it to the Business Layer, as requested
With these dependencies, I can easily mock the IDataActionUser and return a hardcoded DataTable from the mock for testing purposes.
My question is... what is the appropriate scope of a unit test? Should I create an interface for MappingUser so that I can mock it and ensure that I am testing ONLY DALUser? What prevents you from moving towards a design where every class in your system has an interface? How do you determine where the line should be drawn between dependencies that should be mocked and dependencies that you include in a single unit test?
If I do create an interface and mock MappingUser, then my "test" of DALUser isn't really testing much at all, since DALUser only acts as a public interface to the business layer and a controller of sorts that calls these dependent objects. Without the functionality of the dependent objects included in the test, the DALUser class doesn't really do much... so is it still worth unit testing?
I'm obviously new to the unit testing game, and trying to absorb the finer points of the philosophy.
Thanks in advance for opinions.
|
|
|
|
|
Maybe it's me, or maybe it's because it's Christmas, but my brain just won't process all that information. Perhaps that's why you have no replies. It's a well-thought-out, well-constructed post, but perhaps the problem requires too much immersion for a text-message style of discussion.
So I will at least reply to the question in your subject line. "Is there such a thing as Too Many Interfaces?"
Yes.[^] and KISS Principle[^]
Here's the thing. Flexible software is good but requires a degree of complexity. So there is a constant struggle to find the balance between the flexibility you need and the simplicity you desire.
Leftyfarrell wrote: the DALUser class doesn't really do much... so is it still worth unit testing?
Difficult for us to know that. The benefits of unit tests, and of automating them, are specific to the combined project/environment. That said, in general, tested stuff is good. If nothing else, it can raise your level of confidence in the code, allowing your mind to forget it and focus on another task. I could talk about unit tests for a while, but most people have already stopped reading by now.
led mike
|
|
|
|
|
Thanks for the reply. Heh, I see your point.
Ok, maybe I can rephrase the question.
Does anyone have a general rule of thumb they use in terms of class dependency depth that is OK for unit tests? How many levels into a dependency chain is OK vs. too far in a unit test?
If I have 10 objects, chained together with dependencies... and I am writing a unit test for the parent object... can I go in 2 levels before I should create a mock and terminate the chain for testing purposes? 3 levels? 4?
Thoughts?
|
|
|
|
|
|
It's Christmas, and I'm well into the Christmas spirits at the moment, but my 4c is that a well-abstracted system is generally not much more code (in terms of complexity). Add too much abstraction and you suddenly find yourself putting in a lot more effort...
Keep it fairly simple until you need the abstraction. Extracting an interface or pulling members up to a base class are pretty trivial refactoring actions, as the sketch below shows. Get ReSharper if you haven't already.
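For example, an Extract Interface refactoring is just this kind of before/after (class names are borrowed from earlier in the thread purely for illustration):
// Before: callers depend on the concrete class.
class PersonManager
{
    public Person GetByID(int id) { /* data access */ return null; }
}

// After extracting an interface: callers and tests can depend on the abstraction.
interface IPersonManager
{
    Person GetByID(int id);
}

class PersonManager : IPersonManager
{
    public Person GetByID(int id) { /* data access */ return null; }
}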
|
|
|
|
|
I am curious: why is the business layer calling DALUser, and DALUser calling an interface which hits the database and then passes the returned DataTable to the MappingUser?
I would design like so:
The Business Layer (User object) calls the DAL interface (IDataActionUser) and passes itself in. IDataActionUser will have different implementations (Oracle, SQL, flat file, etc.) and they will load the User object passed in. There is no need to worry about passing DataTables back and forth. The IDataActionUser implementors can even call Common.LocalCulture for help. Now the design is simpler, like so:
Business Layer >> IDataActionUser >> Implementor >> Database, flat file, XML, etc. >> Common.LocalCulture, and the User object is good to go.
I would even use the Bridge pattern to abstract the implementor from the Business Layer. A rough sketch follows.
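A very rough sketch of that shape (everything except IDataActionUser, User, and Common.LocalCulture is an invented name for illustration):
interface IDataActionUser
{
    void Load(User user);   // implementors fill in the User object passed to them
}

class SqlDataActionUser : IDataActionUser
{
    public void Load(User user)
    {
        // hit SQL Server, map the row onto the user, and use
        // Common.LocalCulture for any culture ID conversions
    }
}

class OracleDataActionUser : IDataActionUser
{
    public void Load(User user)
    {
        // same contract, different store
    }
}

class User
{
    public int ID { get; set; }

    public void Load(IDataActionUser dataAction)
    {
        dataAction.Load(this);   // the business object passes itself in
    }
}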
|
|
|
|
|
The variety of objects and dependencies doesn't really make a lot of sense to me. I may just be confused because of the naming...but if I am correct, all of these objects are part of a DAL, and the business layer is not involved at all? If that is the case, it seems like you could greatly simplify:
class UserService
{
    IDataMapper<User> _mapper;

    public UserService(IDataMapper<User> mapper)
    {
        _mapper = mapper;
    }

    public User Load(int id)
    {
        return _mapper.GetByID(id);
    }

    public void Save(User user)
    {
        if (user.ID <= 0)
            _mapper.Insert(user);
        else
            _mapper.Update(user);
    }
}

class User
{
    public int ID { get; set; }
}

interface IDataMapper<T>
{
    T GetByID(int id);
    T Insert(T item);
    void Update(T item);
    void Delete(T item);
}

class UserDAL : IDataMapper<User>
{
    // concrete data access implementation
}
Now, when it comes to unit testing... testing with the above model is a cinch. You can easily mock away your DAL from the service, the entity is not coupled to anything, and life is bliss:
class MockUserDAL : IDataMapper<User>
{
    // returns hardcoded User objects instead of hitting a database
}

[TestMethod]
public void Test_UserService_Load()
{
    MockUserDAL userDal = new MockUserDAL();
    UserService userSvc = new UserService(userDal);

    User user = userSvc.Load(1);

    Assert.IsNotNull(user);
    Assert.AreEqual(1, user.ID);
}
|
|
|
|
|
Thanks for your thoughts everyone. I see some common threads in the responses, so... why is our DAL so complicated?
The reason our DAL was broken up into multiple projects (Mapper, DataAction, and Adapter) is that for our first DAL we took a stab at using Entity Framework v1 in a disconnected multi-tier application.
When using the Entity Framework, the DataAction in our case would return a ModelUser object, as defined by the entity data model. We did not like the idea of passing this object (tied to the entity framework infrastructure) all the way out to our client tiers.
To remedy this, we put a facade on the outside of it (Adapter) and created a Mapper that would translate/convert the ModelUser object into a POCO (Plain Old CLR Object) EntityUser. This translation is not overly straightforward, so it seemed appropriate to split these up.
So, the EntityFramework DAL would work like this:
Business Layer calls the Adapter
Adapter calls the DataAction which returns a ModelUser
Adapter calls Mapper which takes the ModelUser and returns an EntityUser
Adapter returns EntityUser to the Business Layer
So for us, the Adapter, DataAction, and Mapper were considered separate parts of the DAL, but still part of a single DAL implementation. Both the DataAction and Mapper methods might have different signatures for an EF DAL vs. an ADO.NET DAL implementation.
When it came to the ADO.NET DAL, it seemed to make sense to keep the same structure to avoid confusing things.
The Business Layer references an IAdapter interface, so the whole DAL can be swapped out. As well, the DataAction implements an interface so that it could be mocked and avoid the database hit during unit test runs.
Again, this response is much longer than I'd first hoped... thanks for sticking with me to the end.
|
|
|
|
|
Aaah, the wonderful joys of Entity Framework. I was so excited when I first started playing with EF...and it turned out to be such a disaster in a multi-tier/multi-layer story. :'(
Before we really really hit "the end", one thing I wanted to mention: your DAL should be tested too. It's code, just like all the rest, and just because it hits the database doesn't mean it doesn't need testing. It sounds like you have things implementing interfaces in all the appropriate areas so that you can mock away your DAL when testing higher-level stuff. But you should also set up an automated testing database so that you can unit test your DAL as well, along the lines of the sketch below. Based on what you explained above, there is a fair amount of behavior from your Adapter on down that should be tested.
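Here is the rough shape of one such DAL test (the constructor signatures, method names, connection string, and seeded row are all assumptions; your real types will differ):
[TestMethod]
public void Adapter_GetUser_Returns_Known_Test_Row()
{
    // connection string and seeded data are maintained by the test setup scripts
    var dataAction = new DataActionUser("Server=testdb;Database=AppUnitTest;Integrated Security=True");
    var adapter = new Adapter(dataAction);

    EntityUser user = adapter.GetUser(1);

    Assert.IsNotNull(user);
    Assert.AreEqual(1, user.ID);
}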
|
|
|
|
|
Yeah, we too were happy to see the EF... haha, and happy again to see it go when we shelved it (for now). We hope to pick it up again and provide a DAL implementation for it when v2 comes out.
You are right about testing the DAL. The Adapter is testable by mocking the DataAction.
The DataAction is testable by providing a connection string to a unit test database. That is where some of the complexity of my original post came from. A DataAction method test hits the db, runs a sproc, and returns a DataSet, DataTable, or DataRow.
The Adapter is like the commander of the DAL, orchestrating the call to DataAction and Mapper.
My original question was hinting at the dependency tree depth. Our current setup has:
Adapter passes DataTable to Mapper and the mapper builds up a User entity and returns it.
Now, some of the hidden complexity within the Mapper is created by the need to support multiple languages (globalization), etc. So internally:
Mapper depends on a static Globalization class, which has a singleton dictionary (collection of supported culture info) that is used in part of the mapping. If the singleton is not populated, then it too needs to call a separate GlobalizationAdapter (to hit the db and get the collection).
It was here that I was wondering about the depth of the unit test. Because the Globalization class is static, I can only mock the GlobalizationAdapter that it uses to go to the db (and avoid a db hit during unit testing).
So when I'm unit testing UserMapping, I'm actually testing:
UserMapping >> Globalization(static) >> MockGlobalizationAdapter
Without mocking the GlobalizationAdapter, the test obviously fails. I guess my question really deals with the best way to handle unit testing classes that depend on static classes or singletons. Or should this dependency chain be re-architected in some way?
|
|
|
|
|
Unit testing and statics are always a rich topic of discussion. Ultimately, what it boils down to is whether you think testing the static Globalization type is acceptable when you're actually unit testing something else. If you are unit testing UserMapping and mocking GlobalizationAdapter, you're also interaction testing Globalization. At some point you need to interaction test, to make sure that when A uses B, the interaction of the two behaves like you would expect. Sometimes you can achieve this with a mock... sometimes you need to test the interaction of two real objects. Testing has a variety of forms: unit testing, interaction testing, acceptance testing, build verification testing, etc. Unit tests will only take you so far, and you can plug the holes and double up by performing other kinds of testing.
In the case of your static Globalization class, it sounds like it's a pretty simple type that acts as a lazy-loaded lookup. If it's basically just a facade around a dictionary and some loading logic, I would not bother mocking it away, and would just include it in your UserMapping unit tests. If Globalization is a richer class, and provides a variety of globalization services, it might be better to mock it away. You would want to unit test Globalization in isolation as well, to make sure you cover code that wouldn't be covered by interaction testing.
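If it really is just that simple facade, it might look roughly like this (IGlobalizationAdapter and LoadSupportedCultures are invented names standing in for whatever your GlobalizationAdapter actually exposes):
using System.Collections.Generic;

static class Globalization
{
    private static Dictionary<string, int> _cultures;
    private static IGlobalizationAdapter _adapter = new GlobalizationAdapter();

    // tests swap in a mock adapter here to avoid the db hit
    public static IGlobalizationAdapter Adapter
    {
        get { return _adapter; }
        set { _adapter = value; }
    }

    public static int GetCultureId(string cultureName)
    {
        if (_cultures == null)
        {
            _cultures = _adapter.LoadSupportedCultures();   // first use loads the lookup
        }
        return _cultures[cultureName];
    }
}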
If you do indeed need to mock away static types, there is one product that will let you do it: TypeMock Isolator. TypeMock's Isolator lets you mock absolutely anything, statics included. It's a pretty unique mocking framework that really helps you get the job done when nothing else will. A couple of caveats: 1) it's not free, and 2) it requires that all testing processes be spawned from its isolator root process, which enables all the advanced call interception and whatnot.
|
|
|
|
|
Hi everyone.
I want to download the HTML/PHP files from a web page.
Do you know of any method for doing this?
Thanks for your help.
|
|
|
|
|
There seem to be quite a few docs out there that teach you how to do TDD from scratch, but I haven't seen much theoretical work done on "mocking out" and testing existing components from legacy apps. For example, if I have a DAL that's already plugged into my four-year-old .NET 1.1 app, is there any way I can apply post-hoc unit tests to the DAL without changing its design?
What I'm really looking for is a catalog of "mocking patterns" that give me solutions to various problems so that I can isolate my legacy components and test them without modifying the design--so my question is, has anyone managed to do this yet?
|
|
|
|
|