|
Let's say I have this hypothetical database (the one I am dealing with is much bigger, as you can probably imagine):
Customers
---------
ID
Name
Address
ShippingAddress
OrderID
Orders
------
OrderID
Date
LineItems
---------
ItemID, OrderID
Quantity
Items
-----
ItemID
Description
I have many applications which need the customer's ID, Name, and Address, and the items' ID and Description. Is it better to create a library which loads the customer's ID, Name, and Address (and the same for the items) by invoking a stored procedure, or is it a better design to have a stored procedure which gets whatever a specific application needs? For example, one application might need the customer's ID, name, and number of orders, plus the list of items we carry, while another application might need every column of Customers and Items. Because so many applications need the customers' ID, Name, and Address, should I create a static class with static methods which invoke a stored procedure to get that information (fetching any extra columns separately, on an as-needed basis)? Or should I skip the static class library and get everything in one shot by invoking a stored procedure which returns all I need?
If I create a library, the advantage is that I can use it in many applications and fetch the extra columns when needed (at least a few apps will not need the extra columns, so simply calling the static methods will do the job). However, if I need other columns, I have to fetch those separately. If I do not create a library, I lose the advantage of code reuse, but I can load exactly the fields each application needs, all in one shot. Which solution is more advantageous? Or is there a better way?
Thanks,
|
|
|
|
|
Some opinions:
I think you should use stored procedures or views (whichever you prefer) to get the data. You can combine several needs into a single procedure, but I would treat the procedure's output as an interface (not technically the same as a .NET interface, but the same idea). If several needs are almost identical, don't change the logic of the output, and carry no performance penalty, I would combine them.
About the static class: this is basically caching, and it's a good way to ensure performance. However, you must consider when to refresh the data in the static class, and when and how modifications get fetched into it. You might want to use SqlDependency[^] if it's SQL Server you're using.
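As an illustration of that idea, here is a rough sketch of SqlDependency-based cache refreshing. The class, connection string, and query are hypothetical, and it assumes SQL Server with Service Broker enabled (query notifications require an explicit column list and two-part table names):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class CustomerCache
{
    private static DataTable _customers;

    // Call once at application startup; requires Service Broker on the database.
    public static void Start(string connectionString)
    {
        SqlDependency.Start(connectionString);
        Load(connectionString);
    }

    private static void Load(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT ID, Name, Address FROM dbo.Customers", conn))
        {
            // A dependency fires once; re-subscribe by reloading on change.
            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) => Load(connectionString);

            conn.Open();
            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            _customers = table;
        }
    }
}
```

This only sketches the refresh mechanism; a real version would need thread-safe swapping of the cached table and a matching SqlDependency.Stop at shutdown.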
One thing you didn't mention at all was security. I don't know if it's an issue but is it ok that all the data can be used by clients. If not, that consideration should be part of the process when you think what procedures you'll create.
|
|
|
|
|
Security is not an issue. What I am trying to find out is:
Is it a better idea to make a library which other developers can use by simply calling a static method that returns the CustomerID, Name, and Address? The static method will return a DataTable by invoking a stored procedure. If developers need other fields, they can worry about those themselves and retrieve them separately. Should I create a library, or am I just wasting time doing this? The thing is, many applications need these fields, so I thought a library would ease things up.
Of course, the alternative is to let every developer create their own DAL and stored procedures and get whatever they need. The problem is that 10 different applications may need only CustomerID, Name, and Address, and the same code would have to be written in each app's DAL. That is a waste of time. What do you think?
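For what it's worth, a minimal sketch of what such a shared static helper might look like (the stored procedure name and connection-string handling here are hypothetical placeholders, not anything from the actual system):

```csharp
using System.Data;
using System.Data.SqlClient;

public static class CustomerData
{
    // Returns CustomerID, Name, and Address for all customers.
    // "dbo.GetCustomerBasicInfo" is a hypothetical proc name.
    public static DataTable GetBasicInfo(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("dbo.GetCustomerBasicInfo", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            conn.Open();

            var table = new DataTable();
            table.Load(cmd.ExecuteReader());
            return table;
        }
    }
}
```

Each application would then call CustomerData.GetBasicInfo(...) instead of rolling its own DAL code for these three columns, and apps with extra needs would add their own queries alongside it.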
|
|
|
|
|
I wouldn't think that creating the library is a waste of time at all. One obvious benefit is that you get reusable code. Considering application maintenance, you would also have a single point to maintain, and the benefits in performance, caching, services (methods etc.), structural changes in data, and so on would be usable by all. For example, if a few more columns are later needed by many applications, simply adding them once to the procedure and to the class would do the trick without rewriting several DALs.
One thing I'm wondering is why you would take only a few of the columns instead of all of them. If an application needs only a few, you can strip the unnecessary columns away when the application requests the data, or the application can do it itself if necessary. To go a bit further, you could keep some data around the customer (like the number of orders or whatever), and when the app requests it, you either fetch it or return it from the cache.
As said, this is just an opinion, but I'm used to the idea that only a single DAL exists per database. In my experience, it makes things a whole lot easier when you have to change something. Of course this DAL is cumulative, so it grows as new tables or applications are created, but it is still "centralized". Although the development environment in my case may be very different from yours.
|
|
|
|
|
Given the fact that you have a variety of usage scenarios to fill, I can't stress enough the benefits that an O/RM (Object/Relational Mapper) can offer you. Building strict APIs in a database with stored procs is a very, very maintenance-heavy way to go (in most situations, but particularly so when you have a broad set of usage scenarios). I highly recommend that, even if you end up choosing not to use an O/RM, you check out LINQ to SQL. It's simple, very lightweight, very fast (L2S queries have surprised devs and DBAs when they found out that their 'highly optimized' query actually performs worse, sometimes much worse, than the corresponding L2S query), and it offers a degree of flexibility that you could find extremely useful for your needs.
Some of the benefits you can hope to gain by using an O/RM:
1) Eliminate an entire layer and API from your application. No more stored procs that need to be written or maintained, or which have to be adhered to once they are in place due to frustrating regulations regarding the database.
2) Be able to query a conceptual model directly for your entities or collections of entities.
3) Have the option to define "fetching strategies" that allow you to fetch a root entity (or collection of roots), as well as related child entities and entity collections, with a single, efficient query to the database.
4) Be able to adapt quickly to changing business requirements without having to expend a significant amount of time refactoring a (possibly extensive) SP API. Even better, don't worry about not being able to adapt to changing business requirements because of rules regarding database changes.
5) In the case of LINQ to SQL, you can even perform ad hoc queries against your conceptual model and materialize custom or anonymous types that provide just the data you need, retrieved with the most optimal query. Again, this can be gained without the need to write and manage any SPs. This would solve the problem you have, where many applications need only fragments of an entity's information and you don't want to incur the cost of retrieving the whole entity when it's not needed. L2S offers exactly what you need to allow all 10 of your applications to retrieve what they need, when they need it, efficiently, from a conceptual entity model...without ever having to worry about writing or managing stored procs.
If you want more information about the benefits of O/R mapping, feel free to drop me a line. I don't know your DBA situation...if you have DBAs, you're probably in for a fight over the right to use an O/RM (they generate parameterized dynamic SQL). SQL Server 2005 and 2008 offer a lot for applying query plans and performance tuning out of band, so don't let your DBAs tell you off-hand that there is no way to manage or optimize dynamic SQL if you're using SQL Server (tell them to look up plan guides).
|
|
|
|
|
Interesting. I haven't had a look at O/RM tools for several years, because the last time I tried them they just weren't capable of doing the things I needed. That's the reason I normally generate the DAL and its surroundings based on both db objects and business entities. But I understand from your post that the situation has changed a lot over the past few years?
Another thing I noticed is that you suggested Linq to SQL. I've understood that this is a "dying" product and will be replaced by Entity Framework. Correct me if I'm on the wrong track.
I think a bit differently about stored procedures. Basically it doesn't matter what kind of DAL or mapper you have on the client side, but if the number of round trips to the database starts to rise because the data must be fetched in parts due to its logic (for example trees, networks, cumulative calculations, etc.), you may end up in problems. I think this is the point where procedures, views, functions, etc. show their best sides.
I'd be happy to hear your comments on those issues,
Mika
|
|
|
|
|
I suggested L2S for a few reasons. First, it's really simple and easy to get into. Second, if you use SQL Server, it's amazingly efficient...the SQL generated is about as efficient as it gets and often rivals or surpasses the performance of manually written queries. Third, it's fully integrated into Microsoft's development tools, which lets you learn how to use a modern O/RM without having to learn low-level details like XML configuration formats until you really need to. Fourth, L2S has complete support for .NET 3.5 LINQ. Your developers can query anything, from any data source...collections, XML, arrays, and the database (if you use L2S)...with a single, unified query language. That's huge. You may end up needing something more powerful to fulfill your mapping needs...NHibernate or Entity Framework would definitely cover those other scenarios if you needed them. (NHibernate doesn't have a very rich LINQ story right now...its SQL generator doesn't produce nearly as efficient queries as L2S...and EF's query generator is something of an anomaly right now.)
(Yes, it will eventually be superseded by Entity Framework...but that's a ways off. EF v1.0 was kind of a dismal failure (IMO), and v2.0 needs a LOT of work. L2S won't just disappear, either...nothing Microsoft has ever deprecated has actually been removed from the .NET Framework...backwards compatibility and all that. L2S in its current form will be around for a long time, and according to Microsoft it will still be maintained and improved a bit, at least for a while.)
Regarding efficiency when working with object graphs (trees, networks, cumulative calculations, etc.), that is where L2S truly shines. You would expect it to make several round trips to retrieve a root entity and several collections of child entities. Instead, L2S has a highly intelligent expression processor that translates your queries into the most efficient SQL possible. Retrieval of object graphs usually results in a single query to the database that returns a single flat result set containing everything for the entire graph (you can configure what is retrieved up front with DataLoadOptions on your DataContext, similar to a fetch strategy in NHibernate or an .Include() sequence in EF).
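To make the DataLoadOptions point concrete, a sketch along these lines (the DataContext and entity names here are assumptions mirroring a Customer/Order model, not any real schema):

```csharp
// Assumes an L2S DataContext with Customer, Order, and OrderLine entities.
var db = new MyDataContext();

var options = new DataLoadOptions();
options.LoadWith<Customer>(c => c.Orders);    // pull each customer's orders up front
options.LoadWith<Order>(o => o.OrderLines);   // ...and each order's line items
db.LoadOptions = options;

// The whole graph below is typically materialized from a single round trip.
var customers = db.Customers.Where(c => c.City == "London").ToList();
```

Note that LoadOptions must be set before the first query executes on that DataContext.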
When you need to retrieve aggregated data (I read 'cumulative calculations' as aggregations...correct me if I am wrong), L2S also shines. Even though O/RMs are primarily meant to support entity retrieval and update, L2S has a unique ability to retrieve arbitrary data sets when you query your conceptual model. Assuming you have a conceptual model along the lines of:
Customer
Order
OrderLine
Product
You can query for customer information AND the aggregated count of how many orders they have in the last 6 months very easily with L2S:
var anonCustWithOrderCt =
    from c in db.Customers
    where c.LastName == "Smith"
    select new
    {
        c.CustomerID,
        c.LastName,
        c.FirstName,
        OrdersInLastSixMonths = c.Orders.Count(o => o.OrderDate > DateTime.Now.AddMonths(-6))
    };
The query generated will be very compact and efficient. COUNT in SQL Server is given special consideration by the query execution engine, so the following result is optimal:
SELECT [t0].[CustomerID], [t0].[LastName], [t0].[FirstName], (
    SELECT COUNT(*)
    FROM [Orders] AS [t1]
    WHERE ([t1].[OrderDate] > @p0) AND ([t1].[CustomerID] = [t0].[CustomerID])
    ) AS [OrdersInLastSixMonths]
FROM [Customers] AS [t0]
WHERE [t0].[LastName] = 'Smith'
-- @p0: Input DateTime (Size = 0; Prec = 0; Scale = 0)
This example is extremely simple compared to the kinds of complex queries LINQ can handle. It fully supports grouping, joining, etc. The main difference is that it operates on a conceptual model rather than the physical database model...which means that if you have a complex model, or a model with odd entities, the L2S query generator may have some trouble generating the most optimal query (it may be forced to add extra joins where you wouldn't expect them, etc.).
If you want to experiment, check out LINQPad. It's a free tool that lets you connect to any SQL Server database and start running LINQ queries against it within a few minutes.
modified on Thursday, January 8, 2009 11:03 AM
|
|
|
|
|
Thanks!
That was a very good description.
Yes, you understood me correctly when I wrote about cumulative data; I did mean aggregated data. I've had a fever over 38 Celsius for almost a week, so the translator in my head is only partially working.
I believe that L2S will understand the "basic" scenarios and optimize them into a single round trip. Your order-count query was a good example, and the generated SQL is efficient in most scenarios. If the cardinality of the main query from Customer changes, the SQL Server optimizer may help by transforming the query into a different form in order to maintain performance.
What I was wondering is that I'm not aware of any relational SQL engine capable of handling networks (data containing trees in both directions simultaneously). This is a very common situation in the systems I work with.
Also, what worries me is that LINQ itself isn't very dynamic. Typically I have queries with a dynamic number of conditions (normally 1-100, but in hard cases up to 1,000 or more). If I want to support those, I have understood that I would have to create an enormous number of LINQ queries. This is one reason why I haven't really tried L2S yet (although I do use LINQ to XML and LINQ to Objects).
Your description of L2S has made me so curious that I'll have to give it a try in the near future. I'll try LINQPad too.
Thanks again,
Mika
|
|
|
|
|
Glad I understood your questions.
Regarding the next few questions, I'll start with networks. I call a 'network' a 'graph'...actually, I call both trees and networks 'object graphs'. This is actually a very good question, and it's also pretty much exactly WHY we have O/RM systems. To give you a clearer idea of what an O/RM is: Object/Relational Mapper. The concept behind the term is that the way we work with objects is not directly translatable to the way we store and manage data. Objects are usually represented in memory by a 'graph' (or network, if I understand you correctly): a set of objects related through pointers, with a variety of navigability (sometimes we can only go from parent to child; sometimes we can go from parent to child and from child to parent). Data is usually represented as sets of tuples (many tables of rows), with relationships defined externally to the tuples themselves (foreign keys defined for tables, rather than tables containing direct pointers). These differences create what we call the impedance mismatch between objects and databases, and it's this impedance mismatch that O/RMs are specifically trying to solve.
Object/relational mappers handle the process of 'bridging the gap' for you so you don't have to worry about it. That gap is where the process of building an object graph, or network, from relational data needs to happen. L2S can handle object graphs very well, and in most situations can build an entire network of objects from a single result set queried from the database. Sometimes a graph is too complex to be retrieved in a single query, or the mismatch between your conceptual model and the database schema is too great, and additional queries are required. Regardless of what each specific scenario needs, the benefit of an O/RM is that it does the gritty work of solving that problem for you. All you need to worry about, once the O/RM is in place, is querying your conceptual model (note that when you query with an O/RM, you aren't really querying the database...you're querying your model). If a parent object needs a pointer to its children, and all of its children need pointers back to the parent, the O/RM sets those pointers up for you...when you get a graph back from your O/RM, it's fully constructed and all relationships are intact.
As for LINQ being dynamic: it actually is very dynamic, but it's not obvious at first glance how. The critical thing about LINQ is what we call deferred execution. When you write a LINQ statement, that statement is actually just setting up an enumerator. The database, or whatever it is you're querying, doesn't actually get queried until you iterate over that enumerator. So when you need dynamic conditions, adding them with LINQ is actually quite easy:
var query = from o in MyObjects select o;

if (minID > Int32.MinValue)
{
    query = query.Where(o => o.ID >= minID);
}
if (maxID < Int32.MaxValue)
{
    query = query.Where(o => o.ID <= maxID);
}
if (name != null && IsEqual)
{
    query = query.Where(o => o.Name == name);
}
else if (name != null && ContainedWithin)
{
    query = query.Where(o => o.Name.Contains(name));
}

switch (orderField)
{
    case "Name": query = query.OrderBy(o => o.Name); break;
    case "ID": query = query.OrderBy(o => o.ID); break;
    default: query = query.OrderBy(o => o.OrderIndex); break;
}

foreach (var result in query)
{
    Console.WriteLine("ID: " + result.ID + ", Name: " + result.Name);
}
|
|
|
|
|
If you are only interested in quick-and-dirty presentation of information, then stored procs / views might suffice.
If you are looking for a "business object" style approach, then seriously consider an O/R mapping tool for your data layer. It means you avoid writing and maintaining a DAL, and you end up with improved flexibility as a side effect. Try ours out; it will take you all of 10 mins :P
(Of course this may be seen as a biased opinion )
|
|
|
|
|
Hello,
I am looking to develop software that could be integrated into the AOL, MSN, Skype, Gmail, and Yahoo messengers.
I was wondering:
1. On what platform are those IMs developed?
2. On what platform should I develop my own software so that it would be easy to integrate into those IMs?
Note: my software should be executable when it is installed and integrated as part of those IMs.
Thanks for your help.
|
|
|
|
|
|
Hi..
In VS 2005, the style sheet editor had properties named filter and opacity, but in VS 2008 there are no such properties.
Can anyone tell me how to use those properties in VS 2008?
|
|
|
|
|
Sorry but how is your question related to Design and Architecture? Try Visual Studio message board.
|
|
|
|
|
Let's say I have a class called Person (which encapsulates the business-layer functionality) and PersonManager (which encapsulates the data access layer). Here are code snippets, simplified for clarity:
public class Person
{
    public void Save()
    {
        PersonManager manager = new PersonManager();
        manager.Save(this);
    }
}

public class PersonManager
{
    public void Save(Person p)
    {
        if (p.IsNew)
            Insert(p);      // private method
        else if (p.IsOld)
            Update(p);      // private method
    }
}
During insertion, problems can occur, so the Insert method should deal with them, and if it cannot, it should throw to the caller. Ideally, Insert should not reveal its inner workings to the caller and should not break encapsulation, but it should let the caller know something exceptional happened. The PersonManager class will also try to deal with the exception and, if it cannot, throw it to the caller (Person) without breaking encapsulation. The Person class will follow the same rule.
My questions are:
1. Why do we append inner exceptions when they break encapsulation? For example, if a SqlException occurs and I attach it as the inner exception, the caller now knows I am dealing with a SQL database and encapsulation is broken! If I do not attach it, the caller will not have sufficient information.
2. The GUI will have a try-catch around the call to person.Save(), Person will have a try-catch around the call to PersonManager.Save(Person p), and Insert() and Update() will also need try-catch blocks. Am I right? Is this nesting too deep, or is this how things should ideally be done?
3. Every class, and every method within it, should try to deal with the exception and attempt an alternative or retry. Is there a general rule of thumb for how many times it should try before giving up? Does it depend on how critical the system is?
Am I the only one who is lost? I have read many sources trying to find these answers. Please provide any useful links or book names.
|
|
|
|
|
Hi,
here is my 2c on this interesting topic; it is pragmatic rather than academic:
- Each level needs to catch exceptions and throw its own exceptions, using semantics the caller will understand.
- However, that would throw away all the potentially useful details, as you indicated; hence the original exceptions are added as inner exceptions.
- In the end, the top level not only wants to know that something failed, it also wants to be able to indicate in what direction a solution might be found. Hence an inner exception such as "disk full" or "server down" can be very helpful, even when it breaks encapsulation.
When something goes wrong, I prefer that encapsulation gets broken, rather than breaking my head over what may possibly be wrong.
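A tiny sketch of that wrapping pattern (DataAccessException here is a hypothetical layer-specific exception type, not anything from the original post):

```csharp
try
{
    // low-level data access work...
}
catch (SqlException ex)
{
    // The caller sees only a layer-level exception; the SqlException
    // rides along as InnerException for whoever needs the details.
    throw new DataAccessException("Could not save the record.", ex);
}
```

The caller catches DataAccessException and stays decoupled from SQL Server, while the top level can still drill into InnerException when diagnosing a failure.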
|
|
|
|
|
You know what, Luc, you make a good point, and I don't want my head broken either!
Thanks for the comments--they are very helpful. I am, however, surprised that only you posted a response to such an interesting and open-ended question.
|
|
|
|
|
Before I go into a discussion of proper exception management, I need to bring up a fundamental architectural issue. There are a variety of architectural styles, and usually each team or architect will choose the style they like best. However, an extensive amount of research has gone into how "effective" an architecture is at enabling developers, testers, etc. to deliver a product. So I'm going to throw out one of the rules of effective design here: isolation.
Your current design is a very dependent design. Your Person class is tightly coupled in two ways. First, it's tightly coupled to a specific concrete implementation of PersonManager. Second, it's tightly coupled to a specific persistence mechanism. Both couplings are bad; there's no other way to put it, really. If we look at some of the most effective software development methodologies today, DDD and TDD rise to the top. Both advocate the isolation of classes from each other, and both advocate the use of dependency injection to improve decoupling (to help achieve 'loose coupling'). Your Person entity should be simple and should not be aware of the persistence object (PersonManager). This means Person can't save itself, so the save operation goes into a person 'service'. You would end up with something like this:
class PersonService
{
    Person Load(int id)
    {
        PersonManager mgr = new PersonManager();
        Person person = mgr.GetByID(id);
        return person;
    }

    void Save(Person person)
    {
        PersonManager mgr = new PersonManager();
        if (person.ID <= 0)
        {
            mgr.Insert(person);
        }
        else
        {
            mgr.Update(person);
        }
    }
}

class Person
{
    int ID { get; set; }
    string Name { get; set; }
}

class PersonManager
{
    Person GetByID(int id) { /* ... */ }
    void Update(Person person) { /* ... */ }
    Person Insert(Person person) { /* ... */ }
    void Delete(Person person) { /* ... */ }
}
With the above, your system is appropriately decoupled. Your exception management, along with the bulk of your business logic, and particularly the interaction with persistence logic, should reside primarily in your service layer. This greatly simplifies your data access code and your entity...neither one needs to handle exceptions...they just throw them. If, for some reason, the service layer cannot resolve or recover from an exception, then the exception should bubble up to your presentation layer, where you could handle it explicitly, or in most cases just let the default handler deal with it. To keep exception management simple in the presentation layer, you could follow a general rule of always wrapping all exceptions that bubble up from the service layer in a ServiceException. Ultimately, however, you will have only two areas where exceptions are handled: your service layer and your presentation layer.
As for exception resolution: if, and I stress IF, you can handle the exception automatically, then the resolution strategy should also reside in your service layer. Generally speaking, exceptions indicate some broken state that is preventing a process from completing successfully...which usually requires user intervention. If you have some alternate path for accomplishing something, that alternate path should be attempted in the service, and nowhere else. But I wouldn't stick a save operation in a loop and try it 5 times before finally bubbling an exception up...that's just wasting resources, because if it failed the first time, it failed. If there is a chance it could succeed during a series of successive tries, then it sounds like you have concurrency issues that should be solved at a lower level to prevent such exceptions from ever occurring in the first place.
|
|
|
|
|
Building on Jon's answer: we have similar layers, and while none of this is strictly necessary, here is a glimpse of what we did.
Define classes for DataAccessLayerException and BusinessLayerException and UILayerException.
Then we used the MS Patterns and Practices Exception Handling Application Block to help us handle unhandled exceptions (and by handle, I mean log, determine whether or not to rethrow, etc.) at the boundaries.
If the DAL catches an exception, it is wrapped in a DataAccessLayerException and rethrown.
The Business Layer can then catch the DataAccessLayerException and deal with it. If an Exception is thrown in the BusinessLayer then it is wrapped in a BusinessLayerException and rethrown.
The handling of different exception types can be defined in the application block configuration. In our case, the application block only provides handling for unexpected errors. We still have try..catch blocks to internally handle errors that we can code for and recover from. The application block handles everything else and logs it to the listener(s) of our choice (defined by configuration).
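For anyone unfamiliar with the Exception Handling Application Block, the boundary handling looks roughly like this (the policy name is whatever is defined in configuration; this is a sketch of the pattern, not our exact code):

```csharp
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

try
{
    dataAccessLayer.Save(person);
}
catch (Exception ex)
{
    // HandleException runs the configured handlers (logging, wrapping,
    // replacing, etc.) and returns true if the policy says to rethrow.
    bool rethrow = ExceptionPolicy.HandleException(ex, "Business Layer Policy");
    if (rethrow)
        throw;
}
```

The policy itself (which exceptions to wrap in which layer exception, where to log) lives entirely in configuration, so the boundary code stays this small.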
|
|
|
|
|
Lefty has some good suggestions, so to clarify with some code examples, here is an updated version of the snippet I posted before. I have made a couple of other important changes that will further improve your decoupling and make your product more maintainable in the long run:
interface IRepository<T>
{
    T GetByID(int id);
    T Insert(T item);
    void Update(T item);
    void Delete(T item);
}

class DataAccessException : ApplicationException
{
    public DataAccessException(string message, Exception inner)
        : base(message, inner) { }
}

class BusinessException : ApplicationException
{
    public BusinessException(string message, Exception inner)
        : base(message, inner) { }
}

class PresentationException : ApplicationException
{
}

class PersonService
{
    public PersonService(IRepository<Person> repository)
    {
        _repository = repository;
    }

    private IRepository<Person> _repository;

    Person Load(int id)
    {
        if (id <= 0) throw new ArgumentException("The Person ID must be greater than zero.");
        try
        {
            Person person = _repository.GetByID(id);
            return person;
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while accessing data.", ex);
        }
        catch (ArgumentException)
        {
            throw;   // rethrow as-is, preserving the stack trace
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }

    void Save(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            if (person.ID <= 0)
            {
                _repository.Insert(person);
            }
            else
            {
                _repository.Update(person);
            }
        }
        catch (DataAccessException ex)
        {
            throw new BusinessException("An error occurred while updating data.", ex);
        }
        catch (ArgumentException)
        {
            throw;
        }
        catch (Exception ex)
        {
            throw new BusinessException("An error occurred while processing your request.", ex);
        }
    }
}

class Person
{
    int ID { get; set; }
    string Name { get; set; }
}

class PersonRepository : IRepository<Person>
{
    Person GetByID(int id)
    {
        if (id <= 0) throw new ArgumentException("The ID must be greater than zero.");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred retrieving the requested Person: {0}", id), ex
            );
        }
    }

    void Update(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred updating an existing Person: {0}", person.ID), ex
            );
        }
    }

    Person Insert(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                "An error occurred inserting a new Person.", ex
            );
        }
    }

    void Delete(Person person)
    {
        if (person == null) throw new ArgumentNullException("person");
        try
        {
            // data access code goes here
        }
        catch (Exception ex)
        {
            throw new DataAccessException(
                String.Format("An error occurred deleting an existing Person: {0}", person.ID), ex
            );
        }
    }
}
|
|
|
|
|
I second John's advice; the Active Record[^] pattern never sat well with me.
As an added caveat, be careful that you do not end up with an Anemic Domain Model[^]. I don't usually pass on Fowler's Nibblets of WisdomTM, but he hits the nail on the head with this one.
In short, developers often just define a bunch of classes consisting of a bunch of properties so that they can feel like they're using object-oriented techniques. Don't forget that a Person should also have behaviour. (Personally, I've known plenty of misbehaving persons! : )
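As a trivial illustration of the difference (the class and method here are purely hypothetical, not from the thread's model):

```csharp
// An anemic Person is just a property bag; all logic lives elsewhere.
// A richer Person owns behaviour that operates on its own data:
public class Person
{
    public DateTime BirthDate { get; set; }

    // Computes the person's age on a given date, handling the case
    // where the birthday has not yet occurred that year.
    public int AgeOn(DateTime date)
    {
        int age = date.Year - BirthDate.Year;
        if (BirthDate.Date > date.AddYears(-age)) age--;
        return age;
    }
}
```

The point is not the age arithmetic but where it lives: in the entity, not in some external helper class.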
"we must lose precision to make significant statements about complex systems."
-deKorvin on uncertainty
|
|
|
|
|
I don't think that anyone addressed your first question, so I'll give it a shot.
In OO, encapsulation (or information hiding) concerns the way that we hide the implementation of our class behind a stable design. Thus, if I have the following class
public class Person
{
    public DateTime Birthdate { get { ... something secret ... } }
    public int Age { get { ... something secret ... } }
}
then it doesn't matter to the code that uses the Person class whether the value for Age is computed from the Birthdate every time, cached after the initial calculation in a private int, or fetched from some fancy Web service. We are hiding the internals of the class implementation which, according to OO, nobody else cares about.
Now, in terms of the System.Exception class: it has the InnerException property because, when it is set, the exception instance that you catch was caused by another exception, which gets stored in that InnerException property. You'll note that the only type information we have about the InnerException is that it is of type System.Exception, so no leakage at all. This does not break encapsulation because, in the context of the creation of the exception you've caught, another exception was the cause of it, and knowing that exception isn't bad. You don't know how the InnerException was set, how the value gets returned to you, or what's going on inside the containing exception instance.
Now, if catching code has something like the following somewhere
try
{
    ... exception thrown here ...
}
catch (Exception e)
{
    if (null != e.InnerException)
    {
        if (e.InnerException is SqlException)
        {
            ... do something SqlException-specific here ...
        }
    }
}
then I would argue that you did not handle the SqlException deeply enough in your code, or that it should have been caught before this point in a separate catch (SqlException se) block.
|
|
|
|
|
I'd like to know the process and method of Scrum. Can anyone help me by providing some documents, resources, or some kind of introduction? Thanks.
|
|
|
|
|
Well, a Google search for "scrum software development" only returns 954,000 hits, so I guess it's hard to find any info.
Bob
Ashfield Consultants Ltd
|
|
|
|
|
That's really bad news. Can you give me any information from your own point of view?
|
|
|
|
|