|
No worries. I should have been a little more clear.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
|
|
|
|
Tommy4U wrote: I don't think I will be able to afford to pay for using .NET products.
Huh?
Tommy4U wrote: can I install my POS system on any computer and download the free SQL Server 2005 Express on those computers?
Yes, you can. Don't see why not.
|
|
|
|
|
Consider the following simplified case where I store several users at once.
SqlTransaction trans = cn.BeginTransaction();
foreach (string name in nameArray)
{
    Person p = new Person(name);
    p.Insert();
}
trans.Commit();
I want Person.Insert() to:
- Use the transaction created. (I can't access the transaction object inside the insert() call)
This way, the users are not created when the BL produces an error.
- Still make it reusable in a non-transactional context (such as creating a single user)
I could do it with a using statement and a TransactionScope instead, but actually I am wondering how it was done before .NET 2.0?
How can I make the insert method available in both transactional and non-transactional contexts without violating three-tier principles?
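The TransactionScope approach mentioned above can be sketched roughly like this (a sketch only; the Person class and nameArray are assumed from the example, and Insert() is assumed to open its own connection):

```csharp
using System.Transactions;

// Sketch: TransactionScope (new in .NET 2.0) implicitly enlists any
// ADO.NET connection opened inside the scope, so Insert() needs no
// transaction parameter.
using (TransactionScope scope = new TransactionScope())
{
    foreach (string name in nameArray)   // nameArray as in the example above
    {
        Person p = new Person(name);     // Person is the hypothetical BL class
        p.Insert();                      // opens its own connection; auto-enlists
    }
    scope.Complete();                    // if this is never reached, all inserts roll back
}
```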
|
|
|
|
|
I would just overload the insert() method so that, if you pass in a transaction object, it uses it.
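A rough sketch of that overload pattern (the method bodies, the connectionString field and the SQL are illustrative assumptions, not from the original post):

```csharp
// Parameterless version: manages its own connection and transaction.
public void Insert()
{
    using (SqlConnection cn = new SqlConnection(connectionString))
    {
        cn.Open();
        using (SqlTransaction trans = cn.BeginTransaction())
        {
            Insert(trans);      // delegate to the transactional overload
            trans.Commit();
        }
    }
}

// Overload: joins a transaction supplied by the caller.
public void Insert(SqlTransaction trans)
{
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO Person (Name) VALUES (@name)", trans.Connection, trans))
    {
        cmd.Parameters.AddWithValue("@name", this.name);
        cmd.ExecuteNonQuery();
    }
}
```

The batch-insert loop then passes its transaction to the overload, while single inserts call the parameterless version; the SQL lives in one place.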
|
|
|
|
|
That is an option, but ultimately I'm looking for a way to create business logic methods that are reusable in other BL methods in a transactional context without too much hassle. Writing two versions of each method (one with transaction support and one without) seems hard to maintain.
I find it weird that only "recently" (using TransactionScope) this has become possible without repetitive code.
|
|
|
|
|
You just created a Person object inside the foreach loop after starting a transaction. Why should the person know about the transaction if you are not providing it? Joe is right below: somehow you have to make the person aware of the transaction. In other words, how is that instantiation any different than if you did it before beginning the transaction? And how is this a three-tier architecture?
|
|
|
|
|
Thank you for your reply. The sample I provided serves as an example of what I'm trying to achieve; I don't consider it "correct".
You are right that the sample is not three-tier either. I'm only looking for a way to make a method reusable in both a transactional and a non-transactional context (such as insertPerson) without writing two versions of it... if that is possible?
You might write this method just to insert a person into the db, and later on you notice that you need to reuse this method in a transaction elsewhere in your application (for example, to insert multiple persons at once). That would basically mean you need to completely rewrite this method with a transaction for this purpose only.
On the other hand, in .NET you can use a TransactionScope, which automatically attaches the active transaction to any ADO.NET database call. This way you don't have to rewrite the insertPerson method to be used in transactions.
I'm just wondering what others do in such a situation? Rewrite any method they need to reuse in a transaction with another method signature (e.g. adding a transaction argument like you said)? That just seems a bit redundant to me.
|
|
|
|
|
Here is what I would do in such a case:
The Person class will have a method called Save(). When this method is called it will call the DAL's Save method and pass itself in. The DAL will ask the person whether it should be inserted or updated (no need to create separate methods for update and insert, but you can). If it is to be inserted, the DAL should call its own private insert method, and likewise for update.
For multiple persons, I would create a PersonCollection class whose Save() calls the PersonCollectionDAL and passes itself in. The PersonCollectionDAL can either ask each person to Save(), passing a transaction to it (make this Save() overload internal so it is only accessible from the DAL layer, not the UI layer), or it can put all the persons in a DataTable and ask a DataAdapter to insert them. There is no right or wrong way; it is about software craftsmanship.
If you are worried about reusing the insert code you mentioned above, then simply make that code a private method and let the two public methods (single and transactional) both use it.
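That shared-private-method idea could look something like this (a sketch under assumed names; IsNew, connectionString and the SQL statements are hypothetical):

```csharp
// Public entry point for the single, non-transactional case.
public void Save()
{
    using (SqlConnection cn = new SqlConnection(connectionString))
    {
        cn.Open();
        SaveCore(cn, null);
    }
}

// Internal entry point used by PersonCollectionDAL inside a transaction.
internal void Save(SqlTransaction trans)
{
    SaveCore(trans.Connection, trans);
}

// One private method holds the actual data access for both cases.
private void SaveCore(SqlConnection cn, SqlTransaction trans)
{
    using (SqlCommand cmd = cn.CreateCommand())
    {
        cmd.Transaction = trans;   // null is acceptable for the non-transactional path
        cmd.CommandText = this.IsNew
            ? "INSERT INTO Person (Name) VALUES (@name)"
            : "UPDATE Person SET Name = @name WHERE Id = @id";
        // parameter population omitted in this sketch
        cmd.ExecuteNonQuery();
    }
}
```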
Obviously, this is just a suggestion and when you start your implementation you will have other obstacles to overcome.
Finally, although transactions can speed up multiple inserts, I do not like using them here: if one person fails to insert, why should the other ones roll back? Transactions are meant for steps that depend on each other and must either all pass or all fail, and that is clearly not the case here.
|
|
|
|
|
|
Hi all,
I am looking for your valuable suggestions about implementing SOA using .NET Framework 3.0/3.5 in an enterprise environment.
I am considering using MVC, EntLib, a plug-in based architecture, etc.
Thanking you in anticipation.
-muneeb
A thing of beauty is the joy forever.
|
|
|
|
|
I think you've been drinking a bit too much of the buzzword juice. In any case, this "question" is much too vague and general to generate any useful answers. There are several good books on the general topic of implementing SOA with .NET.
This one by Juval Lowy[^] is quite good.
|
|
|
|
|
I'm used to doing waterfall design and development. For my next project, we're going to use an agile methodology. Well, it's not really agile; the only parts of agile they want us to do are iterative releases and daily stand-up meetings. I estimated the project at 1,900 hours using waterfall (which we won't use, but that's what I'm used to).
Now, I want to convert that waterfall estimate into an agile estimate. Is there any rule of thumb or simple formula I can use?
I'm thinking that agile should take longer because of the iterative releases, in that every release requires a new round of testing and deployments.
Any thoughts?
|
|
|
|
|
emunews wrote: I'm thinking that agile should take longer because of the iterative releases
That's not the way it normally works. Because you do smaller, faster releases you should aim at producing better quality code up front. It's a discipline that you have to work at, but it pays dividends.
|
|
|
|
|
Yes, I understand that it purports to produce better code. But I'm not really asking about code quality; I'm asking about project estimation.
EDIT: Also, note that this is not a true agile approach. There won't be pair programming, for example. The two agile tenets they want to use are iterative releases and stand-up meetings. That's it. The business rules for the entire scope of the project are already done.
|
|
|
|
|
emunews wrote: The two agile tenets they want to use are iterative releases
Well, if you don't do the iterative part correctly (this gets bastardized into total chaos in many cases) you will wind up in a giant out-of-control mess. Therefore no estimating technique has any chance of being accurate.
led mike
|
|
|
|
|
emunews wrote: I'm asking about project estimation.
The quality of what you do upfront has a huge impact on your project estimation. Traditional waterfall has to cope with the testing phase occurring fairly late in the development process, which means that your test teams come in late, have a much larger surface area of code to test, and defect correction takes a heckuva lot longer.
Here's a hint: work out the high-level use cases of what your project needs to do, then break these down into scope areas. This will give you a much better idea of how long it's going to take based on an iterative approach, where you normally have a week of realisation, two weeks of in-depth coding and a tidy-up week. This level of scoping really does help; we made the leap and we haven't looked back.
BTW - standup meetings. Yuck. Try to introduce them to the concept of getting all the interested parties together on a regular basis. Realisation phase - a meeting to talk about the scope of the current iteration, and then a meeting later on in the week to show the high level use case along with what you've realised for it (typically this is 80% complete). This meeting would involve the developers, any architects you need, the testers and an end-user "champion". Then, at regular intervals over the development part of the iteration, have informal get-togethers where you show what's been done, and identify what's left to do. I'll guarantee that stuff will fall out of scope, but you'll carry it over into the next phase.
|
|
|
|
|
Don't you do a round of testing for each new release?
|
|
|
|
|
Yes. But the point is, you do it at the end of a release, and the new phase of development starts at the same time. Defects get rolled up into the next phase, so you don't have that time consuming round of testing at the very end. Tests are smaller, more focused, and cheaper to fix because they are caught earlier on.
|
|
|
|
|
Well, that means that time (measured as start of project to end) is quicker but time (measured in man-hours) is longer. Correct?
|
|
|
|
|
Actually, we've found that the man-hours cost is lower, because we don't have developers sitting around at the end of a project waiting to fix bugs, and the test teams aren't wasting time up front.
|
|
|
|
|
emunews wrote: every release requires a new round of testing and deployments
The sooner you find a bug, the cheaper the project becomes.
Agile projects become expensive when new requirements are brought up between iterations, because more attention is being paid to the working program. The final quality is better.
So the main issue is to keep track of changes in requirements during the iterations and make sure those changes are estimated and added to your budget.
|
|
|
|
|
I would give the same estimate for a given project regardless of methodology. When I give an estimate, I am essentially saying "I think this will take N days if done using reasonable development techniques." These do not have to be the exact development techniques used previously. Methodologies evolve, and what you're describing is the kind of incremental adjustment in methodology which is assumed to be going on all the time in any good shop.
Hopefully you will get faster over time, and the estimating model should always be updated as discrepancies are observed, but there's no immediate need that I perceive for you to adjust it right now.
Also, I do not think I have ever seen or heard anyone claiming to use the "waterfall method." It's considered a pejorative term these days, almost like saying "my coding style is spaghetti" or "our team's style is garage hacker." When you say "my estimating technique works for waterfall" you're basically saying it's an estimating model for bad techniques.
Finally, I think Agile is much better than waterfall, or (as proponents of Agile might say) it's much better than BDUF (Big Design Up Front). I don't think there's much value anymore to the style (call it waterfall, BDUF, or just mid-90s orthodoxy) in which the architect types spend weeks or months dicking around with object hierarchies, UML, etc. before coding ever starts. That time almost always ends up wasted, in my experience. In the absence of code, the architects don't have any real basis for their decisions.
Programming instructors are quite wise when they implore us to use natural language, pencil and paper, diagrams, etc., but I think many of us in the 90s went too far in this direction. Also, I think people attempted to over-formalize good technique. What emerged from this effort was a bunch of simplistic, canned methodologies that isolated "design" into its own step at the beginning of the process, performed by an elite cadre of non-programmers. Hopefully we have left, or are leaving, this era!
modified on Thursday, December 4, 2008 4:57 PM
|
|
|
|
|
Hi,
We have an application coded in C++ that runs on Windows. We also have an API that can be used by third party Unix apps. So the current architecture (in simplified form) is:
UI (in VC++) --> Functionality Dll (in C++)
Third party Unix client --> API --> Functionality Dll (C++ code recompiled as shared object in Unix)
We are now planning to redesign the UI in C#/.NET. The question before us is: how do we keep the code base of the functionality DLL common to both the Unix API and the Windows UI? If we just recompile the functionality DLL with the /clr switch and use it from .NET, will there be any performance loss for the main app (the DLL involves a lot of math calculations)?
Guys, please help. Hope I was clear. Thank you in advance.
|
|
|
|
|
|
Mika Wendelius wrote: Not sure if I understood your problem correctly
Me neither. However, I suspect he is looking for a two-word book report on War and Peace.
led mike
|
|
|
|