|
The concept of Run to Completion means, at least as it pertains to classes, that a method must run to its end before another method can be called on the same object. Adhering to this rule makes it easier to reason about a class's state.
I've often seen examples of this rule being broken or just ignored. How often does a class call one of its methods in the middle of another method? I've done this many times, and I've seen code authored by others do the same thing. As long as the method being called is read-only, I don't think RTC is being violated. But when it's a write method, I've wondered if this isn't an indication of a code smell.
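To make that concrete, here's a minimal sketch (the class and its methods are invented for illustration) of an object calling its own read-only and write methods from the middle of another method:

```java
// Hypothetical example: process() calls the object's own write methods
// (increment, reset) before it has run to completion.
class Counter {
    private int count = 0;

    void increment() { count++; }   // write method: mutates state
    int value() { return count; }   // read-only method
    void reset() { count = 0; }     // write method: mutates state

    void process(int limit) {
        for (int i = 0; i < limit; i++) {
            increment();            // mid-method call to a write method
        }
        if (value() > 10) {         // mid-method call to a read-only method
            reset();                // another mid-method write
        }
    }
}
```

The read-only call to value() can't surprise anyone, but the mid-method calls to increment() and reset() mean the object's state changes several times before process() runs to completion.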
I was looking at Sams Teach Yourself Object Oriented Programming in 21 Days. The book has a step-by-step example of designing an object-oriented blackjack game. Here's an excerpt of the code from that example:
public void play(Dealer dealer)
{
    while (!isBusted() && hit())
    {
        dealer.hit(this);
    }
}

public void addCard(Card card)
{
    hand.addCard(card);
    notifyListeners();
}

public boolean isBusted()
{
    return hand.isBusted();
}

public void hit(Player player)
{
    player.addCard(cards.dealUp());
}
There's a circular nature to this design which doesn't sit well with me. But setting that aside, what I want to get at is what the Player class is up to in the play method. That method passes its instance to the Dealer, which in turn calls the Player's addCard method, which adds a card to the player's hand. Back in the play method, the loop checks to see if the hand is busted. The method is depending on the Dealer to change its state elsewhere.
I don't know; maybe there's nothing wrong with this approach. It just seems fragmented to me. The play method is depending on state changes made via another method that it expects to be called while it (the play method) is still executing.
I'm not sure how I would approach this, but I was curious as to whether anyone else sees anything wrong with this.
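For what it's worth, one way to keep the state change inside the running method is to have the dealer simply hand back a card and let the player mutate its own hand. This is only a sketch; the class names echo the book's excerpt, but the methods and the hit-below-17 policy are my own assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch only: names echo the book's excerpt, but these classes are a
// simplification, and the hit-below-17 policy is an assumption.
class Card {
    final int value;
    Card(int value) { this.value = value; }
}

class Dealer {
    private final Deque<Card> deck = new ArrayDeque<>();
    Dealer(int... values) { for (int v : values) deck.add(new Card(v)); }

    // The dealer only hands a card back; it never reaches into the player.
    Card dealUp() { return deck.poll(); }
}

class Player {
    private int total = 0;

    boolean isBusted() { return total > 21; }
    boolean wantsHit() { return total < 17; }   // assumed "hit below 17" policy
    int total() { return total; }

    // All writes to the player's state happen here, inside the running
    // method, so play() runs to completion without external mutation.
    void play(Dealer dealer) {
        while (!isBusted() && wantsHit()) {
            Card card = dealer.dealUp();
            if (card == null) break;            // deck exhausted
            total += card.value;
        }
    }
}
```

With this shape, play() owns every write to the player's state, so the loop condition only depends on changes made inside the method itself.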
|
|
|
|
|
We have an application which runs as a Windows Service. This application uses a DLL that implements some business logic.
The service is running fine and now we have implemented some new business logic in this dll. None of the other dependencies have changed. The application remains unchanged. My question now is:
Do we have to recompile the application and reinstall the service? My personal view would be NO. Am I right?
Quick feedback would be much appreciated.
Thanks/RB
|
|
|
|
|
1. If the new DLL still exposes every function the service needs, with the same parameters, then you do not have to recompile or reinstall the service. It should run as before.
2. If you changed the DLL's code in a way that forces you to change the code of your service, it's normally enough to recompile the service. You do not have to reinstall it.
Greetings
Covean
|
|
|
|
|
|
There is no need to reinstall the Windows service there.
But regarding the business components, it depends on the type of assembly: if it is deployed in the GAC, there is no need to recompile the Windows service; otherwise, you need to recompile the service.
|
|
|
|
|
My requirement is that a remote application sends some 15,000 to 20,000 messages to be processed. Since the remote application happens to be built on Microsoft technology, it can add messages directly to MSMQ, and the messages can then be picked up for further processing using the MSMQ adapter of BizTalk.
MSMQ gives in-order delivery, but at the same time it does not require the other application to be online; if too many messages arrive, as mentioned above, they can very well sit in the queue and wait to be picked up by the MSMQ adapter. So my drivers here, along with ordered delivery, are the volume of incoming messages and processing them in time.
So I was thinking of using MSMQ: the remote app sends messages to the queue, and BizTalk picks them up via the MSMQ adapter for further processing. Please share your thoughts, or if there is a better option to handle this scenario, please do reply.
|
|
|
|
|
Please do reply with what you think about the approach, or whether there is a better approach to cater to my requirement.
|
|
|
|
|
I am in the process of designing a typical financial software package. I have googled to try to find sample designs I could benchmark against, but could not find any. Does anyone know of any links that show sample designs of financial software? They might contain data flow diagrams, UML diagrams, or sample user interfaces.
|
|
|
|
|
|
You need to define the scope of the problem... which part are you looking for?
|
|
|
|
|
Hi all.
I am writing a paper about Object Relational Mapping / Data Access Layer development technologies and tools (like Microsoft ADO/DAO, LINQ to SQL, Entity Framework, (N)Hibernate, Doctrine, Apache iBatis, and other SQL / CRUD generators and mappers). I would like to know other programmers' thoughts about the subject so my work is not based only on my own subjective opinion.
If someone here has experience with ORM / DAL tools (or have created own ORM / DAL) then I would be grateful to know about your findings. I can suggest some factors that you can take as a basis:
learning curve - how hard / easy it was to get into this technology, how long did it take; was the quality of the documentation appropriate?
introducing the technology into an existing project - how long did it take, was it hard, what problems did you encounter? Were there any "deal breaker" problems that made you choose not to use a certain ORM / DAL development tool?
starting a new project and choosing the ORM / DAL technology - how much does the ORM / DAL dictate the rules for the architecture of the new project? Were there any compromises needed just to adapt your project architecture to the ORM / DAL technology?
porting a project to another database / working environment - how did the ORM / DAL tools help, or did they create additional issues?
flexibility - if project specifications changed, how did the ORM / DAL react? Was there ever a need to throw away the existing ORM / DAL and choose another, or create your own layer from scratch?
And considering all the above, what would be your ideal ORM / DAL tool, what features should it have and what should it avoid? Do you prefer high abstraction from SQL or maybe a tool that generates modifiable SQL / CRUD that you can tweak later and also the changes do not get lost when you use the tool to regenerate something?
And other useful ideas about this topic are welcome.
Thanks.
|
|
|
|
|
anybody?
|
|
|
|
|
LINQ 2 SQL, AWESOME for MS SQL small to medium size projects. Easy to use and implement.
Entity Framework - Pain in the arse, 100%.
Lightspeed - Commercial solution but VERY awesome. More high-powered than LINQ 2 SQL, with more flexibility (+ compatible w/ other DB engines).
NHibernate - Too much XML hand editing, makes it slow to use and to modify sometimes.
|
|
|
|
|
Hi,
We are developing a .NET C# application and we also generate help files for it. At the moment we are using the Microsoft HTML Help format (.chm files). But the .chm help file format has some disadvantages, so we are investigating whether we should continue to generate help files in this format or not.
What is the best format for help files?
- .chm
- html
- ......
(I hope this is the right forum for this kind of question.)
Regards
Marcus
|
|
|
|
|
I was reading Enterprise Solution Patterns Using Microsoft .NET. I have a question from the section on the Three-Layered application, so I thought of sharing it with you to confirm my understanding.
It is said that "Make sure you eliminate the dependencies between data access components and business layer components. Either eliminate dependencies between the business layer and the presentation layer or manage the dependencies here using the Observer pattern".
My question is: how can the business layer be independent of the data access layer? For anything to be processed in the business layer, the data needs to come from the data layer. For example, if I need to calculate an employee's PF, I have to get the employee details from the database via the data access layer. So there will be a function in the business layer which calls the corresponding function in the data access layer. Is this not a dependency, or am I thinking about it wrongly? What exactly does it mean for the business layer to be independent of the data access layer?
The same goes for the business layer and presentation layer. There will be some business logic implemented in a business layer class, and in order to run it, we call that function when some event happens in the presentation layer. I did not understand what exactly is meant by "independent" here. Am I thinking about this wrong?
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
The simplest way to think of it is "I don't care how you do it... Just do it!"
You go to the grocery store, pick out a few things, and go to the register to pay for it. The cashier hits some buttons on the register, tells you how much it costs, takes your money, and gives you change.
Now, you don't know how to work the register (Or maybe you do, if you've had a job like that, but the point is that you don't have to)... You just know that somehow the cashier is adding everything up and figuring out how much you owe.
On the same note, the cashier doesn't know how the register works inside. He/she just knows how to push the buttons, scan the items, and get the right total. Maybe there's a system inside where it's notifying the stock room that product X needs to be resupplied. Maybe it's building up a database of how often each product is purchased... All the cashier knows is that they push the buttons and get the response they need.
"Layers" in software are much the same way... The point is that all each layer knows is how to get what it needs from the layer below it. The presentation layer has no idea where all those employee details are coming from... It just knows how to take that data and make it look pretty on the screen. The business layer doesn't know how the data is being stored, or what the tables and fields are called... It just knows how to say "Give me all the details for employee 12345."
So the short answer is that each layer is dependent on the other one doing its job, but not dependent on how that job is done. Like I said above, each layer looks to the one beneath it and says, "I don't care how you do it... Just do it!"
|
|
|
|
|
Assume there's a rule: nobody may have a salary higher than the CEO's. Where do you put this rule in the system?
When you try to answer this question you will see that each choice (put it in the presentation layer, business layer, or data layer) has its advantages and disadvantages. Now repeat the question for the rule "every order must have at least one line". Would you put it in the same layer as the previous rule? Why or why not?
I hope you will see that there are categories of rules: some fit better in the data layer, some better in the business layer, ... This is what MS is talking about...
Rozis
|
|
|
|
|
Apart from not knowing exactly how a lower layer does something, "removing dependency" can also mean "not knowing WHO is doing what I need done", and that is accomplished using interfaces.
So you go from not knowing how, to also not knowing which class implemented the interface and is executing whatever you need from the lower layer.
Why would you do something like this for a DAL? Two main reasons: first, so you don't depend 100% on an implementation for a specific database engine (i.e. SQL Server), but instead have an interface that could be implemented for different storage mechanisms; and second, for testing purposes! Imagine you could automate your tests by implementing a piece of your DAL (an interface) with a dummy class (either a mock or a stub) just to test your upper layers (probably the business layer).
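As a sketch of that second reason (all names here are hypothetical, and the 12% provident-fund rate is just an example figure): the business layer codes against a repository interface, and a test swaps in an in-memory stub so no database is needed:

```java
import java.util.HashMap;
import java.util.Map;

// All names here are hypothetical; the 12% provident-fund rate is illustrative.
interface EmployeeRepository {
    double getSalary(int employeeId);
}

// Business-layer class: depends only on the interface, not on any database.
class PayrollService {
    private final EmployeeRepository repo;
    PayrollService(EmployeeRepository repo) { this.repo = repo; }

    double providentFund(int employeeId) {
        return repo.getSalary(employeeId) * 0.12;   // 12% of salary
    }
}

// Dummy DAL used only for testing: no database engine involved.
class InMemoryEmployeeRepository implements EmployeeRepository {
    private final Map<Integer, Double> salaries = new HashMap<>();
    void add(int id, double salary) { salaries.put(id, salary); }
    @Override public double getSalary(int id) { return salaries.get(id); }
}
```

In production you'd register a SQL-backed implementation of the same interface; PayrollService never knows the difference.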
|
|
|
|
|
Guys, I felt this was very appropriate for us in here. We used to have less than flattering terms for this guy (i.e. squatter) but Joel Spolsky calls him the "Duct Tape Programmer".
http://www.joelonsoftware.com/items/2009/09/23.html[^]
Fear not my insanity, fear the mind it protects.
|
|
|
|
|
Discussed at some length here[^].
"WPF has many lovers. It's a veritable porn star!" - Josh Smith As Braveheart once said, "You can take our freedom but you'll never take our Hobnobs!" - Martin Hughes.
My blog | My articles | MoXAML PowerToys | Onyx
|
|
|
|
|
you have no friggin’ idea what this frigtard is talking about, but he just won’t go away
Well said by Joel. I've worked with these types of individuals in the past and can relate.
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
|
|
|
|
|
I need to design a system which will process hundreds of source files (in different formats) and convert them to one target format.
There should be two interfaces
1. Command line
2. web user interface
The command line interface is to run the transform job through batch, and the web user interface is to define the format of a source file and its mapping details to the target file, i.e. a one-time job per source file.
All the source files are fixed-width or delimited files.
What is the correct approach?
Should I create one stage table for each source file to stage the data?
Should I create a stage table at runtime when the user defines the layout of the source file (most are fixed-width mainframe files)?
Should I create only one generic table, containing around 100 columns, all of varchar type?
I am looking for the best approach to design the system. Performance is very critical for this app. There are hundreds of files, and we need to transform all of them daily within a certain time window.
Thanks in Advance
Akshay
Lucky akky
keep smiling
|
|
|
|
|
I've built many of these and not once have I used a command line interface. I use a service to do timed and repeated processing.
Generally I use a separate staging file/database for each source file/set. I find you can usually group the files into sets which have the same data structure. I also use stored procs to do the processing from the staging tables to the final data table; this may be frowned upon as it is less flexible than a full ETL tool, but I find it suits my style.
I ended up with a WinForms app that allows the user to configure a file for loading: defining the title and data rows, the delimiter or column widths, and creating a staging table with varchar fields so I can either BCP or bulk copy into the staging table. I then assign one of about 8 procedures to process the data into the final data table.
BCP in 2005/8 is more fragile than in 2000, so I use bulk copy a lot; slower but more robust.
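On the parsing side, the user-defined column widths mentioned above can drive a very small splitter. A sketch (the widths and sample layout are invented for illustration):

```java
// Sketch: split one fixed-width record into fields using user-defined
// column widths (the widths and sample layout below are invented).
class FixedWidthParser {
    private final int[] widths;

    FixedWidthParser(int[] widths) { this.widths = widths; }

    String[] parse(String line) {
        String[] fields = new String[widths.length];
        int pos = 0;
        for (int i = 0; i < widths.length; i++) {
            int end = Math.min(pos + widths[i], line.length());
            // guard against short records, then trim the padding
            fields[i] = pos < line.length() ? line.substring(pos, end).trim() : "";
            pos = end;
        }
        return fields;
    }
}
```

The parsed fields can then be bulk-copied into a staging table whose varchar columns mirror the configured layout.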
|
|
|
|
|
Create a runtime stage table depending upon the format of the target file.
|
|
|
|
|
Hello good people. I'm not sure if this should go here. I am a student who is interested in developing a search engine that indexes pages from my country. I have been researching which algorithm to use for some time now, and I have found HITS and PageRank to be the best out there. I have chosen to go with PageRank since it is more stable than the HITS algorithm (so I read).
I have found countless articles and university papers about PageRank, but my problem is that I do not understand most of the mathematical symbols that form the algorithm in these papers. Currently, I cannot understand how the Google matrix (the irreducible, stochastic matrix) is calculated; I do not seem to understand the algorithm used.
I did my reading from the articles below:
PDF 1
PDF 2
Please help me go through it; I need a basic explanation (examples would be nice) with fewer mathematical symbols.
Thanks in advance.
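For what it's worth, the algorithm is much less scary without the matrix notation. Each page starts with equal rank; on every pass, a page's new rank is a small constant (1-d)/N plus d times the rank it receives from pages linking to it, each of which contributes its current rank divided by its number of outgoing links. Here's a minimal power-iteration sketch (the three-page graph and damping factor 0.85 are just illustrative):

```java
import java.util.Arrays;

// Power-iteration sketch of PageRank; the 3-page graph and d = 0.85 are
// illustrative, not taken from any particular paper.
class PageRankDemo {

    // links[j] lists the pages that page j links to.
    static double[] pageRank(int[][] links, int n, double d, int iterations) {
        double[] pr = new double[n];
        Arrays.fill(pr, 1.0 / n);                       // start with equal rank
        for (int it = 0; it < iterations; it++) {
            double[] next = new double[n];
            Arrays.fill(next, (1 - d) / n);             // the "teleport" share
            for (int j = 0; j < n; j++) {
                if (links[j].length == 0) {
                    // dangling page: spread its rank over all pages
                    for (int i = 0; i < n; i++) next[i] += d * pr[j] / n;
                } else {
                    // page j passes rank to each page it links to
                    for (int i : links[j]) next[i] += d * pr[j] / links[j].length;
                }
            }
            pr = next;
        }
        return pr;
    }

    public static void main(String[] args) {
        // A(0) -> B, C;  B(1) -> C;  C(2) -> A
        int[][] links = { {1, 2}, {2}, {0} };
        double[] pr = pageRank(links, 3, 0.85, 50);
        System.out.printf("A=%.4f  B=%.4f  C=%.4f%n", pr[0], pr[1], pr[2]);
    }
}
```

The Google matrix in the papers is just this same update written as one matrix-vector multiplication, with the dangling-page and (1-d)/N terms folded in to make the matrix stochastic and irreducible.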
|
|
|
|