|
Well, that means the elapsed time (start of project to end) is shorter, but the time measured in man-hours is longer. Correct?
|
|
|
|
|
Actually, we've found that the man-hours cost is lower, because we don't have developers sitting around at the end of a project waiting to fix bugs, and the test teams aren't wasting time up front.
|
|
|
|
|
emunews wrote: every release requires a new round of testing and deployments
The sooner you find a bug, the cheaper the project becomes.
Agile projects become expensive when new requirements are brought up between iterations. On the other hand, more attention is paid to getting the program working, so the final quality is better.
So the main issue is to keep track of changes in requirements during the iterations and make sure those changes are estimated and added to your budget.
|
|
|
|
|
I would give the same estimate for a given project regardless of methodology. When I give an estimate, I am essentially saying "I think this will take N days if done using reasonable development techniques." These do not have to be the exact development techniques used previously. Methodologies evolve, and what you're describing is the kind of incremental adjustment in methodology which is assumed to be going on all the time in any good shop.
Hopefully you will get faster over time, and the estimating model should always be updated as discrepancies are observed, but there's no immediate need that I perceive for you to adjust it right now.
Also, I do not think I have ever seen or heard anyone claiming to use the "waterfall method." It's considered a pejorative term these days, almost like saying "my coding style is spaghetti" or "our team's style is garage hacker." When you say "my estimating technique works for waterfall" you're basically saying it's an estimating model for bad techniques.
Finally, I think Agile is much better than waterfall, or (as proponents of Agile might say) it's much better than BDUF (Big Design Up Front). I don't think there's much value anymore to the style (call it waterfall, BDUF, or just mid-90s orthodoxy) in which the architect types spend weeks or months dicking around with object hierarchies, UML, etc. before coding ever starts. That time almost always ends up wasted, in my experience. In the absence of code, the architects don't have any real basis for their decisions.
Programming instructors are quite wise when they implore us to use natural language, pencil and paper, diagrams, etc., but I think many of us in the 90s went too far in this direction. Also, I think people attempted to over-formalize good technique. What emerged from this effort was a bunch of simplistic, canned methodologies that isolated "design" into its own step at the beginning of the process, performed by an elite cadre of non-programmers. Hopefully we have left, or are leaving, this era!
modified on Thursday, December 4, 2008 4:57 PM
|
|
|
|
|
Hi,
We have an application coded in C++ that runs on Windows. We also have an API that can be used by third party Unix apps. So the current architecture (in simplified form) is:
UI (in VC++) --> Functionality Dll (in C++)
Third party Unix client --> API --> Functionality Dll (C++ code recompiled as shared object in Unix)
We are now planning to redesign the UI in C#.NET. The question before us is: how do we keep the code base (of the Functionality Dll) common to both the Unix API and the Windows UI? If we just recompile the func dll with the /clr switch and use it in .NET, will there be any loss of performance for the main app (the func dll involves a lot of math calculations)?
Guys, please help. Hope I was clear. Thank you in advance.
|
|
|
|
|
|
Mika Wendelius wrote: Not sure if I understood your problem correctly
Me either. However, I suspect he is looking for a two word book report on War and Peace.
led mike
|
|
|
|
|
That's an excellent interpretation; why didn't I figure that out?
The need to optimize rises from a bad design.
My articles[ ^]
|
|
|
|
|
Sorry for confusing you guys.
As you must have guessed by now, I'm new to the interoperability stuff. The questions on my mind are: does compiling existing C++ code with the /clr switch automatically emit MSIL for all the unmanaged code written? Will we get to keep the existing C++ code the same across Unix and .NET (Windows), just the way it is now (only needing recompilation)? If yes, is there any performance loss?
I know there is a chance that I am still not clear, but at least I added one more question to what I already asked. Perhaps you can get a clue about where I am leading/misleading myself.
|
|
|
|
|
Btw, Mika, Mono seems a good option. But the API + func dll we have run on IBM AIX, SCO UnixWare, HP HP-UX, and Sun Solaris apart from Linux. I don't find any mention of these on the Mono website.
|
|
|
|
|
|
|
I have a question about the Builder pattern. Why do we need the Product class? The Director constructs the components of the product, and when I want to produce a product I implement the IBuilder interface, write the behavior of each component, and pass the concrete Builder to the Director, without needing the Product. I can also produce different representations this way. Please explain the benefit of the Product class, with a real-world example if you can.
Discover Other ....
http://www.islamHouse.com
|
|
|
|
|
The Builder pattern consists of 4 parts:
1. Builder - an abstract class defining the building steps
2. ConcreteBuilder - a concrete implementation of the Builder
3. Director - the site where construction occurs, by calling the ConcreteBuilder
4. Product - the thing being built
public class Demo
{
    public static void main(String[] args)
    {
        Builder b = new ConcreteBuilder1();
        Director d = new Director();
        // We tell the director to construct a product using b as the builder;
        // the director will call the appropriate methods on b. We do not need
        // to know how the director does this; we assume the director knows.
        d.construct(b);
        Product aProduct = b.GetProduct(); // Now we ask b for the built product
        b = new ConcreteBuilder2();
        d = new Director();
        d.construct(b);
        Product anotherProduct = b.GetProduct();
    }
}
Therefore, you need the product class because after all you are after the built product at the end.
It is like going to a construction site and taking a builder with you. You ask the person in charge at the construction site (Director) to construct a house for you using the builder you introduced to the person in charge. The person in charge should know the sequence and what parts are needed to build a house but does not know how to build it. He simply asks the builder to build the parts. The builder now has a complete house. To see the completed house, you ask the builder for the house. I personally think we should ask the Director for the finished product as well and the director should ask the appropriate builder, but for some reason that is not the case.
|
|
|
|
|
Ok so I am looking into the 3.5 workflow services and developing support for some application form workflows. So far I have only come across fairly simplistic tutorials, the best being on channel9. I can understand that; there seems to be an astonishing level of complexity involved in putting one of these together.
My intentions are to have ASPX forms deployed via sharepoint services. The forms are to be served by a workflow service and data stored on SQL Server.
Basically a form to be filled, various levels of sign off with a reject or cancel option where reject allows the originator to resubmit. There may be substantial delays during the workflow. From my investigations so far it seems a state machine workflow will be the best solution but a sequential workflow may meet most requirements.
Issues are
Which workflow to use?
It seems all interaction needs to go through a send/receive event and therefore serialised datasets will be passed through the service.
Should the supporting data for the form (tables for dropdowns) also pass through the workflow service, or should I separate the static data from the workflow data?
So far the tutorials seem to assume the data is discarded once the workflow is completed (it is stored as xml during the workflow), but I would like to retain the information for analysis. Should the xml data be parsed out to tables etc. for easier query access?
Am I missing anything from the following list?
Persistence data store - SQL Server
Form information and static data - SQL Server
Logic layer to interact with the form information database (Program.cs)
Contract/Interface per application form
Workflow Service
Client (ASPX)
Also if there is a tutorial/article out there related to this type of workflow I would appreciate a link.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
A common task we always have to accomplish is allowing users to edit objects from the UI. For example, let's say I have an Employee class with the regular properties. The UI will have textboxes and other controls to display the properties. A user can select an employee by name from a listbox, treeview, combobox, or whatever (not important to this question), and the form will display the employee. These are the different approaches I have been taking to display the employee, and I was wondering which is the better approach, or whether you can recommend one:
1. Create a property within the employeeForm called Employee and when it is set the form will call its private ReLoad() method and display the employee.
2. Create a property as mentioned in 1, but do not ReLoad in the setter, instead make the ReLoad method public and clients should call the ReLoad method after setting the employee.
3. Forget the property altogether and just have a method called Load(Employee emp). This is basically a method which takes employee as a parameter and then displays it.
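For concreteness, here is a rough sketch of approach 1 (in Java, with made-up class and member names; in the real form the field would be a TextBox):

```java
class Employee {
    final String name;
    Employee(String name) { this.name = name; }
}

class EmployeeForm {
    private Employee employee;
    String nameText; // stands in for a TextBox's Text property

    // Approach 1: setting the property triggers the reload, so the form
    // can never hold an employee it is not currently displaying.
    void setEmployee(Employee e) {
        employee = e;
        reload();
    }

    private void reload() {
        nameText = employee.name;
    }
}
```

The client then only ever assigns the employee; there is no second call to remember.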
What do you think?
|
|
|
|
|
|
I am very familiar with MVC and use it all the time.
|
|
|
|
|
CodingYoshi wrote: I am very familiar with MVC and use it all the time.
Really? How are any of the options you posted in your first post part of the MVC design? I don't see it.
led mike
|
|
|
|
|
led mike wrote: How are any of the options you posted in your first post, part of the MVC design?
I never said my options had anything to do with the MVC pattern. I use MVC, but in this case my question is not about MVC. My question is closely related to the Observer pattern. You have an object being observed by the form, which subscribes to its events. When the user selects a different object from a treeview (or whatever), that object becomes the new one the form is observing, and the form should display its data. But the form can only do so once the object fires the event, and the object only fires the event when its state changes, so the form has to wait until the user changes the object's state. So now we are stuck, because this is what has been happening so far:
First an object was selected from the treeview. The object is sent to the form. The form subscribes to its events. The events are not fired so how do we display it?
The solution I came up with is to trigger the event through a public method--just like we can trigger events in .NET by calling OnPaint. Now the form can display the data.
So I think the answer to my question is to set the property and send a reference of the object to the form. Afterwards, force the object's event by triggering it through a public method. But I am still not sure if this is a good solution.
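A minimal sketch of that workaround (in Java, with hypothetical names): the observed object exposes a public trigger method, analogous to calling OnPaint in .NET, so a newly subscribed form can force an initial notification without waiting for a real state change.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical observed object with a manually triggerable change event.
class ObservedEmployee {
    interface StateListener { void stateChanged(String newState); }

    private final List<StateListener> listeners = new ArrayList<>();
    private String state;

    ObservedEmployee(String state) { this.state = state; }

    void subscribe(StateListener l) { listeners.add(l); }

    void setState(String s) {
        state = s;
        raiseStateChanged(); // normal path: event fires on a real change
    }

    // Public trigger: lets a new subscriber force an initial notification.
    void raiseStateChanged() {
        for (StateListener l : listeners) l.stateChanged(state);
    }
}
```

The form subscribes and then immediately calls raiseStateChanged() to get its initial display.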
|
|
|
|
|
hi,
I've finished all the base code for my app, and tested it and everything; now I need to make a UI.
It's not going to be overly complex, but it could get complex later on, so I want to design it well.
I want a proper way to enable/disable UI elements while still being able to change the real control later. So far I've come up with:
interface IMainForm
{
    IUIController GetUIController(string name);
}
interface IUIController
{
    bool Enabled { get; set; }
}
The default implementation would be:
class DefUIController : IUIController
{
    private readonly Control _control;
    public DefUIController(Control cntrl) { _control = cntrl; }
    public bool Enabled
    {
        get { return _control.Enabled; }
        set { _control.Enabled = value; }
    }
}
And then in the main form I could search for the control and create a new
UIController and return it, like this:
IUIController GetUIController(string name)
{
    foreach (Control crt in this.Controls)
    {
        if (crt.Name == name)
        {
            return new DefUIController(crt);
        }
    }
    return null;
}
I would need to change the state of a control from a sub UserControl which has a reference via a property to IMainForm.
Like, if the selected Customer in the CustomerGridView UserControl doesn't have a bill yet, _within_ the UserControl I could do this:
IUIController ui = _mainform.GetUIController("mnuPrintBill");
ui.Enabled = false;
Is this design good enough, or is there an established method out there? I don't need something complex like CAB; it's not such a big app.
Thanks so much.
Gideon
modified on Thursday, November 13, 2008 12:07 AM
|
|
|
|
|
Gideon - when I wanted automatic control of UI elements, I knocked up this[^] sample. It's a trivial example, but easily extendable into something more powerful (we did).
|
|
|
|
|
Hello,
I am not sure if it is a good suggestion or not, but I have done it myself: you can use the Command pattern for your application. You can use databinding to bind several UI controls to one ICommand object; changing the Enabled/Visible property of the ICommand will then automatically enable/disable or hide/show them all, by the magic of databinding.
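Something like the following (sketched in Java with made-up names; the WinForms version would bind each control's Enabled property to the command via databinding, as described above): every bound control observes one command object, so flipping the command flips them all.

```java
import java.util.ArrayList;
import java.util.List;

// A command object whose enabled state is observed by any number of
// UI controls; toggling the command toggles them all at once.
class UiCommand {
    interface EnabledListener { void enabledChanged(boolean enabled); }

    private final List<EnabledListener> bindings = new ArrayList<>();
    private boolean enabled = true;

    void bind(EnabledListener control) {
        bindings.add(control);
        control.enabledChanged(enabled); // push current state on bind
    }

    void setEnabled(boolean value) {
        enabled = value;
        for (EnabledListener b : bindings) b.enabledChanged(value);
    }
}
```

A menu item and a toolbar button would both bind to the same command, so one setEnabled(false) disables both.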
|
|
|
|
|
hi,
Thank you guys, but unfortunately the UI needs to be disabled based on more complicated conditions, like user permissions, or whether it's a customer's last day of stay, stuff like that.
I will have a customer grid, so if a certain customer is on his last day, in the user control I would do this:
if (selectedReservation.EndDate == DateTime.Today)
{
} else {
}
I can't put all possibilities into an Enum, and I think databinding would be a little convoluted since some situations are more complex.
Is my design stupid, in that I'm looping through the controls?
Thanks so much.
|
|
|
|
|
giddy_guitarist wrote: Is my design stupid? that I'm looping through the controls?
I don't think there is much else you can do, really. It just ties the code in your UI really tightly to your business rules.
You may want to take a hybrid approach and do both: look at what can be done via a form-based state machine, then augment that with some business-rule-specific code.
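One way to sketch that hybrid (in Java, with hypothetical state and control names): a table-driven form state machine handles the bulk enable/disable, and the business-rule checks only decide which state the form is in.

```java
import java.util.EnumMap;
import java.util.Map;

// Hypothetical form states; a rule like "EndDate == today" would
// select LAST_DAY, and the table below does the rest.
enum FormState { BROWSING, EDITING, LAST_DAY }

class FormStateMachine {
    private final Map<FormState, String[]> enabledControls =
            new EnumMap<>(FormState.class);

    // Declare once which controls are enabled in each state.
    void define(FormState state, String... controlNames) {
        enabledControls.put(state, controlNames);
    }

    // The form asks for this list on each state change and enables
    // exactly these controls, disabling the rest.
    String[] controlsFor(FormState state) {
        return enabledControls.getOrDefault(state, new String[0]);
    }
}
```

The per-customer logic then shrinks to picking a state, instead of toggling individual controls all over the UserControl.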
|
|
|
|