|
The Mediator pattern is a common technique in MVVM for providing cross-communication between ViewModels without requiring complex communication infrastructure. It's a great way to keep them decoupled.
|
|
|
|
|
Mediator use isn't strictly mandated to achieve the same thing.
Some alternatives:
- Pass an interface.
- Refactor discovered common code into its own grouping.
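A minimal sketch of such a mediator (the names and shape here are illustrative, not any particular MVVM framework's API): ViewModels register callbacks against a message key, and other ViewModels notify through the mediator without referencing each other directly.

```csharp
using System;
using System.Collections.Generic;

// Illustrative mediator: a static message hub keyed by string.
public static class Mediator
{
    private static readonly Dictionary<string, List<Action<object>>> subscribers
        = new Dictionary<string, List<Action<object>>>();

    // A ViewModel registers interest in a message.
    public static void Register(string message, Action<object> callback)
    {
        if (!subscribers.ContainsKey(message))
            subscribers[message] = new List<Action<object>>();
        subscribers[message].Add(callback);
    }

    // Another ViewModel broadcasts; neither side references the other.
    public static void NotifyColleagues(string message, object args)
    {
        if (subscribers.ContainsKey(message))
            foreach (var callback in subscribers[message])
                callback(args);
    }
}

// Usage sketch (hypothetical ViewModels):
//   Mediator.Register("CustomerSelected", id => detailsViewModel.Load((int)id));
//   Mediator.NotifyColleagues("CustomerSelected", 42);
```

A real implementation would also need unregistration (or weak references) so ViewModels can be garbage-collected, which is what most MVVM messenger implementations add on top of this idea.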
|
|
|
|
|
Hi
I am working on a project to process three files: two CSVs and one XML. These files are moved from a file share to a SQL Server database table using BizTalk. The XML file is transformed into the same flat-file format as the two flat files by a C# component in SSIS. Then these flat files are processed by SSIS packages. There is a lot of business logic in the SSIS transformations. The SSIS packages also do several look-ups using linked servers. All lookups and transforms are done on a row-by-row basis (which is slow). Also, any errors that occur are put in a separate database table depending on the business object that caused the error (i.e. BusObj1_error, BusObj2_error, BusObj3_error).
Basically, I was hoping someone could suggest a better architecture that would improve performance, allow scalability and flexibility, and let many developers work as a team on the same pieces of functionality.
E.g.
- Put validation rules in a database rather than hardcoding them into SSIS.
- Instead of using different error tables, use a single error table with an ErrorTypeId FK to an ErrorType table.
- Migrate all transformations from SSIS to C# so that multiple developers can work on different business logic classes at the same time.
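The single-error-table idea above might look roughly like this (table and column names are invented for illustration; adapt to the actual business objects):

```sql
-- Illustrative sketch only: one ErrorType row per business object,
-- one shared error table instead of BusObj1_error, BusObj2_error, ...
CREATE TABLE ErrorType (
    ErrorTypeId INT IDENTITY PRIMARY KEY,
    Name        VARCHAR(100) NOT NULL   -- e.g. 'BusObj1', 'BusObj2', 'BusObj3'
);

CREATE TABLE ImportError (
    ImportErrorId INT IDENTITY PRIMARY KEY,
    ErrorTypeId   INT NOT NULL REFERENCES ErrorType (ErrorTypeId),
    SourceFile    VARCHAR(260) NULL,     -- which input file the row came from
    RowData       VARCHAR(MAX) NULL,     -- the offending row, verbatim
    ErrorMessage  VARCHAR(MAX) NULL,
    LoggedAt      DATETIME NOT NULL DEFAULT GETDATE()
);
```

This keeps error reporting in one place and lets you add a new business object by inserting an ErrorType row rather than creating a new table.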
Thanks
modified on Wednesday, August 3, 2011 5:28 AM
|
|
|
|
|
First steps when something is "slow":
Determine what the goal is to make it not "slow" (are you looking for 10% better or 20000% better?)
The second step is to determine why it is "slow": specifically, which parts make it slow.
Myself, I would suspect SSIS and the linked servers.
Putting the raw data directly into the database, in work tables, and then processing it there to produce the final result would probably be much faster.
|
|
|
|
|
Thanks jschell
What would you say is the ideal architecture for what I have described?
|
|
|
|
|
What I would say is that the "ideal" architecture depends on specific details that one can only learn by studying the business requirements and needs in depth.
|
|
|
|
|
F***ing ETL, whoever put the transform in ETL should be taken out and shot.
Load the data into a staging table that exactly reflects the source data, NO transforms, no data types, make all the fields varchar(###) big enough to service your data fields.
Now you have the data in a platform that is designed to manipulate data. Write a stored procedure that transforms the data from the staging table to your target table. This gives you a fast, repeatable process that you can manage in T-SQL, not in some obscure data/row/field/column transform object in your ETL tool. If you don't have T-SQL skills then hire someone who does; they are not uncommon.
BizTalk, and every other transform tool I have ever had to use, are garbage; I can write a bulk copy and stored procedure process that will outperform them every time. The only tool that came close to performance equality has been Informatica, which will cost you $100k+ to implement.
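The staging-table-plus-stored-procedure approach might look roughly like this in T-SQL (table and column names invented for illustration; the point is that lookups become set-based joins instead of row-by-row operations):

```sql
-- Staging table mirrors the file exactly: no transforms, everything varchar.
CREATE TABLE StagingInvoice (
    CustomerCode VARCHAR(100),
    Amount       VARCHAR(100),
    InvoiceDate  VARCHAR(100)
);

-- Load the raw file first, e.g.:
--   BULK INSERT StagingInvoice FROM 'C:\feeds\invoice.csv'
--       WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- Then transform set-based in one pass.
CREATE PROCEDURE dbo.LoadInvoice
AS
BEGIN
    INSERT INTO dbo.Invoice (CustomerId, Amount, InvoiceDate)
    SELECT c.CustomerId,
           CAST(s.Amount AS DECIMAL(18, 2)),
           CAST(s.InvoiceDate AS DATE)
    FROM StagingInvoice s
    JOIN dbo.Customer c ON c.Code = s.CustomerCode;  -- lookup as a join, not per row
END
```

Validation failures can be routed to an error table with a similar set-based INSERT ... SELECT over the rows that fail the CAST or the join.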
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thanks for the response, Mycroft.
I would tend to agree with you on using SQL tables and procs.
|
|
|
|
|
Mycroft Holmes wrote: f***ing ETL, whoever put the transform in ETL should be taken out and shot.
LOL, tell us how you really feel. Not that I disagree; the tools I've seen in action make Access look like a rockstar (Tibco and something from Oracle).
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
|
|
|
|
Given the example of an Invoice, you have 2 distinct areas: a header and details.
Can someone shed some light on how I should design my classes to support this?
Here is what I do now:
1) Create a base class called InvDetails.
2) Create a factory class called InvDetailsFactory.
(The factory will be responsible for reading and updating the invoice details.)
3) Create a base class called InvHeader.
4) Create a factory class called InvHeaderFactory (same idea as above).
Should I instead create a single class, Invoice, which has 2 components: a Header and a collection of Details?
Your input is welcome and appreciated.
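For reference, the single-class option from the question could be sketched like this (all names and properties are invented for illustration): one aggregate where the header owns its detail lines.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A detail line; it only ever lives inside an Invoice.
public class InvoiceDetail
{
    public string Product { get; set; }
    public int Quantity { get; set; }
    public decimal UnitPrice { get; set; }
    public decimal LineTotal { get { return Quantity * UnitPrice; } }
}

// The aggregate: header fields plus a collection of details.
public class Invoice
{
    public int InvoiceNumber { get; set; }
    public DateTime InvoiceDate { get; set; }
    public string CustomerName { get; set; }

    private readonly List<InvoiceDetail> details = new List<InvoiceDetail>();
    public IReadOnlyList<InvoiceDetail> Details { get { return details; } }

    // Callers add lines through the aggregate, never directly to the list.
    public InvoiceDetail AddDetail(string product, int qty, decimal price)
    {
        var d = new InvoiceDetail { Product = product, Quantity = qty, UnitPrice = price };
        details.Add(d);
        return d;
    }

    public decimal Total { get { return details.Sum(d => d.LineTotal); } }
}
```

The alternative (two independent classes with their own factories) makes more sense when details are loaded and saved separately from the header; the single aggregate makes more sense when they always travel together.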
|
|
|
|
|
To be specific, an invoice is a 'report', where a report is a static presentation of existing data in a preset form.
A better starting point is an order entry application, which would have a create-customer screen and an order entry screen.
The first lets the operator enter name, address, billing address, etc. From this you have a data model object called 'customer'.
The second lets an operator pick a customer (already created) and then add order items for the customer. From this you have data model objects for 'order' and 'item' (or call it 'order item' if you prefer).
For an invoice you would then start with an order number. That allows you to get the order, and from that you get the items and the customer. You also need a 'form'. The form describes the layout of the invoice itself; for example, the name and address could be on the left with the order number on the right.
|
|
|
|
|
My bad for using the term 'Invoice'; maybe it was confusing. Let's go with the concept of an "Order Entry" screen (web page).
My question still stands: should I create a single class which has 2 components, a Header and a collection of Details?
Or should I create 2 separate classes, one that stores and manipulates the header data while the other represents the details?
Just looking for some suggestions / ideas.
|
|
|
|
|
My previous example described it specifically in terms of an order entry screen.
Apart from the orders themselves, other information on the screen would come from 'order', 'customer' and/or inventory (where the data model for that has not yet been defined).
|
|
|
|
|
I build my classes to reflect the data structure:
Customer may have a collection of Order(s).
Order may have a collection of OrderItem(s).
An Invoice will have a Customer.
An Invoice will have a collection of OrderItem(s).
Each of these is a table and also a class.
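The structure described above could be sketched as follows (names and properties are illustrative; each class maps to a table):

```csharp
using System.Collections.Generic;

// One row in OrderItem.
public class OrderItem
{
    public int OrderItemId { get; set; }
    public string Product { get; set; }
    public int Quantity { get; set; }
}

// Order has a collection of OrderItem(s).
public class Order
{
    public int OrderId { get; set; }
    public List<OrderItem> Items { get; set; }
    public Order() { Items = new List<OrderItem>(); }
}

// Customer may have a collection of Order(s).
public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    public List<Order> Orders { get; set; }
    public Customer() { Orders = new List<Order>(); }
}

// Invoice has a Customer and a collection of OrderItem(s).
public class Invoice
{
    public int InvoiceId { get; set; }
    public Customer Customer { get; set; }
    public List<OrderItem> Items { get; set; }
    public Invoice() { Items = new List<OrderItem>(); }
}
```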
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
|
Can the details exist outside of the master? If not, then it is an internal class.
I would design it as a Master with the overriding details and a list of Detail objects. I'm not convinced you'll need a specialised factory for creating these items; they have relatively simple creation interfaces. I would, however, consider how the Master will create Detail objects.
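One way to answer "how the Master will create Detail objects" is a factory method on the master together with a nested Detail class, so a detail can never exist without a master. All names here are invented for illustration:

```csharp
using System.Collections.Generic;

public class Master
{
    private readonly List<Detail> details = new List<Detail>();
    public IEnumerable<Detail> Details { get { return details; } }

    // Factory method: the only way to make a Detail.
    public Detail NewDetail(string description)
    {
        var d = new Detail(this, description);
        details.Add(d);
        return d;
    }

    // Nested class with a private constructor: cannot be
    // constructed outside Master, matching "internal class" above.
    public class Detail
    {
        public Master Owner { get; private set; }
        public string Description { get; private set; }

        private Detail(Master owner, string description)
        {
            Owner = owner;
            Description = description;
        }

        // Only the enclosing Master can reach the private constructor.
        internal static Detail Create(Master owner, string description)
        {
            return new Detail(owner, description);
        }
    }
}
```

(C# lets the enclosing class call the nested class's private constructor directly, so the `Create` helper is optional; it is shown to make the access path explicit.)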
Panic, Chaos, Destruction. My work here is done.
Drink. Get drunk. Fall over - P O'H
OK, I will win to day or my name isn't Ethel Crudacre! - DD Ethel Crudacre
I cannot live by bread alone. Bacon and ketchup are needed as well. - Trollslayer
Have a bit more patience with newbies. Of course some of them act dumb - they're often *students*, for heaven's sake - Terry Pratchett
|
|
|
|
|
You have answered my question. I'm still working on other aspects of the application, so I haven't implemented my Master / Detail business objects yet.
Thanks again.
|
|
|
|
|
Hi,
I’ve been assigned a task on a subject I don’t really know... I wondered if some of you could enlighten or help me somehow.
We use ADFS to manage our application login process. One of our customers uses CAS for the same thing and asked if we could accept his tickets in our 'architecture' to authenticate users automatically.
While reading around, I found that I could maybe use SAML to communicate between those two servers, but that's as far as I've got for now, and I can't really figure out how to do it.
I don't know if any of you have had to deal with this sort of situation before, but some help would really be appreciated!
Thanks in advance!
|
|
|
|
|
First of all, I am not sure whether this is the right forum to ask this question. If not, could you please help me in choosing the right forum.
This may be a stupid idea, but I think it would be better if I could develop this. I need some help to get started.
We store our source code in VSS on a server in the UK. In order to work locally in India, we need to connect to the VPN, then to VSS to get the latest version of the code, which is in VS.NET 2008. Opening the solution, checking files out and in, and getting the latest version from VSS takes 2.5-3 hours every day, which eats most of our working time.
Is it possible to write a Windows service which will connect automatically to VSS and download the source code to our local systems every day? Obviously each person's VSS id and password will be different, even though the path will be the same. We can provide these as input configuration.
If this is possible, please guide me how to start with this. Any articles or urls will be great.
Thanks in advance,
meeram395.
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
meeram395 wrote: We store our source code in VSS on a server in the UK. In order to work locally in India, we need to connect to the VPN, then to VSS to get the latest version of the code, which is in VS.NET 2008. Opening the solution, checking files out and in, and getting the latest version from VSS takes 2.5-3 hours every day, which eats most of our working time.
Is it possible to write a Windows service which will connect automatically to VSS and download the source code to our local systems every day? Obviously each person's VSS id and password will be different, even though the path will be the same. We can provide these as input configuration.
A Windows service usually runs under a different context than the user; does it have access to SourceSafe?
I'm using batch files to achieve something similar. We've got Tortoise, and downloading the latest code from the repository can be done over the command line:
TortoiseProc.exe /command:update /path:"c:\stuff\code\" /closeonend:1
When done, the batch file proceeds to build and execute the unit tests. It gets executed every day by the Windows Task Scheduler.
SourceSafe supports the same[^].
Bastard Programmer from Hell
|
|
|
|
|
Thank you for the reply. I was actually looking for a similar kind of functionality.
I will go through the same and come back to you.
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
A better solution would be to set up a mirror server in India which gets (and sends) a backup from the UK server every night. In this case the UK server should be the master and the India server the slave.
We used a similar principle on a previous assignment and that worked quite well. This still ensures that you have the latest sources and can use the benefits of VSS, but keeps it all structured.
|
|
|
|
|
A few weeks ago my colleague was promoted to team lead. That fact itself does not bother me, but he does try to impose a certain architectural design (SOA based) and motivates his decisions by saying it is the Microsoft way, telling us to read book X or Y, etc.
So I did read the chapters he indicated (chapter 2, e.g.) and found most of it common sense really. However, he made a sample solution based on his knowledge and creates a new project for each aspect of the application. That means that for one web service you have a project for the services (.svc files), one for the service contracts (the interfaces), one for the service implementation (.svc.cs files), one for the data contracts, etc. It ends up at 10-20 projects per application. Each business layer, DAL component, etc. should be redone for each application for independence. (The strange thing is, when I mentioned the GAC, a Microsoft way of working, the response was that that wasn't an option and didn't solve DLL hell.)
My problem lies in the fact that if Microsoft indeed supports such an idea (of dividing assemblies), why on Earth do they make it so hard to create? You start out by creating a solution with services and adding libraries as you go, but you need to actually MOVE the interfaces and implementation files to different projects. When all this is hooked up in TFS, you know that this could mess up the entire solution, and on top of that you need to reference your projects all over the place to make it work. Personally I seriously doubt this is a good way of working. If it really should be this way, why doesn't Visual Studio have this option, and why does it make it so difficult to implement?
I'm all for dividing logic and creating n-tier applications, but personally I wouldn't start dividing a service project into service contract, data contract, implementation and services assemblies.
Am I missing a point here, because this is pretty confusing.
many thanks.
V.
|
|
|
|
|
You're not the only one finding it confusing.
I don't believe they'd use 20 different projects; it'd be mad.
Having 20 projects and changing the target framework would be quite a hassle.
|
|
|
|
|
My thoughts exactly; thanks for confirming I'm not the only one who's going bananas.
V.
|
|
|
|