|
Thanks Richard. Glad you agree. We have kept the Stored Procedures fairly straightforward, so no worries on the tidy-up front. Will certainly take that on board for future development, though.
Thanks.
Julian
|
|
|
|
|
julian@giant wrote: acceptable to leave any error handling to the calling code
No. Not in anything non-trivial.
I always write database layers. That layer should handle 'sql' errors. That layer should also be constructed in such a way that it is mindful of potential system errors. For example a user client application might want to tell the user that the server 'was down' by catching appropriate SQL Exceptions and determining that. For certain errors it should log information and then tell the user to 'contact an administrator'.
This layer would also be unit tested independently of the rest of the application (regardless of what sort of app it is).
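A minimal sketch of that idea in Python, with sqlite3 standing in for whatever database driver is actually in use (the exception names, the `fetch_orders` function, and the query are all illustrative, not from the post):

```python
import logging
import sqlite3  # stand-in for any DB driver

log = logging.getLogger("dal")

class ServerUnavailableError(Exception):
    """Raised so the client UI can tell the user the server 'was down'."""

class DataLayerError(Exception):
    """Raised for errors the user cannot fix; UI says 'contact an administrator'."""

def fetch_orders(conn_factory):
    """Data-layer call: translates raw driver errors into app-level errors."""
    try:
        conn = conn_factory()
        return conn.execute("SELECT id, total FROM orders").fetchall()
    except sqlite3.OperationalError as e:
        # Connection-type failures -> "server was down" for the client UI
        raise ServerUnavailableError("database unreachable") from e
    except sqlite3.DatabaseError as e:
        # Anything else: log the details, surface a generic message
        log.exception("unexpected database error")
        raise DataLayerError("contact an administrator") from e
```

Because the layer exposes only its own exception types, it can be unit tested on its own by feeding it connection factories that succeed or fail.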
|
|
|
|
|
Thanks J. The more people comment here, the more comfortable I feel about our current approach. Albeit not perfect, I'm pretty sure we're heading in the right direction.
And yes, we're upping the Unit Tests lately, which I've banged on about loads, and 'time' is finally being properly allocated to it from the 'time' people....
Thanks
J
|
|
|
|
|
I have built a system integration between system A and system B. System A syncs each record to system B via web services whenever a new record is inserted into system A's local database. When an exception or failure takes the web service down, system A is unable to sync records to system B because the web service exposed by system B cannot be reached. Is there a best practice for handling a real-life scenario like this? The new record created in system A must not be duplicated in system B. What if I schedule a job to check for records that failed to sync to system B and, once the web service is back online, trigger a patching operation to push those records from system A to system B?
|
|
|
|
|
low1988 wrote: The new record created in system A cannot duplicate to system B
If that is a hard requirement then you cannot create the record on A unless B is available. Basically you would create it on B first and only then create it on A.
If, however, there is a way to ensure that the record created on B can be made unique, possibly with some post-processing, then you could proceed with a queuing strategy, which would involve the following:
- Determine an algorithmic approach to making the record on B unique. This might or might not include some manual post-processing intervention.
- Create a data store on A suitable for storing the data needed to create the record on B.
- Create a timer that periodically checks for queued records; if any exist, it attempts to create the record on B. If it succeeds, it removes the record from A's queue. If it fails (B is down), it waits until the next time the timer fires.
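The steps above can be sketched in Python, with sqlite3 as the local store on A and `create_on_b` standing in for the real web-service call to B (the `outbox` table and all function names here are illustrative):

```python
import sqlite3

def enqueue(conn, payload):
    """Store the data needed to create the record on B in a local outbox on A."""
    conn.execute("INSERT INTO outbox (payload) VALUES (?)", (payload,))
    conn.commit()

def drain_outbox(conn, create_on_b):
    """Timer callback: try each queued record in order; stop if B is down."""
    rows = conn.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
    for row_id, payload in rows:
        try:
            create_on_b(payload)          # the actual web-service call
        except ConnectionError:
            return                        # B is unreachable; retry on the next tick
        # Only remove the queued record once B has confirmed the create
        conn.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        conn.commit()
```

Deleting the queue entry only after a successful call is what makes the retry safe: a crash or outage mid-way leaves the record queued for the next timer tick.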
|
|
|
|
|
You can have a field in system A that records whether the record has been synced with system B or not. Something like:
SyncStatus - store the status here
or
IsSynced - true or false
The above field should be updated last. That is, once the record in system A has been synced to system B, update the field.
To take it a little further: you can always roll back the update process if the web service goes down in the middle of it, i.e. hold the sync/insert process. Have a look at Entity Framework's rollback (transaction) functionality.
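As a rough sketch (Python with sqlite3; the `records` table, column names, and `push_to_b` callback are purely illustrative), the flag-updated-last rule looks like this:

```python
import sqlite3

def sync_pending(conn, push_to_b):
    """Push unsynced rows to system B. The IsSynced-style flag is updated
    last, only after the push succeeds, so a failure mid-way simply leaves
    the row flagged as pending for the next run."""
    rows = conn.execute(
        "SELECT id, data FROM records WHERE is_synced = 0").fetchall()
    for row_id, data in rows:
        push_to_b(data)                  # may raise if the web service is down
        conn.execute(
            "UPDATE records SET is_synced = 1 WHERE id = ?", (row_id,))
        conn.commit()
```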
|
|
|
|
|
I would use whatever "high availability" solutions are available for your database server - attempting to write your own is going to be a major undertaking. (If you can't afford any downtime at all then you need to look at clustering.)
For SQL Server there is a good table summary of the available options in the middle of this "Simple Talk" article[^]
|
|
|
|
|
I thought the purpose of Entity Framework was to relieve the developer of the duty of writing queries (whether in SQL or LINQ). I expected a framework for manipulating entities without worrying about queries. I think that is the case with NHibernate, but it is not the case with Entity Framework.
|
|
|
|
|
EF is an ORM. That pretty much sums it up.
|
|
|
|
|
If that is true then you forgot one of two things when using EF:
1) you did not call 'SaveChanges()' on your DbContext.
or
2) Your EF configuration has the change tracker disabled.
There is always the possibility that something is just plain broken - like you are modifying properties which are not mapped.
|
|
|
|
|
I think the goal of EF (and any ORM) is to abstract away the details of the data storage implementation. If you are still thinking in terms of tables and SQL then you haven't abstracted enough.
|
|
|
|
|
Duncan Edwards Jones wrote: I think the goal of EF (and any ORM) is to abstract away the detail of the data storage implementation. SQL is already an abstraction.
Duncan Edwards Jones wrote: If you are still thinking in terms of tables and SQL then you haven't abstracted enough If you wrote a DAL for your SQL, add this comment on top: redundant leaky abstraction.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
But neither are enough of an abstraction if you still know how and where the objects get stored.
|
|
|
|
|
Okay, I have a need for a pretty generic directory monitor utility. It will run as a GUI and as a service; the GUI is for configuring and testing, and the service will run on the active configured items (multiple).
I am stuck on a design implementation issue. It is NOT that important to me, and that creates the problem. The typical usage of this "DirMon" is to detect that a file has been updated/saved and to force it into source control. Furthermore, we use a syntax that normally does not care about the individual file; it handles the whole directory structure. Think: management folders synced with other managers through version control. Not developers.
I face two challenges. The first is that I want this to be "better" than that, because I am so close to a VERY USEFUL end-user type tool. The second is that ReadDirectoryChangesW() and GetQueuedCompletionStatus() have three issues:
1) When overwriting an existing file, I get two notifications for that file.
2) They can (and will) become lossy if overwhelmed with changes, especially if I cannot keep up with them.
3) I cannot have a new process start while an existing one is finishing.
My quick hack was to simply add a timer between the notification and my firing the event. This is far less than a great idea. After thinking about it, I realized this is a real problem. I am probably missing a well-accepted design pattern...
So, that is my question: how would you design this last piece, so that as the various events fire, I could keep track of the various changes and make sure that they were dealt with?
[I am willing to add a flag for the user to choose whether they NEED individual or summary events, so I can know. If summary, I will use my current approach where I just keep resetting the timer until things calm down. But if they need/want individual events, what to do?]
Thanks in advance...
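One well-worn approach is to coalesce the raw notifications into a per-file "settle" queue, so the double notifications and bursts collapse into one event per file once it has been quiet for a moment. A minimal Python sketch of the idea, assuming notifications arrive as file paths from whatever OS watcher is in use (the class and parameter names here are made up for illustration):

```python
import time
import threading

class ChangeCoalescer:
    """Collects raw change notifications and emits one event per file
    once that file has been quiet for `settle` seconds, so duplicate
    notifications for a single save collapse into a single event."""

    def __init__(self, settle=2.0):
        self.settle = settle
        self.lock = threading.Lock()        # notify() may run on a watcher thread
        self.last_seen = {}                 # path -> time of latest notification

    def notify(self, path):
        """Called for every raw OS notification; just records the timestamp."""
        with self.lock:
            self.last_seen[path] = time.monotonic()

    def drain(self):
        """Called from a periodic timer: return the paths that have settled."""
        now = time.monotonic()
        ready = []
        with self.lock:
            for path, t in list(self.last_seen.items()):
                if now - t >= self.settle:
                    ready.append(path)
                    del self.last_seen[path]
        return ready
```

This keeps per-file tracking (so individual events are still possible) while retaining the timer-reset behavior already in place for the summary case.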
|
|
|
|
|
Monitoring a folder, with or without its sub-folders, is a pretty common task. Many developers have experienced the oddities of System.IO.FileSystemWatcher, and you may find some tips & tricks or articles on its multiple or missing events. Or did you write your own FileSystemWatcher? Note that too narrow a schedule for the polling task may cost too much in resources.
Kirk 10389821 wrote: add a Flag for the user to choose
Are you sure any end user - who is not a developer, you told us - is interested in that? I.e. do you expect them to understand your underlying technical solution? I doubt it. They just need to know if there are files to be synced. By the way, why don't you do that automatically?
|
|
|
|
|
Okay, I am doing this in Delphi (not .NET), so no System.IO... stuff..
Even if I end up configuring that flag for them myself, setting it up on their machine, I am good with it.
Bernhard Hiller wrote: They just need to know if there are files to be synced. By the way, why don't you do that automatically?
That is what I am building for them. It will be configured once and run as a service. Whenever they save, it will do the appropriate sync, which for different clients could be Mercurial, SVN, etc. I am NOT giving them a tool to see what has changed (that is useless; they know what changed, in general). They just forget to jump into something else and commit. In some cases, the action is triggered by reports generated to a folder.
Since 99% of the use cases don't care about specific files, I will simply ignore that case. I will let it run, and see if a Version 2.0 is needed to handle more.
I think, sometimes, I get in my own way, trying to cover EVERY possible use case...
Thanks...
|
|
|
|
|
The Win32 API you would use is FindFirstChangeNotification[^] and its associated APIs.
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
Kirk 10389821 wrote: They can (and will) become lossy if overwhelmed with changes
People are updating these files and each person has their own files - why would you lose a notification?
Kirk 10389821 wrote: that I could keep track of various changes
When I edit I like to save intermittently just in case the power goes out. How are you going to deal with that behavior especially if the file is large?
Kirk 10389821 wrote: Management Folders Sync'ed with other managers through version control.
And your comments all seem to be about putting the files into the service, but what about getting them out?
As a user it is going to annoy me to no end if my system starts 'pausing' every 5 seconds every time I stop typing for just a millisecond. And it is probably absolutely useless to have only a piece of some update added to version control.
So a timer seems like a much better solution. Every 5 minutes see if a file has changed more than 5 minutes ago but less than 10 minutes ago. Then sync. And at the same time check for updates to existing documents.
|
|
|
|
|
I wanted to know what actually goes into the business layer. Is it only properties, or is there some logic in there as well?
|
|
|
|
|
|
The business layer can be placed in two locations: 1. on the client side and 2. on the server side.
A client-side business layer does all the client-side logic before sending data to the service (think of an application that has both a client and a service), while a server-side business layer deals with logic that runs after retrieving data from the DB or before saving to the DB.
A layer is nothing but code/a project that performs certain actions. Also remember that a business layer must not contain anything UI-related. If it does, then it is not a business layer; we have to move those UI-related things somewhere else, most probably to the presentation layer.
On the other hand, if we have only one client which deals with the DB, then there will be only one BL (business layer), which holds all the logic.
Hope that helps.
Regards,
Ganesh
|
|
|
|
|
|
No problem Djay...
Regards,
Ganesh
|
|
|
|
|
The business layer defines the rules of the business. So for an accounting application you would expect to see accountancy rules being defined, such as VAT rules, stock control rules, etc. So yes, you would expect to see logic (rules) being defined in the business layer.
One of the goals of n-tier design is that these rules should be completely encapsulated so that they can be reused by other applications; e.g. your company web site may use these business rules as well as your desktop application. For this reason it is common to find business rules implemented as services, such as WCF services in an SOA architecture.
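For instance, a VAT rule encapsulated as a business-layer function might look like this Python sketch (the rates and category names are made-up examples, not real VAT rates), where the same function can back a desktop app, a web site, or a service:

```python
# Illustrative VAT rates by category - assumptions for the sketch only
VAT_RATES = {"standard": 0.20, "reduced": 0.05, "zero": 0.0}

def price_with_vat(net, category="standard"):
    """Business-layer rule: apply VAT by category, validating its inputs
    so every caller (desktop, web, service) gets the same behavior."""
    if net < 0:
        raise ValueError("net price cannot be negative")
    try:
        return round(net * (1 + VAT_RATES[category]), 2)
    except KeyError:
        raise ValueError(f"unknown VAT category: {category}")
```

Because the rule lives in one place, changing a rate or adding a category changes it for every application that consumes the layer.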
|
|
|
|
|
Hi
Not sure if I am in the correct group.
In the 1970s-80s the 4-bit TMS1000 was used in lots of games (Speak and Spell) and consumer products (microwaves etc.), including a model railroad controller called the Hornby Zero One.
I would like to upgrade the code for the controller slightly by adding some more locomotive addresses. It has 16; I need to go to, say, 64.
Apparently there is a way to download the existing code through a test port - does anyone know about this?
What I would like to do is download the code, amend it for the larger number of addresses, and then upload it onto a modern 8-bit chip.
Is this reasonably simple?! Is there someone out there who has done this?
Thanks
Charles Harris
|
|
|
|