Dutch? Try your local job-centre.
What part needs "architecting"? AFAIK, there are some simple standards one could choose from; it's not like the very first website that needed a "login" and a "database". Search for an MVC5 template, and then decide how much you want to pay me.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Hi Dutch,
As many people have said, hiring an architect will be costlier.
You could try some freelancing sites like Elance, Freelancer.com, PeoplePerHour (probably the best fit), Fiverr, and some others.
You also need to provide more information about your location and what kind of application you need.
Regards,
Ganesh
modified 30-Sep-14 1:23am.
|
Hi. A general question I wanted to ask someone. Thought here would be a good place.
For stored procedures (SPs) in a SQL Server database, would you consider it acceptable to leave any error handling to the calling code (i.e. caught in an exception handler in the .NET code), and also just check the number of records affected in the .NET code, for extra validation?
Rather than handling errors in the SPs and returning a value indicating things like the number of records affected and whether an error occurred while executing the procedure.
I'd like to keep the SPs as simple as possible and handle problems at the code end, as that seems easier to do. But I'm not sure whether we're missing a trick by leaving the error handling out of the SPs?
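A minimal sketch of that calling-code approach, using Python's stdlib sqlite3 module as a stand-in for SQL Server (the products table and the set_price helper are invented for illustration): database errors propagate to the caller's exception handler, and the caller validates the affected-row count.

```python
import sqlite3

def set_price(conn, product_id, price):
    """Run the update; let any database error propagate to the caller's
    exception handler, and return the affected-row count for validation."""
    try:
        cur = conn.execute(
            "UPDATE products SET price = ? WHERE id = ?", (price, product_id))
        conn.commit()
        return cur.rowcount              # the caller checks this, not the SQL
    except sqlite3.Error:
        conn.rollback()
        raise                            # handled by the calling code

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.execute("INSERT INTO products VALUES (1, 9.99)")
conn.commit()

assert set_price(conn, 1, 12.50) == 1    # one row affected: success
assert set_price(conn, 99, 12.50) == 0   # zero rows: the caller flags this
```

The ADO.NET equivalent would be catching SqlException around the command and checking the return value of ExecuteNonQuery (the number of rows affected).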
Hope I've made myself clear.
Thoughts?
Thanks
J
|
julian@giant wrote: Thoughts?
It means that if someone else writes an app that accesses the DB, that app will not include the error handling.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Yes, that did cross my mind. Fortunately, I don't think that's ever going to happen.
|
No webservices planned, no API, no addins? Then it still depends on how the database is going to be used.
Most of the problems can be blocked using simple constructions, and most of it will require some handling at a higher level. Prohibiting the insertion of an order-line without an existing order is easily done by defining keys and a relation. Cascading deletes are dandy if you want to delete all order-lines if an order is deleted.
That is assuming you allow access to the tables. It can be beneficial to remove that access and only allow stored procedures. In that case, you probably want them to "succeed" or "fail" as a single atomic operation, inside a transaction that is either committed or rolled back.
Have a whiteboard? Draw a large T, title the left column "advantages" and the right one "disadvantages".
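The succeed-or-fail-as-one-unit idea can be sketched like this (a Python/sqlite3 sketch; the orders and order_lines tables and the helper name are invented for illustration):

```python
import sqlite3

def create_order_with_lines(conn, customer, lines):
    """Insert an order and all of its order-lines as one atomic unit:
    either everything is committed, or everything is rolled back."""
    try:
        cur = conn.execute("INSERT INTO orders (customer) VALUES (?)", (customer,))
        order_id = cur.lastrowid
        for product, qty in lines:
            conn.execute(
                "INSERT INTO order_lines (order_id, product, qty) VALUES (?, ?, ?)",
                (order_id, product, qty))
        conn.commit()
        return order_id
    except sqlite3.Error:
        conn.rollback()                  # undoes the partial order too
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT)")
conn.execute("""CREATE TABLE order_lines (
    order_id INTEGER NOT NULL REFERENCES orders(id),
    product  TEXT    NOT NULL,
    qty      INTEGER NOT NULL CHECK (qty > 0))""")

oid = create_order_with_lines(conn, "ACME", [("widget", 2), ("gadget", 1)])

try:  # one bad line poisons the whole order
    create_order_with_lines(conn, "ACME", [("widget", 2), ("bad", -1)])
except sqlite3.IntegrityError:
    pass
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1
```

In a stored-procedure-only design, the same shape lives inside the procedure as BEGIN TRANSACTION with COMMIT or ROLLBACK.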
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
Well, yes, we do already use the tables from a web service as well as the website itself, using the same DAL DLLs, which throw any exception up to the calling service, which handles the failure accordingly. We are in control of all this and any future development.
Thinking about it, I need to double-check the few SQL transactions we are using, for their return status and how we're handling those; and I'm just writing a function to check the record count from an update, insert, or delete, so that it can be handled at code level.
Thanks Eddy for sparking some thoughts in my head...
|
An error handler in the calling code is better IMHO. That way, nobody can forget to check the return value to see whether or not the call succeeded.
However, if your SQL code is using a resource that needs to be cleaned up (a cursor, an ActiveX object, a prepared XML document, an app-lock, etc.), then it should include code to clean up after itself before it exits. Unfortunately, SQL's TRY...CATCH block doesn't include a FINALLY clause, so you'll need to duplicate the cleanup code in both blocks.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
Thanks Richard. Glad you agree. We have kept the stored procedures fairly straightforward, so no worries on the tidy-up front. Will certainly take that on board for future development though.
Thanks.
Julian
|
julian@giant wrote: acceptable to leave any error handling to the calling code
No. Not in anything non-trivial.
I always write database layers. That layer should handle SQL errors. It should also be constructed so that it is mindful of potential system errors. For example, a client application might want to tell the user that the server 'was down', by catching the appropriate SQL exceptions and determining that. For certain errors it should log information and then tell the user to 'contact an administrator'.
This layer would also be unit tested independent from the rest of the application (regardless what sort of app it is.)
|
Thanks J. The more people comment here, the more comfortable I feel about our current approach. Albeit not perfect, I'm pretty sure we're heading in the right direction.
And yes, we're upping the Unit Tests lately, which I've banged on about loads, and 'time' is finally being properly allocated to it from the 'time' people....
Thanks
J
|
I built a system integration between system A and system B. Every time a new record is inserted into system A's local database, system A syncs that record to system B via a web service. When some exception or failure disconnects the web service, system A cannot sync the record, because the web service exposed by system B cannot be reached. Is there any best practice for a real-life scenario like this? The new record created in system A cannot be duplicated to system B. What if I schedule a job to check for records that failed to sync, and once the web service is back online, trigger a patching operation to push those records from system A to system B?
|
low1988 wrote: The new record created in system A cannot duplicate to system B
If that is a hard requirement then you cannot create the record on A unless B is available. Basically you would create it on B first and only then create it on A.
If, however, there are ways to ensure that the record created on B can be made unique, post-process, then you could proceed with a queuing strategy, which would involve the following:
- Determine an algorithmic approach to making the record on B unique. This might or might not include some manual post-processing intervention.
- Create a data store on A suitable for storing the data needed to create the record on B.
- Create a timer that periodically checks for queued records; if any exist, it attempts to create the record on B. If it succeeds, it removes the record from A's queue. If it fails (B is down), it waits until the next time the timer fires.
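A hedged sketch of that queuing strategy in Python (the names SyncQueue and send_to_b are invented; a real implementation would persist the queue in a durable table on A rather than in memory):

```python
from collections import deque

class SyncQueue:
    """Store-and-forward queue on system A for records destined for system B.
    A timer would call flush() periodically."""
    def __init__(self, send):
        self.pending = deque()   # in real life: a durable table on A
        self.send = send         # callable that creates the record on B

    def enqueue(self, record):
        self.pending.append(record)

    def flush(self):
        """Try to push queued records to B; stop at the first failure
        and retry the remainder on the next timer tick."""
        while self.pending:
            record = self.pending[0]
            try:
                self.send(record)        # may raise if B is down
            except ConnectionError:
                return                   # B unreachable: keep records queued
            self.pending.popleft()       # sent: remove from A's queue

received = []
b_is_up = False

def send_to_b(record):
    """Stand-in for the web-service call to system B."""
    if not b_is_up:
        raise ConnectionError("system B is unreachable")
    received.append(record)

q = SyncQueue(send_to_b)
q.enqueue({"id": 1})
q.flush()                    # B is down: record stays queued
assert len(q.pending) == 1
b_is_up = True
q.flush()                    # B is back up: record delivered and dequeued
assert received == [{"id": 1}] and not q.pending
```

Removing the record from the head of the queue only after a successful send is what prevents loss; making the create-on-B idempotent (the uniqueness step above) is what prevents duplicates if the send succeeds but the acknowledgement is lost.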
|
You can have a field in system A that records whether the record has been synced to system B. Something like:
SyncStatus - store the status here
or
IsSynced - true or false
This field should be updated last: only once the record in system A has been synced to system B do you update it.
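A small Python/sqlite3 sketch of that flag-updated-last pattern (the records table, is_synced column, and sync_unsynced helper are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE records (
    id        INTEGER PRIMARY KEY,
    payload   TEXT,
    is_synced INTEGER NOT NULL DEFAULT 0)""")
conn.execute("INSERT INTO records (payload) VALUES ('hello')")
conn.commit()

def sync_unsynced(conn, push):
    """Push every unsynced row to system B and flip the flag only after
    the push succeeds; a crash in between leaves the row marked unsynced,
    so it is retried on the next run instead of being lost."""
    for rid, payload in conn.execute(
            "SELECT id, payload FROM records WHERE is_synced = 0").fetchall():
        push(payload)                                        # may raise
        conn.execute("UPDATE records SET is_synced = 1 WHERE id = ?", (rid,))
        conn.commit()

pushed = []
sync_unsynced(conn, pushed.append)
assert pushed == ["hello"]
assert conn.execute("SELECT is_synced FROM records").fetchone()[0] == 1
```

Running sync_unsynced again pushes nothing, because every row is already flagged; that is what makes the retry job safe to schedule repeatedly.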
Going a little further: you can always roll back the update process if the web service goes down in the middle of it, i.e. hold the sync/insert process. Check out Entity Framework's rollback functionality.
|
I would use whatever "high availability" solutions are available for your database server - attempting to write your own is going to be a major undertaking. (If you can't afford any downtime at all then you need to look at clustering.)
For SQL Server there is a good table summary of the available options in the middle of this "Simple Talk" article[^]
|
I thought the purpose of Entity Framework was to relieve the developer of the duty of writing queries (whether in SQL or LINQ). I expected to work with the framework by manipulating entities without worrying about queries. I think that is the case with NHibernate, but it is not the case with Entity Framework.
|
EF is an ORM. That pretty much sums it up.
|
If that is true, then you forgot one of two things when using EF:
1) you did not call 'SaveChanges()' on your DbContext.
or
2) Your EF configuration has the change tracker disabled.
There is always the possibility that something is just plain broken - like you are modifying properties which are not mapped.
|
I think the goal of EF (and any ORM) is to abstract away the details of the data storage implementation. If you are still thinking in terms of tables and SQL, then you haven't abstracted enough.
|
Duncan Edwards Jones wrote: I think the goal of EF (and any ORM) is to abstract away the detail of the data storage implementation.
SQL is already an abstraction.
Duncan Edwards Jones wrote: If you are still thinking in terms of tables and SQL then you haven't abstracted enough
If you wrote a DAL for your SQL, add this comment on top: redundant leaky abstraction.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
But neither are enough of an abstraction if you still know how and where the objects get stored.
|
|
Okay, I have a need for a pretty generic directory-monitor utility. It will run as a GUI and as a service: the GUI is for configuring and testing, and the service will run on the active configured items (multiple).
I am stuck on a design implementation issue. It is NOT that important to me, and that creates the problem. The typical usage of this "DirMon" is to detect that a file has been updated/saved, and to force it into source control. Furthermore, we use a syntax that normally does not care about the individual file; it handles the whole directory structure. Think: management folders synced with other managers through version control. Not developers.
I face two challenges. The first is that I want this to be "better" than that, because I am so close to a VERY USEFUL end-user tool. The second is that ReadDirectoryChanges() and GetQueuedCompletionStatus() have three issues:
1) When overwriting an existing file, I get two notifications for that file.
2) They can (and will) become lossy if overwhelmed with changes, especially if I cannot keep up with them.
3) I cannot have a new process start while an existing one is finishing.
My quick hack was to simply add a timer between the notification and me firing the event. This is far less than a great idea. After thinking about it, I realized this is a real problem. I am probably missing a well-accepted design pattern...
So, that is my question: how would you design this last piece, so that as the various events fire, I could keep track of the various changes and make sure they were dealt with?
[I am willing to add a flag for the user to choose whether they NEED individual or summary events, so I know. If summary, I will use my current approach, where I just keep resetting the timer until things calm down. But if they need/want individual events, what to do?]
Thanks in advance...
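One well-known shape for the individual-events case is per-file debouncing (event coalescing): instead of one global timer, record the time of the last notification for each file and only emit an event once that file has been quiet for a settle period. A language-agnostic sketch in Python (the class and names are invented; in Delphi this logic would hang off the existing timer, with explicit timestamps here so it is testable without real timers):

```python
class ChangeCoalescer:
    """Collect raw change notifications per file and emit an event only
    once a file has been quiet for `settle` seconds. This turns the two
    notifications from an overwrite (or any burst of saves) into one event,
    per file rather than for the whole directory."""
    def __init__(self, settle=2.0):
        self.settle = settle
        self.last_seen = {}          # path -> time of most recent notification

    def notify(self, path, now):
        """Called for every raw notification; restarts that file's quiet period."""
        self.last_seen[path] = now

    def poll(self, now):
        """Called from a timer: return the files whose quiet period has elapsed."""
        ready = [p for p, t in self.last_seen.items() if now - t >= self.settle]
        for p in ready:
            del self.last_seen[p]
        return ready

c = ChangeCoalescer(settle=2.0)
c.notify("report.doc", now=0.0)
c.notify("report.doc", now=0.1)            # second notification from the overwrite
assert c.poll(now=1.0) == []               # still inside the quiet period
assert c.poll(now=3.0) == ["report.doc"]   # one event for the whole burst
```

Because the state is keyed by path, a burst on one file does not delay events for another, which is the main weakness of the single reset-the-timer approach.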
|
Monitoring a folder, with or without its sub-folders, is a pretty common task. Many developers have experienced the oddities of System.IO.FileSystemWatcher, and you may find some tips & tricks or articles on its multiple or missing events. Or did you write your own FileSystemWatcher? Note that too narrow a schedule for the polling task may cost too much in resources.
"add a Flag for the user to choose" - are you sure any end user (who is not a developer, you told us) is interested in that? I.e. do you expect them to understand your underlying technical solution? I doubt that. They just need to know if there are files to be synced. By the way, why don't you do that automatically?
|
Okay, I am doing this in Delphi (not .NET), so no System.IO... stuff..
Even if I end up configuring that flag for them myself, setting it up on their machine, I am good with it.
Bernhard Hiller wrote: They just need to know if there are files to be synced. By the way, why don't you do that automatically?
That is what I am building for them. It will be configured once, and run as a service. Whenever they save, it will do the appropriate sync. Which, for different clients could be Mercurial, SVN, etc. I am NOT giving them a tool to see what has changed (that is useless, they know what changed, in general). They just forget to Jump into something else and commit. In some cases, the action is triggered by generated reports to a folder.
Since 99% of the use cases don't care about specific files, I will simply ignore that case. I will let it run, and see if a Version 2.0 is needed to handle more.
I think, sometimes, I get in my own way, trying to cover EVERY possible use case...
Thanks...
|
The Win32 API you would use is FindFirstChangeNotification[^] and its associated APIs.
The difficult we do right away...
...the impossible takes slightly longer.
|